The price of gold surged above US$4,100 (A$6,300) an ounce on Wednesday for the first time, taking this year’s extraordinary rally to more than 50%.
The upswing has been much faster than analysts predicted, and brings total gains to nearly 100% since the current run began in early 2024.
The soaring price of gold has captured investors’ hearts and wallets and resulted in long lines of people forming outside gold dealers in Sydney to get their hands on the precious metal.
What explains the soaring price of gold?
A number of reasons have been suggested to explain the current record run for gold. These include greater economic uncertainties from ballooning government debt levels and the current US government shutdown.
There are also growing worries about the independence of the US Federal Reserve. If political interference pushes down US interest rates, that could see a resurgence in inflation. Gold is traditionally seen as a hedge against inflation.
But these factors are unlikely to be the main reasons behind the meteoric rise in gold prices.
For starters, the price of gold has been on a sustained upward trajectory for the past few years. That’s well before any of those factors emerged as an issue.
The more likely explanation for the current gold price rally is growing demand from gold exchange-traded funds (ETFs).
These funds track the price of gold, or of other assets such as stocks or bonds, and are traded on the stock exchange. This makes assets such as commodities much more accessible to investors.
Now that gold ETFs are widely available, gold can be traded like any other financial asset. This appears to be changing investors’ view of gold’s traditional role as a safe-haven asset in times of political or financial turmoil, when other assets such as stocks are more risky.
In addition to retail investor demand, some emerging market economies – notably China and Russia – are switching their official reserve assets out of currencies such as the US dollar and into gold.
According to the International Monetary Fund, central bank holdings of physical gold in emerging markets have risen 161% since 2006, to around 10,300 tonnes.
To put this into perspective, emerging market gold holdings grew by only 50% over the 50 years to 2005.
Research suggests the reason for the switch into gold by emerging market economies is the increasing use of financial sanctions by the US and other governments that represent the major reserve currencies (the US dollar, euro, Japanese yen, and British pound).
Indeed, Russia became a net buyer of gold in 2006 and accelerated its gold purchases following its annexation of Crimea in 2014. It now has one of the largest stockpiles in the world.
Meanwhile, China has been selling down its holdings of US government bonds and switching to buying gold in a process referred to as “de-dollarisation”. It wants to reduce its dependency on the US currency.
Further de-dollarisation efforts by emerging market economies are expected to continue. Many of these economies now view the major Western currencies as carrying unwanted risk of financial sanctions. This is not the case with gold. This could mean financial sanctions become a less effective policy tool in the future.
Could gold have further to run?
Ongoing demand from Russia and China, and investor demand for gold ETFs, means the gold price could rally further. Both factors represent sustained increases in demand, in addition to existing demand for jewellery and electronics.
Further price rises will likely fuel increased ETF inflows via the “fear of missing out” effect.
The World Gold Council last week reported record monthly inflows in September. For the September quarter as a whole, ETF inflows topped US$26 billion and for the nine months to September, fund inflows totalled US$64 billion.
In contrast, emerging market central bank demand for gold is less affected by price and more driven by geopolitical factors, which suggests this source of demand will persist regardless of price.
A 2 million-year-old tooth of an early human ancestor. Fiorenza and Joannes-Boyau
When we think of lead poisoning, most of us imagine modern human-made pollution, paint, old pipes, or exhaust fumes.
But our new study, published today in Science Advances, reveals something far more surprising: our ancestors were exposed to lead for millions of years, and it may have helped shape the evolution of the human brain.
This discovery reveals the toxic substance we battle today has been intertwined with the human evolution story from its very beginning.
It reshapes our understanding of both past and present, tracing a continuous thread between ancient environments, genetic adaptation, and the unfolding evolution of human intelligence.
A poison older than humanity itself
Lead is a powerful neurotoxin that disrupts the growth and function of both brain and body. There is no safe level of lead exposure, and even the smallest traces can impair memory, learning and behaviour, especially in children. That’s why eliminating lead from petrol, paint and plumbing is one of the most important public health initiatives.
Yet while analysing ancient teeth at Southern Cross University, we uncovered something wholly unexpected: clear traces of lead sealed within the fossils of early humans and other ancestral species.
These specimens, recovered from Africa, Asia and Europe, were up to two million years old.
Using lasers finer than a strand of hair, we scanned each tooth layer by layer – much like reading the growth rings of a tree. Each band recorded a brief chapter of the individual’s life. When lead entered the body, it left a vivid chemical signature.
These signatures revealed that exposure was not rare or accidental; it occurred repeatedly over time.
Where did this lead come from?
Our findings show that early humans were never shielded from lead by the natural world. On the contrary, it was part of their world too.
The lead we found wasn’t from mining or smelting – those activities belong to relatively recent human history.
Instead, it likely came from natural sources such as volcanic dust, mineral-rich soils, and groundwater flowing through lead-bearing rocks in caves. During times of drought or food shortage, early humans might have dug for water or eaten plants and roots that absorbed lead from the soil.
Every fossil tooth we study is a record of survival. A small diary of the early life of the individual, written in minerals instead of words. These ancient traces tell us that even as our ancestors struggled to find food, shelter and community, they were also navigating a world filled with unseen dangers.
From fossil teeth to living brain cells
To understand how this ancient exposure might have affected brain development, we teamed up with geneticists and neuroscientists, and used stem cells to grow tiny versions of human brain tissue, called brain organoids. These small collections of cells have many of the features of developing human brain tissue.
Brain organoids grown with archaic, Neanderthal-like genes. Alysson Muotri
We gave some of these organoids a modern human version of a gene called NOVA1, and others an archaic, extinct version of the gene similar to what Neanderthals and Denisovans carried. NOVA1 is a gene that orchestrates early neurodevelopment. It also initiates the response of brain cells to lead contaminants.
Then, we exposed both sets of organoids to very small, realistic amounts of lead – what ancient humans might have encountered naturally.
The difference was striking. The organoids with the ancient gene showed clear signs of stress. Neural connections didn’t form as efficiently, and key pathways linked to communication and social behaviour were disrupted. The modern-gene organoids, however, were far more resilient.
It seems that somewhere along the evolutionary path, our species may have developed a better built-in protection against the damaging effects of lead.
A story of struggle
The environment – complete with lead exposure – pushed modern human populations to adapt. Individuals with genetic variations that help them resist a threat are more likely to survive and pass those traits to future generations.
In this way, lead exposure may have been one of the many unseen forces that sculpted the human story. By favouring genes that strengthened our brains against environmental stress, it could have subtly shaped the way our neural networks developed, influencing everything from cognition to the early roots of speech and social connection.
This doesn’t change the fact that lead is a toxic chemical. It remains one of the most damaging substances to our brains.
But evolution often works through struggle – even negative experiences can leave lasting, sometimes beneficial marks on our species.
New context for a modern problem
Understanding our long relationship with lead gives new context to a very modern problem. Despite decades of bans and regulations, lead poisoning remains a global health issue. The most recent estimates from UNICEF show one in three children worldwide still have blood lead levels high enough to cause harm.
Our discovery shows human biology evolved in a world full of chemical challenges. What changed is not the presence of toxic substances, but the intensity of our exposure.
When we look at the past through the lens of science, we don’t just uncover old bones, we uncover ourselves.
In the industrial age, we’ve massively amplified what used to be short and infrequent natural exposure. By studying how our ancestors’ bodies and genes responded to environmental stress, we can learn how to build a healthier, more resilient future.
Renaud Joannes-Boyau receives funding from the Australian Research Council.
Manish Arora receives funding from US National Institutes of Health. He is the founder of Linus Biotechnology, a start-up company that develops biomarkers for various health disorders.
Alysson R. Muotri does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.
As your youth fades further into the past, you may start to fear growing older.
But research my colleague and I have recently published in the journal Intelligence shows there’s also very good reason to be excited: for many of us, overall psychological functioning actually peaks between ages 55 and 60.
And knowing this highlights why people in this age range may be at their best for complex problem-solving and leadership in the workforce.
Different types of peaks
There’s plenty of research showing humans reach their physical peak in their mid-twenties to early thirties.
A large body of research also shows that people’s raw intellectual abilities – that is, their capacity to reason, remember and process information quickly – typically start to decline from the mid-twenties onwards.
This pattern is reflected in the real world. Athletes tend to reach their career peak before 30. Mathematicians often make their most significant contributions by their mid-thirties. Chess champions are rarely at the top of their game after 40.
Yet when we look beyond raw processing power, a different picture emerges.
From reasoning to emotional stability
In our study, we focused on well-established psychological traits beyond reasoning ability that can be measured accurately, represent enduring characteristics rather than temporary states, have well-documented age trajectories, and are known to predict real-world performance.
Our search identified 16 psychological dimensions that met these criteria.
These included core cognitive abilities such as reasoning, memory span, processing speed, knowledge and emotional intelligence. They also included the so-called “big five” personality traits – extraversion, emotional stability, conscientiousness, openness to experience, and agreeableness.
We compiled existing large-scale studies examining the 16 dimensions we identified. By standardising these studies to a common scale, we were able to make direct comparisons and map how each trait evolves across the lifespan.
Peaking later in life
Several of the traits we measured reach their peak much later in life. For example, conscientiousness peaked around age 65. Emotional stability peaked around age 75.
Less commonly discussed dimensions, such as moral reasoning, also appear to peak in older adulthood. And the capacity to resist cognitive biases – mental shortcuts that can lead us to make irrational or less accurate decisions – may continue improving well into the 70s and even 80s.
When we combined the age-related trajectories of all 16 dimensions into a theoretically and empirically informed weighted index, a striking pattern emerged.
Overall mental functioning peaked between ages 55 and 60, before beginning to decline from around 65. That decline became more pronounced after age 75, suggesting that later-life reductions in functioning can accelerate once they begin.
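To make the method concrete, here is a minimal sketch of this kind of calculation: each trait’s age trajectory is standardised to a common scale (z-scores), then combined into a weighted average whose peak age can be read off. The trajectories and weights below are invented for illustration – they are not the study’s data.

```python
# Minimal sketch: standardise each trait's age trajectory to z-scores,
# combine them into a weighted index, and read off the peak age.
# The trajectories and weights are invented placeholders, not the study's data.
import numpy as np

ages = np.arange(20, 81)

# Toy trajectories (arbitrary units) for three of the 16 dimensions.
trajectories = {
    "processing_speed": -0.03 * (ages - 25.0) ** 2,     # peaks mid-twenties
    "knowledge": -0.02 * (ages - 60.0) ** 2,            # peaks around 60
    "emotional_stability": -0.01 * (ages - 75.0) ** 2,  # peaks around 75
}
weights = {"processing_speed": 0.3, "knowledge": 0.4, "emotional_stability": 0.3}

# Standardise so traits share a common scale, then form the weighted index.
index = sum(
    weights[name] * (traj - traj.mean()) / traj.std()
    for name, traj in trajectories.items()
)

print("Weighted index peaks at age", ages[np.argmax(index)])
```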
Getting rid of age-based assumptions
Our findings may help explain why many of the most demanding leadership roles in business, politics, and public life are often held by people in their fifties and early sixties. So while several abilities decline with age, they’re balanced by growth in other important traits. Combined, these strengths support better judgement and more measured decision-making – qualities that are crucial at the top.
Despite our findings, older workers face greater challenges re-entering the workforce after job losses. To some degree, structural factors may shape hiring decisions. For example, employers may see hiring someone in their mid-fifties as a short-term investment if retirement at 60 is likely.
In other cases, some roles have mandatory retirement ages. For example, the International Civil Aviation Organisation sets a global retirement age of 65 for international airline pilots. Many countries also require air traffic controllers to retire between 56 and 60. Because these jobs demand high levels of memory and attention, such age limits are often considered justifiable.
However, people’s experiences vary.
Research has found that while some adults show declines in reasoning speed and memory, others maintain these abilities well into later life.
Age alone, then, doesn’t determine overall cognitive functioning. So evaluations and assessments should focus on individuals’ actual abilities and traits rather than age-based assumptions.
A peak, not a countdown
Taken together, these findings highlight the need for more age-inclusive hiring and retention practices, recognising that many people bring valuable strengths to their work in midlife.
Charles Darwin published On the Origin of Species at 50. Ludwig van Beethoven, at 53 and profoundly deaf, premiered his Ninth Symphony. In more recent times, Lisa Su, now 55, led computer company Advanced Micro Devices through one of the most dramatic technical turnarounds in the industry.
History is full of people who reached their greatest breakthroughs well past what society often labels as “peak age”. Perhaps it’s time we stopped treating midlife as a countdown and started recognising it as a peak.
Gilles E. Gignac does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.
You don’t need a gym membership, dumbbells, or expensive equipment to get stronger.
Since the beginning of time, we’ve had access to the one piece of equipment that is essential for strength training – our own bodies.
Strength training without the use of external forces and equipment is called “bodyweight training”.
From push-ups and squats to planks and chin-ups, bodyweight training has become one of the most popular ways to exercise because it can be done anywhere – and it’s free.
So, what is it, why does it work and how do you get started?
Bodyweight training can also be done with equipment: calisthenics is a style of bodyweight training that uses bars, rings and outdoor gyms.
What are the main forms?
Types of bodyweight training include:
calisthenics: often circuit-based (one exercise after another with minimal rest), dynamic and whole-body focused. Calisthenics is safe and effective for improving functional strength, power and speed, especially for older adults
yoga: more static or flowing poses with an emphasis on flexibility and balance. Yoga is typically safe and effective for managing and preventing musculoskeletal injuries and supporting mental health
Tai Chi: slower, more controlled movements, often with an emphasis on balance, posture and mindful movement
suspension training: using straps or rings so your body can be supported in different positions while using gravity and your own bodyweight for resistance. This type of training is suitable for older adults through to competitive athletes
resistance bands: although not strictly bodyweight only, resistance bands are a portable, low-cost alternative to traditional weights. They are safe and effective for improving strength, balance, speed and physical function.
What are the pros and cons?
There are various pros and cons to bodyweight exercises.
Pros:
builds strength: a 2025 meta-analysis of 102 studies in 4,754 older adults (aged 70 on average) found bodyweight training led to substantial strength gains – no different from those achieved with free weights or machines. These benefits aren’t just for older adults, though. Using resistance bands with your bodyweight workout can be as effective as traditional training methods across diverse populations
boosts aerobic fitness: a 2021 study showed as little as 11 minutes of bodyweight exercises three times per week was effective for improving aerobic fitness
accessible and free: bodyweight training avoids common barriers to exercise such as access to equipment and facilities, which means it can be done anywhere, without a gym membership
promotes functional movement: exercises like squats and push-ups mimic everyday actions like rising from a chair or getting up from the floor.
Cons:
difficulty progressing over time: typically, we can add weight to an exercise to increase difficulty. For bodyweight training, you need to be creative, such as slowing your tempo or progressing to unilateral (one-sided or single-limb) movements
plateau risk: heavy external loads are more effective than bodyweight training for increasing maximal strength. This means if you stick to bodyweight training alone, your strength gains are more likely to plateau than if you use machines or free weights.
Tips for getting started (safely)
As with any form of exercise, it’s always best to speak to a medical professional before starting.
If you are ready to get going, here are some tips:
start small: pick simple moves to begin and progress them as you gain strength, confidence and experience
focus on form: think quality over quantity. Completing movements with good control and body position is more important than how many you can do with poor control
progress gradually: vary the number of sets or repetitions to make your exercise more challenging. You can progress the movements from easier (push-ups on your knees) to harder (decline push-ups) as you get stronger and need more of a challenge
mix it up: use a variety of types of bodyweight training as well as targeting different muscle groups and movements
Bodyweight training means you don’t need expensive equipment to improve your health. Whether it’s squats in the park, push-ups at your children’s football game, or yoga at home, your body is a portable gym.
With consistency, creativity and time, bodyweight exercises can help you build strength and fitness.
Dan van den Hoek received research funding from Aus Active (2024) and is a member of Exercise and Sports Science Australia.
Jackson Fyfe does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.
How do computers see the world? It’s not quite the same way humans do.
Recent advances in generative artificial intelligence (AI) make it possible to do more things with computer image processing. You might ask an AI tool to describe an image, for example, or to create an image from a description you provide.
As generative AI tools and services become more embedded in day-to-day life, knowing more about how computer vision compares to human vision is becoming essential.
My latest research, published in Visual Communication, used AI-generated descriptions and images to get a sense of how AI models “see” – and discovered a bright, sensational world of generic images quite different from the human visual realm.
Humans see when light waves enter our eyes through the cornea, iris and lens. Light is converted into electrical signals by the retina, a light-sensitive surface inside the eyeball, and our brains then interpret these signals as the images we see.
Our vision focuses on key aspects such as colour, shape, movement and depth. Our eyes let us detect changes in the environment and identify potential threats and hazards.
Computers work very differently. They process images by standardising them, inferring the context of an image through metadata (such as time and location information in an image file), and comparing images to other images they have previously learned about. Computers focus on things such as edges, corners or textures present in the image. They also look for patterns and try to classify objects.
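To illustrate the “edges, corners and textures” point, here is a generic example of one of the simplest edge detectors, a Sobel filter, applied to a tiny image array. This is a standard computer-vision building block, not code from the study or from any particular AI model.

```python
# Illustrative only: a Sobel filter, one of the basic operations computer
# vision systems use to locate edges in an image.
import numpy as np
from scipy.signal import convolve2d

# A tiny grayscale "image": a bright square on a dark background.
image = np.zeros((8, 8))
image[2:6, 2:6] = 1.0

# Sobel kernels respond to horizontal and vertical intensity changes.
sobel_x = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
sobel_y = sobel_x.T

gx = convolve2d(image, sobel_x, mode="same")
gy = convolve2d(image, sobel_y, mode="same")
edges = np.hypot(gx, gy)  # gradient magnitude: large values mark edges

print(np.round(edges, 1))  # the square's outline appears as high values
```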
Solving CAPTCHAs helps prove you’re human and also helps computers learn how to ‘see’. CAPTCHA
You’ve likely helped computers learn how to “see” by completing online CAPTCHA tests.
These are typically used to help computers differentiate between humans and bots. But they’re also used to train and improve machine learning algorithms.
So, when you’re asked to “select all the images with a bus”, you’re helping software learn the difference between different types of vehicles as well as proving you’re human.
Exploring how computers ‘see’ differently
In my new research, I asked a large language model to describe two visually distinct sets of human-created images.
One set contained hand-drawn illustrations while the other was made up of camera-produced photographs.
I fed the descriptions back into an AI tool and asked it to visualise what it had described. I then compared the original human-made images to the computer-generated ones.
The resulting descriptions noted the hand-drawn images were illustrations, but didn’t describe the other images as photographs or mention their high level of realism. This suggests AI tools treat photorealism as the default visual style unless specifically prompted otherwise.
The descriptions were largely devoid of cultural context. The AI tool either couldn’t or wouldn’t infer cultural context from the presence of, for example, Arabic or Hebrew writing in the images. This underscores the dominance of some languages, such as English, in AI tools’ training data.
While colour is vital to human vision, it too was largely ignored in the AI tools’ image descriptions. Visual depth and perspective were also largely ignored.
The AI-generated images were much more boxy than the hand-drawn illustrations, which used more organic shapes and had a different relationship between positive and negative space. Left: Medar de la Cruz; right: ChatGPT
The AI images were also much more saturated than the source images: they contained brighter, more vivid colours. This reveals the prevalence of stock photos, which tend to be more “contrasty”, in AI tools’ training data.
The AI images were also more sensationalist. A single car in the original image became one of a long column of cars in the AI version. AI seems to exaggerate details not just in text but also in visual form.
The AI-generated images were more sensationalist and contrasty than the human-created photographs. Left: Ahmed Zakot; right: ChatGPT
The generic nature of the AI images means they can be used in many contexts and across countries. But the lack of specificity also means audiences might perceive them as less authentic and engaging.
Deciding when to use human or computer vision
This research supports the notion that humans and computers “see” differently. Knowing when to rely on computer or human vision to describe or create images can be a competitive advantage.
While AI-generated images can be eye-catching, they can also come across as hollow upon closer inspection. This can limit their value.
Images excel at sparking emotional reactions, and audiences may find human-created images that authentically reflect specific conditions more engaging than computer-generated attempts.
However, the capabilities of AI can make it an attractive option for quickly labelling large data sets and helping humans categorise them.
Ultimately, there’s a role for both human and AI vision. Knowing more about the opportunities and limits of each can help keep you safer, more productive, and better equipped to communicate in the digital age.
T.J. Thomson receives funding from the Australian Research Council. He is an affiliate with the ARC Centre of Excellence for Automated Decision Making & Society.
Half of the 11 million Swedish kronor (about A$1.8 million) Nobel economics prize was awarded to Joel Mokyr, a Dutch-born economic historian at Northwestern University.
The other half was jointly awarded to Philippe Aghion, a French economist at Collège de France and INSEAD, and Peter Howitt, a Canadian economist at Brown University.
Collectively, the trio’s work has examined the importance of innovation in driving sustainable economic growth. It has also highlighted that in dynamic economies, old firms die as new firms are being born.
Innovation drives sustainable growth
As noted by the Royal Swedish Academy of Sciences, economic growth has lifted billions of people out of poverty over the past two centuries. While we take this as normal, it is actually very unusual in the broad sweep of history.
The period since around 1800 is the first in human history when there has been sustained economic growth. This warns us we should not be complacent. Poor policy could see economies stagnate again.
One of the Nobel judges gave the example that in Sweden and the United Kingdom there was little improvement in living standards in the four centuries between 1300 and 1700.
Mokyr’s work showed that prior to the Industrial Revolution, innovations were more a matter of trial and error than of scientific understanding. He has argued that sustained economic growth would not emerge in:
a world of engineering without mechanics, iron-making without metallurgy, farming without soil science, mining without geology, water-power without hydraulics, dyemaking without organic chemistry, and medical practice without microbiology and immunology.
Mokyr gives the example of sterilising surgical instruments. This had been advocated in the 1840s or earlier. But surgeons were offended by the suggestion they might be transmitting diseases. It was only after the work of Louis Pasteur and Joseph Lister in the 1860s that the role of germs was understood and sterilisation became common.
Mokyr emphasised the importance of society being open to new ideas. As the Nobel committee put it:
practitioners, ready to engage with science, along with a societal climate embracing change, were, according to Mokyr, key reasons why the Industrial Revolution started in Britain.
Winners and losers
This year’s other two laureates, Aghion and Howitt, recognised that innovations create both winning and losing firms. In the US, about 10% of firms enter and 10% leave the market each year. Promoting economic growth requires an understanding of both processes.
Their 1992 article built on earlier work on the concept of “endogenous growth” – the idea that economic growth is generated by factors inside an economic system, not the result of forces that impinge from outside. This earned a Nobel prize for Paul Romer in 2018.
The model created by Aghion and Howitt implies governments need to be careful how they design subsidies to encourage innovation.
If companies think that any innovation they invest in is just going to be overtaken (meaning they would lose their advantage), they won’t invest as much in innovation.
Their work also supports the idea that governments have a role in supporting and retraining workers who lose their jobs when their firms are displaced by more innovative competitors. This, in turn, helps build political support for policies that encourage economic growth.
‘Dark clouds’ on the horizon?
The three laureates all favour economic growth, in contrast to growing concerns about the impact of endless growth on the planet.
In an interview after the announcement, however, Aghion called for carbon pricing to make economic growth consistent with reducing greenhouse gas emissions.
He also warned about the gathering “dark clouds” of tariffs; that creating barriers to trade could reduce economic growth.
And he said we need to ensure today’s innovators do not stifle future innovators through anti-competitive practices.
The newest Nobel prize
The economics prize was not among the five originally established in Swedish chemist Alfred Nobel’s will in 1895. It is formally called the Sveriges Riksbank Prize in Economic Sciences in Memory of Alfred Nobel. It was first awarded in 1969.
The awards to Mokyr and Howitt continue the pattern of the economics prize being dominated by researchers working at US universities.
It also continues the pattern of over-representation of men. Only three of the 99 economics laureates have been women.
Arguably, economics professor Rachel Griffith, rather than Mokyr, could have shared the prize with Aghion and Howitt this year. She co-authored the book Competition and Growth with Aghion, and co-wrote an article on competition with both of them.
John Hawkins does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.
A ‘selfie’ taken during Webb’s testing on Earth. Ball Aerospace
After Christmas dinner in 2021, our family was glued to the television, watching the nail-biting launch of NASA’s US$10 billion (AU$15 billion) James Webb Space Telescope. There had not been such a leap forward in telescope technology since Hubble was launched in 1990.
Six months later, Webb’s first images – of the most distant galaxies yet seen – were revealed. For our team in Australia, however, the work was only beginning.
We would be using Webb’s highest-resolution mode, called the aperture masking interferometer or AMI for short. It’s a tiny piece of precisely machined metal that slots into one of the telescope’s cameras, enhancing its resolution.
Our results from painstakingly testing and enhancing AMI have now been released on the open-access archive arXiv in a pair of papers. We can finally present its first successful observations of stars, planets, moons and even black hole jets.
Working with an instrument a million kilometres away
Hubble started its life seeing out of focus – its mirror had been ground precisely, but incorrectly. By looking at known stars and comparing the ideal and measured images (exactly like what optometrists do), it was possible to figure out a “prescription” for this optical error and design a lens to compensate.
The primary mirror of the Webb telescope consists of 18 precisely ground hexagonal segments. NASA/Chris Gunn
By contrast, Webb is roughly 1.5 million kilometres away – we can’t visit and service it, and need to be able to fix issues without changing any hardware.
This is where AMI comes in. It is the only Australian hardware on board, designed by astronomer Peter Tuthill.
It was put on Webb to diagnose and measure any blur in its images. Even nanometres of distortion across Webb’s 18 hexagonal primary mirror segments and many internal surfaces will blur the images enough to hinder the study of planets or black holes, where sensitivity and resolution are key.
AMI filters the light with a carefully structured pattern of holes in a simple metal plate, to make it much easier to tell if there are any optical misalignments.
AMI allows for a precise test pattern that can help correct any issues with JWST’s focus. Anand Sivaramakrishnan/STScI
Hunting blurry pixels
We wanted to use this mode to observe the birth places of planets, as well as material being sucked into black holes. But before any of this, AMI showed Webb wasn’t working entirely as hoped.
At very fine resolution – at the level of individual pixels – all the images were slightly blurry due to an electronic effect: brighter pixels leaking into their darker neighbours.
This is not a mistake or flaw, but a fundamental feature of infrared cameras that turned out to be unexpectedly serious for Webb.
In a new paper led by University of Sydney PhD student Louis Desdoigts, we looked at stars with AMI to learn and correct the optical and electronic distortions simultaneously.
We built a computer model to simulate AMI’s optical physics, with flexibility about the shapes of the mirrors and apertures and about the colours of the stars.
We connected this to a machine learning model to represent the electronics with an “effective detector model” – where we only care about how well it can reproduce the data, not about why.
After training and validation on some test stars, this setup allowed us to calculate and undo the blur in other data, restoring AMI to full function. It doesn’t change what Webb does in space, but rather corrects the data during processing.
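As a rough illustration of the idea (not the team’s actual pipeline, which models AMI’s real optics in detail), the sketch below assumes a known optical model, describes the detector with a single “leakage” parameter for charge spilling into neighbouring pixels, and fits that parameter on a simulated calibration star; a real pipeline would then invert the fitted model to unblur science data. Every shape and number here is made up.

```python
# Toy version of the calibration idea: a known optical model plus a simple
# "effective detector" whose one parameter (charge leakage into neighbouring
# pixels) is fitted to calibration data. All values here are invented.
import numpy as np

def optical_psf(n=32, width=2.0):
    """Stand-in optical model: a Gaussian spot in place of AMI's real
    interference pattern."""
    y, x = np.mgrid[:n, :n] - n // 2
    psf = np.exp(-(x**2 + y**2) / (2 * width**2))
    return psf / psf.sum()

def detector_leak(image, alpha):
    """Effective detector model: each pixel leaks a fraction `alpha` of its
    charge equally into its four neighbours."""
    out = (1 - alpha) * image
    for shift in [(1, 0), (-1, 0), (0, 1), (0, -1)]:
        out = out + (alpha / 4) * np.roll(image, shift, axis=(0, 1))
    return out

# "Calibration star": the optics are known; we observe the blurred output.
truth = optical_psf()
rng = np.random.default_rng(0)
observed = detector_leak(truth, alpha=0.15) + rng.normal(0, 1e-6, truth.shape)

# Fit alpha by least squares over a grid (a real pipeline would use gradients,
# and would then invert the fitted model to deblur science images).
alphas = np.linspace(0, 0.5, 501)
errors = [np.sum((detector_leak(truth, a) - observed) ** 2) for a in alphas]
print(f"fitted leakage alpha = {alphas[np.argmin(errors)]:.3f}")  # ~0.150
```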
It worked beautifully – the star HD 206893 hosts a faint planet and the reddest-known brown dwarf (an object between a star and a planet). Both were known, but out of reach of Webb before this correction was applied. Now, both little dots popped out clearly in our new maps of the system.
A map of the HD 206893 system. The colourful spots show the likelihood of there being an object at that position, while B and C show the known positions of the companion planets. The wider blob means the position of C is less precisely measured, as it’s much fainter than B. This is simplified from the full version presented in the paper. Desdoigts et al., 2025
This correction has opened the door to using AMI to prospect for unknown planets at previously impossible resolutions and sensitivities.
It works not just on dots
In a companion paper by University of Sydney PhD student Max Charles, we applied this correction not just to dots – even if those dots are planets – but to forming complex images at the highest resolution achieved with Webb. We revisited well-studied targets that push the limits of the telescope, testing its performance.
Jupiter’s moon Io, seen by AMI on Webb. Four bright spots are visible; they are volcanoes, exactly where expected, and rotate with Io over the hour-long timelapse. Max Charles
With the new correction, we brought Jupiter’s moon Io into focus, clearly tracking its volcanoes as it rotates over an hour-long timelapse.
As seen by AMI, the jet launched from the black hole at the centre of the galaxy NGC 1068 closely matched images from much-larger telescopes.
Finally, AMI can sharply resolve a ribbon of dust around a pair of stars called WR 137, a faint cousin of the spectacular Apep system, lining up with theory.
The code built for AMI is a demonstration for much more complex cameras on Webb and its follow-up, the Roman Space Telescope. These instruments demand optical calibration to within a fraction of a nanometre – a stability beyond the capacity of any known materials.
Our work shows that if we can measure, control, and correct the materials we do have to work with, we can still hope to find Earth-like planets in the far reaches of our galaxy.
Benjamin Pope receives funding from the Australian Research Council and the Big Questions Institute.
Global warming from Woodside’s massive Scarborough gas project off Western Australia would lead to 484 additional heat-related deaths in Europe alone this century, and kill about 16 million additional corals on the Great Barrier Reef during each future mass bleaching event, our new research has revealed.
The findings were made possible by a robust, well-established formula that can determine the extent to which an individual fossil fuel project will warm the planet. The results can be used to calculate the subsequent harms to society and nature.
The results close a fundamental gap between science and decision-making about fossil fuel projects. They also challenge claims by proponents that climate risks posed by a fossil fuel project are negligible or cannot be quantified.
Each new investment in coal and gas, such as the Scarborough project, can now be linked to harmful effects both today and in the future. It means decision-makers can properly assess the range of risks a project poses to humanity and the planet, before deciding if it should proceed.
Each new investment in coal and gas extraction can now be linked to harmful effects. Shutterstock
But proponents of new fossil fuel projects in Australia routinely say their future greenhouse gas emissions are negligible compared to the scale of global emissions, or say the effects of these emissions on global warming can’t be measured.
The Scarborough project is approved for development and is expected to produce gas from next year. Located off WA, it includes wells connected by a 430km pipeline to an onshore processing facility. The gas will be liquefied and burned for energy, both in Australia and overseas. Production is expected to last more than 30 years. When natural gas is burned, more than 99% of it converts to CO₂.
Woodside – in its own evaluation of the Scarborough gas project – claimed:
it is not possible to link GHG [greenhouse gas] emissions from Scarborough with climate change or any particular climate-related impacts given the estimated […] emissions associated with Scarborough are negligible in the context of existing and future predicted global GHG concentrations.
But what if there was a way to measure the harms? That’s the question our research set out to answer.
A method already exists to directly link global emissions to the climate warming they cause. It uses scientific understanding of Earth’s systems, direct observations and climate model simulations.
According to the IPCC, every 1,000 billion tonnes of CO₂ emissions causes about 0.45°C of additional global warming. This arithmetic forms the basis for calculating how much more CO₂ humanity can emit to keep warming within the Paris Agreement goals.
But decisions about future emissions are not made at the global scale. Instead, Earth’s climate trajectory will be determined by the aggregation of decisions on many individual projects.
That’s why our research extended the IPCC method to the level of individual projects – an approach that we illustrate using the Scarborough gas project.
We estimate the project’s lifetime emissions will cause 0.00039°C of additional global warming. Estimates such as these are typically expressed as a range, alongside a measure of confidence in the projection. In this case, there is a 66–100% likelihood that the Scarborough project will cause additional global warming of between 0.00024°C and 0.00055°C.
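The arithmetic behind an estimate like this is a simple linear scaling of the IPCC figure quoted above (about 0.45°C per 1,000 billion tonnes of CO₂). In the sketch below, the Scarborough emissions total is backed out from the article’s 0.00039°C central estimate – roughly 0.9 billion tonnes of CO₂ – so treat it as an illustrative figure, not the paper’s input value.

```python
# Linear "transient climate response to cumulative emissions" arithmetic.
# The Scarborough tonnage below is inferred from the article's central
# estimate, not taken from the underlying paper.
TCRE = 0.45 / 1_000e9  # degrees C of warming per tonne of CO2

def warming_from_emissions(tonnes_co2: float) -> float:
    """Additional global warming implied by a cumulative CO2 total."""
    return tonnes_co2 * TCRE

scarborough_tonnes = 0.87e9  # ~0.87 billion tonnes, backed out from 0.00039 C
print(f"{warming_from_emissions(scarborough_tonnes):.5f} C")  # 0.00039 C
```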
The human cost of global warming can be quantified by considering how many people will be left outside the “human climate niche” – in other words, the climate conditions in which societies have historically thrived.
We calculated that the additional warming from the Scarborough project will expose 516,000 people globally to a local climate that’s beyond the hot extreme of the human climate niche. We drilled down into specific impacts in Europe, where suitable health data was available across 854 cities. Our best estimate is that this project would cause an additional 484 heat-related deaths in Europe by the end of this century.
The project would cause an additional 484 heat-related deaths in Europe by the end of this century. Antonio Masiello/Getty Images
And what about harm to nature? Using research into how accumulated exposure to heat affects coral reefs, we found about 16 million corals on the Great Barrier Reef would be lost in each new mass bleaching. The existential threat to the Great Barrier Reef from human-caused global warming is already being realised. Additional warming instigated by new fossil fuel projects will ratchet up pressure on this natural wonder.
As climate change worsens, countries are seeking to slash emissions to meet their commitments under the Paris Agreement. So, we looked at the impact of Scarborough’s emissions on Australia’s climate targets.
We calculated that by 2049, the anticipated emissions from the Scarborough project alone – from production, processing and domestic use – will comprise 49% of Australia’s entire annual CO₂ emissions budget under our commitment to net-zero by 2050.
Beyond the 2050 deadline, all emissions from the Scarborough project would require technologies to permanently remove CO₂ from the atmosphere. Achieving that would require a massive scale-up of current technologies. It would be more prudent to reduce greenhouse gas emissions where possible.
‘Negligible’ impacts? Hardly
Our findings mean the best-available scientific evidence can now be used by companies, governments and regulators when deciding if a fossil fuel project will proceed.
Crucially, it is no longer defensible for companies proposing new or extended fossil fuel projects to claim the climate harms will be negligible. Our research shows the harms are, in fact, tangible and quantifiable – and no project is too small to matter.
In response to issues raised in this article, a spokesperson for Woodside said:
Woodside is committed to playing a role in the energy transition. The Scarborough reservoir contains less than 0.1% carbon dioxide. Combined with processing design efficiencies at the offshore floating production unit and onshore Pluto Train 2, the project is expected to be one of the lowest carbon intensity sources of LNG delivered into north Asian markets.
We will reduce the Scarborough Energy Project’s direct greenhouse gas emissions to as low as reasonably practicable by incorporating energy efficiency measures in design and operations. Further information on how this is being achieved is included in the Scarborough Offshore Project Proposal, sections 4.5.4.1 and 7.1.3 and in approved Australian Government environment plans, available on the regulator’s website.
A report prepared by consultancy ACIL Allen has found that Woodside’s Scarborough Energy Project is expected to generate an estimated A$52.8 billion in taxation and royalty payments, boost GDP by billions of dollars between 2024 and 2056 and employ 3,200 people during peak construction in Western Australia.
Sarah Perkins-Kirkpatrick receives funding from the Australian Research Council.
Andrew King receives funding from the Australian Research Council (Future Fellowship and Centre of Excellence for 21st Century Weather) and the National Environmental Science Program.
Nicola Maher receives funding from the Australian Research Council.
Wesley Morgan is a fellow with the Climate Council of Australia.
The trade dispute between the United States and China has resumed. US President Donald Trump lashed out at the weekend at Beijing’s planned tightening of restrictions over crucial rare-earth minerals.
In response, Trump has threatened 100% tariffs on Chinese imports.
But with the higher tariff rate not due to start until November 1, and the Chinese controls on December 1, there is still time for negotiation.
This is no longer just a trade dispute; it has escalated into a race for control over supply chains and the rules that govern global trade.
For Australia, this provides an opening to build capacity at home in minerals refining and rare-earths processing. But we also need to keep access to our biggest market – China.
A long-running battle
Since 2018, the US has sought to choke off China’s access to semiconductors and chipmaking tools by restricting exports.
China last week tightened its export controls on rare earth minerals that are essential for the technology, automotive and defence industries. Foreign companies now need permission to export products that derive as little as 0.1% of their value from China-sourced rare earths.
Rare earths are essential to many modern technologies. They enable high-performance magnets for EVs and wind turbines, lasers in advanced weapons, and the polishing of semiconductor wafers. An F-35 fighter jet contains about 417 kilograms of rare earths.
By targeting inputs rather than finished goods, China extends its reach across production lines in any foreign factories that use Chinese rare earths in chips (including AI), automotive, defence and consumer electronics.
A part of US President Donald Trump’s social media post announcing new tariffs on China.
Who holds the upper hand: chips or rare earths?
The US plan is simple: control the key tools and software for making top-end semiconductor chips so China can’t move as fast on cutting-edge technology.
Under that pressure, China is filling the gaps. It’s far more self-sufficient in chips than ten years ago. It now makes more of its own tools and software, and produces “good-enough” chips for cars, factories and gadgets to withstand US sanctions.
Rare earths aren’t literally “rare”; their value lies in the complex, costly and polluting processes of separation and purification. China has cornered the industry, helped by industrial policies and subsidies. It accounts for 60–70% of global rare earths mining and more than 90% of refining.
Its dominance reflects decades-long investment, scale and an early willingness to bear heavy environmental costs. Building a China-free supply chain will take years, even if Western countries can coordinate smoothly.
A window for Australia?
Australia is seen as a potential beneficiary. As Prime Minister Anthony Albanese prepares to meet Trump on October 20 in Washington, many argue the rare-earths clash offers a diplomatic opening.
Trade Minister Don Farrell says Australia is a reliable supplier that can “provide alternatives to the rest of the world”. Australia’s ambassador to the US, Kevin Rudd, has made the same case.
The logic seems compelling: leverage Australia’s mineral wealth for strategic gain with its closest security partner. But that narrative is simplistic. It risks drifting from industrial and economic reality.
The first hard truth is that Australia has the resources, but doesn’t control the market. It is a top-five producer of 14 minerals, including lithium, cobalt and rare earths, yet it doesn’t dominate any of them. Australia’s strength is in mining and extraction, rather than processing.
Here lies the strategic paradox: Australia ships the majority of its minerals to China for processing that turns ore into high-purity metals and chemicals. Building alternative, China-free supply chains to reduce US reliance on China would decouple Australia from its main customer for raw materials.
Demand from the defence sector is not enough. The US Department of Defense accounts for less than 5% of global demand for most critical minerals.
The real driver is the heavy demand from clean energy and advanced technology, including EVs, batteries and solar. China commands those markets, creating a closed-loop ecosystem that pulls in Australia’s materials and exports finished goods. Recreating that integrated system in five to ten years, after Beijing spent decades building it, is wishful thinking.
There will be no simple winner
The US restrictions on chips and the Chinese controls over rare earths are twin levers in the contest between two great powers. Each wants to lead in technology – and to set the rules over global supply chains.
We’ve entered a period where control of a few key inputs, tools and routes gives countries leverage. Each side is probing those “chokepoints” in the other’s supply chains for technology and materials – and using them as weapons. In the latest stand-off, Trump has floated export controls on Boeing parts to China. Chinese airlines are major Boeing customers, so any parts disruption would hit China’s aviation sector hard.
There will be no simple winner. Countries and firms are being pulled into two parallel systems: one centred on US chip expertise, the other on China’s materials power. This is not a clean break. It will be messier, costlier and less efficient, where political risk often outweighs commercial logic.
The question for Australia is not how fast it can build, but how well it balances security aims with market realities.
Marina Yue Zhang does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.
Two books on the recent Erin Patterson trial have just been published, both by experienced true crime writers. Both are meticulously researched (primarily relying on the transcripts of evidence), well written and eminently readable. The timing of their publication is remarkable, given it’s only been a month since the sentencing. Having said that, neither book seems rushed.
Greg Haddrick, who writes from the perspective of “a fictional juror” in The Mushroom Murders, also wrote In the Dead of Night, about the 2020 murder of Russell Hill and Carol Clay in Victoria’s remote Wonnangatta Valley. Duncan McNab, author of Recipe for Murder, is a former detective, a private investigator (specialising in criminal defence) and an investigative journalist.
Review: The Mushroom Murders – Greg Haddrick (Allen & Unwin); Recipe for Murder: The poisonous truth behind Erin Patterson – the mushroom murderer of Leongatha – Duncan McNab (Hachette)
I enjoyed both books immensely – not for the horror of the tale itself, but for the insightful way they tackle this most intriguing story. Neither book offers anything new by way of evidence, as what is written is all on the public record. But readers will nevertheless find a great deal to interest them in the narrative of both authors, especially their descriptions of the legal wranglings and media frenzy.
Haddrick’s imagined perspective as a (female) juror who runs a (fictitious) picture-framing business in Morwell is an intriguing literary device. He is careful to advise readers that he neither approached nor spoke to any of the real jurors. Indeed, the law does not allow such an approach, given the requirement of the confidentiality of their deliberations.
The entire book is narrated by this fictional person. “I wanted readers to feel they were on that journey with the jury,” Haddrick explains in his preface. All the evidence, he says, “comes directly from, and only from, the evidence those jurors heard and saw during the trial”. Some readers may find his choice of storytelling technique somewhat disingenuous, but the narrative gives the book an impressive immediacy.
Haddrick writes, again in his preface: “Like all of us, our narrator has her own strengths, flaws and opinions, and she does speculate. But where she does engage in speculation, it is clearly identified as separate from the evidence that leads to her verdict — and it reveals some surprising insights into police methodology along the way.” True, there is much written about some very impressive policing, but I found none of it surprising.
On the first page of Haddrick’s book, we read: “A family lunch. Three murders. What really happened?” One might opine that it’s a tad misleading to state that a book narrated by a fictional juror and engaging in speculation regarding (her) thought processes can tell us “what really happened”. But this gradual, unfolding narrative from the perspective of an (albeit imaginary) juror remains a compelling tale if the reader is happy to suspend disbelief and see the trial through her (fictional) eyes.
Does this literary device pose an ethical issue? Perhaps so (especially if it leads to misleading understandings of the facts or the law), but in the legal sense there is no difficulty as long as it’s clearly explained.
The setting
On July 29 2023, in the small rural Victorian town of Leongatha, members of Patterson’s extended family sat down for a casual Saturday lunch of beef Wellington. As Haddrick quips in his opening pages, this has become the most talked-about meal since the Last Supper.
The next day, all four guests were hospitalised. Within a week, three of them were dead: Erin’s mother-in-law Gail Patterson, her husband Don Patterson, and Gail’s sister, Heather Wilkinson. Ian Wilkinson, Heather’s husband, was fighting for his life.
When Patterson was arrested and charged with their murders, the case captured media attention around the world. After a trial that lasted 11 weeks and produced 3,500 pages of transcript, a jury of 12 (seven men and five women) determined she had deliberately poisoned her estranged husband’s parents, and his aunt and uncle, leading to the deaths of three of them and the life-threatening illness of the fourth.
As McNab writes,
Patterson’s cruelty in watching four people who had been nothing but kind and loving to her eat a meal she knew would kill them was the ultimate act of betrayal.
Mushroom murders – a fictional juror’s view
Haddrick’s The Mushroom Murders principally tells us what was going through the mind of his fictional juror as she heard the evidence. Readers interested in the way criminal evidence is presented will get a clear picture of that process. His reconstruction of the iterative thinking a juror might go through as the evidence unfolds is well crafted. But it is only speculation.
The first third of the book takes readers through all the evidence of Erin’s relationship with her estranged husband, Simon. It was a fractured and, at times, tempestuous marriage. It is easy to gloss over this narrative since it reads like Days of Our Lives, but it is important in the story. At this stage, the fictional juror is entirely sympathetic to Erin, taking to heart the trial judge’s reminder to the jury, in his preliminary remarks, of the importance of the presumption of innocence.
But as the story continues and the beef Wellington is served (it arrives halfway through the book), the tempo increases. With a third of the book to go, Haddrick has his juror thinking that sticking with “Team Erin” was becoming more difficult.
It was getting harder and harder to keep “presuming innocence”. At so many moments in the story, there were no reasonable explanations for her behaviour compatible with innocence.
Those moments continue all the way to the book’s conclusion.
Haddrick’s epilogue is devoted to the post-trial release of evidence that had been excluded (and charges dropped) regarding three allegations that Erin attempted to poison Simon during their marriage. His narrator’s strong opinion is that the trial judge’s ruling in a preliminary hearing to exclude these allegations from the trial treated the jury as mugs.
I know we had been told at the beginning of the trial to put those dropped charges out of our minds, but I’m sorry. If it leads to a situation like this, where the blindingly obvious question never gets asked, that’s just legal nonsense.
That might be his fictional juror’s feeling, but Justice Beale’s direction in the preliminary hearing had been vindicated by a three-member Court of Criminal Appeal. The law on this subject is clear: the High Court ruled in 1995 that prejudicial evidence (such as the unsubstantiated allegations regarding three earlier attempts to poison Simon) is inadmissible unless the judge considers that evidence has very strong probative value, that is, evidence sufficiently useful to prove something important in the case at hand.
In this case, he did not. If those charges were to be pursued, that would need to happen in a separate trial.
Recipe for Murder
Duncan McNab’s Recipe for Murder does not have the first-hand immediacy of The Mushroom Murders, but it, too, is compelling. Indeed, some readers may find McNab’s analysis more insightful, as it is not underpinned by Haddrick’s literary musing.
McNab’s book shares a similar structure to Haddrick’s. There is a description of the family, their church, the often fractured relationship between Simon and Erin, their multiple marital separations, their children, their Christian faith (or, in Erin’s case, ostensible atheism) and their finances.
McNab takes readers through the evidence of Erin’s fascination with true crime, and her ill-health self-diagnoses (including ovarian cancer and heart issues). He describes her relationships with her own parents, both deceased. In messages to friends, he reveals, Patterson called her mother “essentially a cold robot” and said “Dad was a doormat”. He also reveals her propensity to be loose with the truth, including her being sacked from her job as an air traffic controller for lying about her work hours.
He departs from the formal record of the trial evidence in explaining Justice Beale’s ruling in the fast-tracked preliminary hearing, namely that the allegations relating to Simon’s previous illnesses (the alleged attempted poisonings) could not be tried in the “beef Wellington” trial.
Moreover, he explains the ruling on the location of the trial (Morwell, not Melbourne), noting that under the Victorian Criminal Procedure Act unless there are strong reasons related to unfairness, the trial should take place at the court most proximate to where the alleged offending occurred. (By contrast, Haddrick’s book does not deal with the reasoning in these preliminary matters at all.)
Next in the McNab narrative comes the meal, the deaths, and the investigations, including a close look at the forensic science evidence, such as the sighting of deathcap mushrooms growing in rural Victoria, and Erin’s phone being “pinged” in the vicinity. Then follows the funerals – and inevitably, the arrest of Erin Patterson.
Because he is writing with the value of hindsight, McNab is never in doubt about the correctness of the verdict.
Read one, not both
Recipe for Murder is significantly more detailed than The Mushroom Murders in relation to the summing up to the jury by prosecution counsel, defence counsel and finally the judge. This process took nearly six days once the examinations-in-chief, cross-examinations and re-examinations finally came to an end.
On day 40 of the trial, the jury (after two jurors were balloted off to reduce the number to 12) retired to consider its verdict. It returned nearly six days later with guilty verdicts on all four charges.
Surprisingly, neither author remarks on what an extraordinarily lengthy process this was, beyond McNab’s quip that there was “a vast amount of evidence”. Yes, that may be true, but as both authors admit, that evidence was very persuasive.
Importantly, Recipe for Murder then pays significant attention to the victim impact statements. There were 28 tendered to the court, seven of them read aloud; some, like Ian Wilkinson’s, by their authors; some, like Simon Patterson’s, by proxies. Simon’s statement referred to his children:
Like all of us, they face the daunting challenge of trying to comprehend what she has done. The grim reality is they live in an irreparably broken home with a solo parent when almost everybody knows their mother murdered their grandparents.
Directing his attention to the sentencing hearing, McNab tells us Justice Beale heard from defence counsel that Erin would likely spend 22 hours a day in her cell. Beale took that into account. He announced three life sentences (the maximum penalty under Victorian legislation), to be served concurrently. The non-parole period was set at 33 years. (The Victorian Director of Public Prosecutions is now appealing that non-parole period on the basis that it was overly lenient.)
McNab concludes his discussion of the sentencing remarks with suitably understated drama: “An emotionless Erin Patterson was led from the courtroom.”
Both books are commendable. But there is no value in reading both, as they cover much of the same material. If I had to pick one as a tool for teaching students the art of examination (and cross examination) of witnesses, and the processes of trial, verdict and sentencing, McNab’s Recipe for Murder would be my choice.
A third book on the trial will be published next month – The Mushroom Tapes, by Helen Garner, Chloe Hooper and Sarah Krasnostein. It is clear there is more for us to read, and perhaps learn, about what unfolded in the Victorian Supreme Court, sitting at Morwell, during the winter of 2025.
Rick Sarre does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.