Human ancestors were exposed to lead millions of years ago, and it shaped our evolution

Source: The Conversation – Global Perspectives – By Renaud Joannes-Boyau, Professor in Geochronology and Geochemistry, Southern Cross University

A 2 million-year-old tooth of an early human ancestor. Fiorenza and Joannes-Boyau

When we think of lead poisoning, most of us imagine modern human-made pollution, paint, old pipes, or exhaust fumes.

But our new study, published today in Science Advances, reveals something far more surprising: our ancestors were exposed to lead for millions of years, and it may have helped shape the evolution of the human brain.

This discovery reveals that the toxic substance we battle today has been intertwined with the human evolution story from its very beginning.

It reshapes our understanding of both past and present, tracing a continuous thread between ancient environments, genetic adaptation, and the unfolding evolution of human intelligence.

A poison older than humanity itself

Lead is a powerful neurotoxin that disrupts the growth and function of both brain and body. There is no safe level of lead exposure, and even the smallest traces can impair memory, learning and behaviour, especially in children. That’s why eliminating lead from petrol, paint and plumbing is one of the most important public health initiatives.

Yet while analysing ancient teeth at Southern Cross University, we uncovered something wholly unexpected: clear traces of lead sealed within the fossils of early humans and other ancestral species.

These specimens, recovered from Africa, Asia and Europe, were up to two million years old.

Using lasers finer than a strand of hair, we scanned each tooth layer by layer – much like reading the growth rings of a tree. Each band recorded a brief chapter of the individual’s life. When lead entered the body, it left a vivid chemical signature.

These signatures revealed that exposure was not rare or accidental; it occurred repeatedly over time.

Where did this lead come from?

Our findings show that early humans were never shielded from lead by the natural world. On the contrary, it was part of their world too.

The lead we found wasn’t from mining or smelting – those activities are from relatively recent human history.

Instead, it likely came from natural sources such as volcanic dust, mineral-rich soils, and groundwater flowing through lead-bearing rocks in caves. During times of drought or food shortage, early humans might have dug for water or eaten plants and roots that absorbed lead from the soil.

Every fossil tooth we study is a record of survival. A small diary of the early life of the individual, written in minerals instead of words. These ancient traces tell us that even as our ancestors struggled to find food, shelter and community, they were also navigating a world filled with unseen dangers.

From fossil teeth to living brain cells

To understand how this ancient exposure might have affected brain development, we teamed up with geneticists and neuroscientists, and used stem cells to grow tiny versions of human brain tissue, called brain organoids. These small collections of cells have many of the features of developing human brain tissue.

Brain organoids grown with archaic, Neanderthal-like gene variants.
Alysson Muotri

We gave some of these organoids a modern human version of a gene called NOVA1, and others an archaic, extinct version of the gene similar to what Neanderthals and Denisovans carried. NOVA1 is a gene that orchestrates early neurodevelopment. It also initiates the response of brain cells to lead contaminants.

Then, we exposed both sets of organoids to very small, realistic amounts of lead – what ancient humans might have encountered naturally.

The difference was striking. The organoids with the ancient gene showed clear signs of stress. Neural connections didn’t form as efficiently, and key pathways linked to communication and social behaviour were disrupted. The modern-gene organoids, however, were far more resilient.

It seems that somewhere along the evolutionary path, our species may have developed a better built-in protection against the damaging effects of lead.

A story of struggle

The environment – complete with lead exposure – pushed modern human populations to adapt. Individuals with genetic variations that help them resist a threat are more likely to survive and pass those traits to future generations.

In this way, lead exposure may have been one of the many unseen forces that sculpted the human story. By favouring genes that strengthened our brains against environmental stress, it could have subtly shaped the way our neural networks developed, influencing everything from cognition to the early roots of speech and social connection.

None of this changes the fact that lead is a toxic chemical. It remains one of the most damaging substances to our brains.

But evolution often works through struggle – even negative experiences can leave lasting, sometimes beneficial marks on our species.

New context for a modern problem

Understanding our long relationship with lead gives new context to a very modern problem. Despite decades of bans and regulations, lead poisoning remains a global health issue. The most recent estimates from UNICEF show one in three children worldwide still has blood lead levels high enough to cause harm.

Our discovery shows human biology evolved in a world full of chemical challenges. What changed is not the presence of toxic substances, but the intensity of our exposure.

When we look at the past through the lens of science, we don’t just uncover old bones, we uncover ourselves.

In the industrial age, we’ve massively amplified what used to be short and infrequent natural exposure. By studying how our ancestors’ bodies and genes responded to environmental stress, we can learn how to build a healthier, more resilient future.

The Conversation

Renaud Joannes-Boyau receives funding from the Australian Research Council.

Manish Arora receives funding from US National Institutes of Health. He is the founder of Linus Biotechnology, a start-up company that develops biomarkers for various health disorders.

Alysson R. Muotri does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

ref. Human ancestors were exposed to lead millions of years ago, and it shaped our evolution – https://theconversation.com/human-ancestors-were-exposed-to-lead-millions-of-years-ago-and-it-shaped-our-evolution-267318

The world wide web was meant to unite us, but is tearing us apart instead. Is there another way?

Source: The Conversation – Global Perspectives – By George Buchanan, Deputy Dean, School of Computing Technologies, RMIT University

The hope of the world wide web, according to its creator Tim Berners-Lee, was that it would make communication easier, bring knowledge to all, and strengthen democracy and connection. Instead, it seems to be driving us apart into increasingly small and angry splinter groups. Why?

We have commonly blamed online echo chambers, digital spaces filled with people who largely share the same beliefs – or filter bubbles, the idea that algorithms tend to show us content we are likely to agree with.

However, these concepts have both been challenged by a number of studies. A 2022 study led by one of us (Dana), which tracked the social media behaviours of ten respondents, found people often engage with content they disagree with – even going so far as to seek it out.

When you engage with a disagreeable post on social media – whether it’s “rage bait” or something else that offends you – it drives income for the platform. But on a societal scale, it drives antisocial outcomes.

One of the worst of these outcomes is “affective polarisation”, where we like people who think similarly to us, and dislike or resent people who hold different views. Research and global surveys both show this form of polarisation is growing across the world.

Changing the economics of social media platforms would likely reduce online polarisation. But this won’t be possible without intervention from governments, and each of us.

How our views get reinforced online

Social media use has been associated with growing affective polarisation.

Online, we can be influenced by the opinions of people we agree or disagree with – even on topics we had previously been neutral towards. For instance, if there’s an influencer you admire, and they express a view on a new law you hadn’t thought much about, you’re more likely to adopt their viewpoint on it.

When this happens on a large scale, it gradually separates us into ideological tribes that disagree on multiple issues: a phenomenon known as “partisan sorting”.

Research shows our encounters on social media can lead to us developing new views on a topic. It also shows how any searches we do to get more insight can solidify these emerging views, as the results are likely to contain the same language as the original post that gave us the view in the first place.

For example, if you see a post that inaccurately claims taking paracetamol during pregnancy will give your baby autism, and you search for other posts using the key words “paracetamol pregnancy autism”, you will probably get more of the same.

Being in a heightened emotional state has been linked to higher susceptibility to believing false or “fake” content.

Why are we fed polarising content?

This is where the economics of the internet come in. Divisive and emotionally laden posts are more likely to get engagement (such as likes, shares and comments), especially from people who strongly agree or disagree, and from provocateurs. Platforms will then show these posts to more people, and the cycle of engagement continues.

Social media companies leverage our tendency towards divisive content to drive engagement, as this leads to more advertising money for them. According to a 2021 report from the Washington Post, Facebook’s ranking algorithm once treated emoji reactions (including anger) as five times more valuable than “likes”.

Simulation-based studies have also revealed how anger and division drive online engagement. One simulation (in a paper that is yet to be peer reviewed) used bots to show that any platform measuring its success and income by engagement – as all currently do – would be most successful if it boosted divisive posts.
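The engagement-weighted ranking described above can be sketched as a toy scoring function. The weights and posts below are entirely hypothetical, loosely echoing the reported five-to-one weighting of emoji reactions over likes; real platform algorithms are far more complex and not public.

```python
# Toy illustration of engagement-weighted feed ranking (hypothetical data).
# If reactions (including anger) are weighted more heavily than likes,
# a divisive post can outrank a better-liked but calmer one.

def engagement_score(post, like_weight=1, reaction_weight=5, share_weight=10):
    """Score a post by weighted engagement signals."""
    return (post["likes"] * like_weight
            + post["reactions"] * reaction_weight  # includes anger emoji
            + post["shares"] * share_weight)

def rank_feed(posts):
    """Return posts ordered by descending engagement score."""
    return sorted(posts, key=engagement_score, reverse=True)

posts = [
    {"id": "calm",     "likes": 120, "reactions": 5,  "shares": 2},
    {"id": "divisive", "likes": 40,  "reactions": 60, "shares": 15},
]

ranked = rank_feed(posts)
# "divisive" scores 490 vs "calm" at 165, so it tops the feed
# despite having a third of the likes.
```

The point of the sketch is structural: as long as the objective is raw engagement, content that provokes strong reactions is what the ranking rewards.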

Where are we headed?

That said, the current state of social media need not also be its future.

People are now spending less time on social media than they used to. According to a recent report from the Financial Times, time spent on social media peaked in 2022 and has since been declining. By the end of 2024, users aged 16 and older spent 10% less time on social platforms than they did in 2022.

Droves of users are also leaving bigger “mainstream” platforms for ones that reflect their own political leanings, such as the left-wing Bluesky, or the right-wing Truth Social. While this may not help with polarisation, it signals many people are no longer satisfied with the social media status quo.

Internet-fuelled polarisation has also resulted in real costs to government, both in mental health and police spending. Consider recent events in Australia, where online hate and misinformation have played a role in neo-Nazi marches, and the cancellation of events run by the LGBTQIA+ community, due to threats.

For those of us who remain on social media platforms, we can individually work to change the status quo. Research shows greater tolerance for different views among online users can slow down polarisation. We can also give social media companies fewer signals to work from, by not re-sharing or promoting content that’s likely to make others irate.

Fundamentally, though, this is a structural problem. Fixing it will mean reframing the economics of online activity to increase the potential for balanced and respectful conversations, and decrease the reward for producing and/or engaging with rage bait. And this will almost certainly require government intervention.

When other products have caused harm, governments have regulated them and taxed the companies responsible. Social media platforms can also be regulated and taxed. It may be hard, but not impossible. And it’s worth doing if we want a world where we’re not all one opinion away from becoming an outcast.

The Conversation

Dana McKay has received funding from the Australian Research Council, the Australian Digital Health Agency, and Google (this last during her PhD).

George Buchanan does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

ref. The world wide web was meant to unite us, but is tearing us apart instead. Is there another way? – https://theconversation.com/the-world-wide-web-was-meant-to-unite-us-but-is-tearing-us-apart-instead-is-there-another-way-266253

Peter Thiel thinks Greta Thunberg could be the Antichrist. Here’s how three religions actually describe him

Source: The Conversation – Global Perspectives – By Philip C. Almond, Emeritus Professor in the History of Religious Thought, The University of Queensland

In a series of four lectures, Silicon Valley tech billionaire Peter Thiel has been opining on the Antichrist.

Thiel’s amateur riffing identifies the Antichrist with anyone or any institution that he dislikes – from environmental activist Greta Thunberg to governmental attempts to regulate artificial intelligence.

Thiel’s overall definition of the Antichrist “is that of an evil king or tyrant or anti-messiah who appears in the end times”.

Thiel is aligning himself with a long tradition of identifying the Antichrist as a despotic world emperor who would arise at the end of the world.

By the ninth century, influenced by the Christian idea of the Antichrist, Islam and Judaism each had their own Antichrist figures who would come at the end of history – in Islam, al-Dajjal (the Deceiver), in Judaism, Armilus.

The Christian Antichrist

Drawing together 800 years of earlier Antichrist speculations, the Benedictine monk Adso of Montier-en-der wrote the first life of the Antichrist 1,100 years ago. According to Adso, the Antichrist would be a tyrannical evil king who would corrupt all those around him.

The Antichrist was the opposite of everything Christ-like. According to Christianity, Christ was fully human yet absolutely “sin free”. The Antichrist, too, was fully human, but completely “sin full” – not so much a supernatural being who became flesh as a human being who became completely demonised.

Born in Babylon (present day Iraq), the Antichrist was destined to come at the end of the world and rule over the earth from Jerusalem until he and his supporters were defeated by the forces of Christ at the battle of Armageddon.

Al-Dajjal, the Muslim Antichrist

Although the Dajjal does not appear in the Qur’an, he plays an important role in later Muslim understanding of the end of the world in the Hadith literature – the later collections of the sayings and deeds of Muhammad.

Dajjal was large and stout, of a red complexion, blind in one eye that appeared like a swollen grape, and had big curly hair. His most distinctive feature was the word Kafir (disbeliever) written on his forehead.

There is no declaration in the Hadith literature that the Dajjal would be Jewish, but it was said he would be followed by 70,000 Jews of Isfahan in Iran wearing Persian shawls.

According to the longest of the accounts of the Dajjal in the Hadith, called Sahih Muslim (c.850), he would appear somewhere between Syria and Iraq and spread trouble in all directions. He would stay on the earth for one year and ten weeks.

An old manuscript with Arabic writing.
The Hadith literature comprises the later collections of the sayings and deeds of Muhammad. This copy was published in Saudi Arabia in the 16th century.
Wikimedia Commons

For those who accepted him, there would be bountiful food. For those who rejected him, there would be drought and poverty. He would walk through the wasteland and say “bring forth your treasures” and they would appear before him like a swarm of bees. He would then call a young man, strike him with a sword and cut him in pieces.

Then, God would send Jesus Christ. He would descend with his hands resting on the shoulders of two angels at the white minaret on the Eastern side of Damascus. Every non-believer would perish at his breath. He would search for the Dajjal, capture him at the gate of the city of Ludd (Lydda) in Israel and kill him.

Armilus, the Jewish Antichrist

Like al-Dajjal, you would recognise Armilus instantly. According to the medieval Prayer of Rabbi Shimon ben Yohai, he was born in Rome, the child of Satan and a stone in the shape of a beautiful girl.

He was more monstrous in appearance than either the Muslim or the Christian Antichrists. He was a giant, 5.5 metres tall. In several sources, he was reported as having two skulls.

Two men stand on a hill.
Zerubbabel, depicted in this etching from c.1850, received biblical visions of the apocalypse.
Rijksmuseum

One mid-eighth century tradition reported his hair was dyed, another that it was red, and another that his face was hairy and his forehead leprous. Several reports had him as bald. His eyes were variously malformed – small, deep, red and crooked, one eye small and the other big.

According to the earliest Jewish account of Armilus in Sefer Zerubbabel (or the Apocalypse of Zerubbabel), from between the seventh and ninth centuries, his hands hung down to his green feet. Another text had his right arm only as long as a hand and his left one metre long.

Like the Christian and Muslim Antichrists, he too would come in the end times. Sefer Zerubbabel tells us all those who see him will be terrified. But the Messiah will come “and will blow into his face and kill him”. The Messiah will then gather the Jews in Israel and usher in the Messianic age.

The Antichrist now?

The idea of the Antichrist in Judaism, Christianity and Islam has played a significant role in the histories of these three religions, each asserting its belief in the final victory of good over evil.

The image of the Antichrist remains a powerful one. It speaks to the continuing belief among both believers and non-believers that the course of human history is still to be understood in terms of a world-wide struggle between those on the side of God and the rest on the side of evil.

This division of the world into the good and the evil, patriots and terrorists, angels and demons, whether within or between countries, is one that can never bring any peace to the earth. Best if Thiel – and the rest of us – consign it to history.

The Conversation

Philip C. Almond does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

ref. Peter Thiel thinks Greta Thunberg could be the Antichrist. Here’s how three religions actually describe him – https://theconversation.com/peter-thiel-thinks-greta-thunberg-could-be-the-antichrist-heres-how-three-religions-actually-describe-him-267439

Worried about turning 60? Science says that’s when many of us actually peak

Source: The Conversation – Global Perspectives – By Gilles E. Gignac, Associate Professor of Psychology, The University of Western Australia

As your youth fades further into the past, you may start to fear growing older.

But research my colleague and I have recently published in the journal Intelligence shows there’s also very good reason to be excited: for many of us, overall psychological functioning actually peaks between ages 55 and 60.

And knowing this highlights why people in this age range may be at their best for complex problem-solving and leadership in the workforce.

Different types of peaks

There’s plenty of research showing humans reach their physical peak in their mid-twenties to early thirties.

A large body of research also shows that people’s raw intellectual abilities – that is, their capacity to reason, remember and process information quickly – typically starts to decline from the mid-twenties onwards.

This pattern is reflected in the real world. Athletes tend to reach their career peak before 30. Mathematicians often make their most significant contributions by their mid-thirties. Chess champions are rarely at the top of their game after 40.

Yet when we look beyond raw processing power, a different picture emerges.

From reasoning to emotional stability

In our study, we focused on well-established psychological traits beyond reasoning ability that can be measured accurately, represent enduring characteristics rather than temporary states, have well-documented age trajectories, and are known to predict real-world performance.

Our search identified 16 psychological dimensions that met these criteria.

These included core cognitive abilities such as reasoning, memory span, processing speed, knowledge and emotional intelligence. They also included the so-called “big five” personality traits – extraversion, emotional stability, conscientiousness, openness to experience, and agreeableness.

We compiled existing large-scale studies examining the 16 dimensions we identified. By standardising these studies to a common scale, we were able to make direct comparisons and map how each trait evolves across the lifespan.

Peaking later in life

Several of the traits we measured reach their peak much later in life. For example, conscientiousness peaked around age 65. Emotional stability peaked around age 75.

Less commonly discussed dimensions, such as moral reasoning, also appear to peak in older adulthood. And the capacity to resist cognitive biases – mental shortcuts that can lead us to make irrational or less accurate decisions – may continue improving well into the 70s and even 80s.

When we combined the age-related trajectories of all 16 dimensions into a theoretically and empirically informed weighted index, a striking pattern emerged.

Overall mental functioning peaked between ages 55 and 60, before beginning to decline from around 65. That decline became more pronounced after age 75, suggesting that later-life reductions in functioning can accelerate once they begin.

Getting rid of age-based assumptions

Our findings may help explain why many of the most demanding leadership roles in business, politics, and public life are often held by people in their fifties and early sixties. So while several abilities decline with age, they’re balanced by growth in other important traits. Combined, these strengths support better judgement and more measured decision-making – qualities that are crucial at the top.

Despite our findings, older workers face greater challenges re-entering the workforce after job losses. To some degree, structural factors may shape hiring decisions. For example, employers may see hiring someone in their mid-fifties as a short-term investment if retirement at 60 is likely.

In other cases, some roles have mandatory retirement ages. For example, the International Civil Aviation Organisation sets a global retirement age of 65 for international airline pilots. Many countries also require air traffic controllers to retire between 56 and 60. Because these jobs demand high levels of memory and attention, such age limits are often considered justifiable.

However, people’s experiences vary.

Research has found that while some adults show declines in reasoning speed and memory, others maintain these abilities well into later life.

Age alone, then, doesn’t determine overall cognitive functioning. So evaluations and assessments should focus on individuals’ actual abilities and traits rather than age-based assumptions.

A peak, not a countdown

Taken together, these findings highlight the need for more age-inclusive hiring and retention practices, recognising that many people bring valuable strengths to their work in midlife.

Charles Darwin published On the Origin of Species at 50. Ludwig van Beethoven, at 53 and profoundly deaf, premiered his Ninth Symphony. In more recent times, Lisa Su, now 55, led computer company Advanced Micro Devices through one of the most dramatic technical turnarounds in the industry.

History is full of people who reached their greatest breakthroughs well past what society often labels as “peak age”. Perhaps it’s time we stopped treating midlife as a countdown and started recognising it as a peak.

The Conversation

Gilles E. Gignac does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

ref. Worried about turning 60? Science says that’s when many of us actually peak – https://theconversation.com/worried-about-turning-60-science-says-thats-when-many-of-us-actually-peak-267215

Your body can be a portable gym: how to ditch membership fees and expensive equipment

Source: The Conversation – Global Perspectives – By Dan van den Hoek, Senior Lecturer, Clinical Exercise Physiology, University of the Sunshine Coast

Monika Kabise/Unsplash

You don’t need a gym membership, dumbbells, or expensive equipment to get stronger.

Since the beginning of time, we’ve had access to the one piece of equipment that is essential for strength training – our own bodies.

Strength training without the use of external forces and equipment is called “bodyweight training”.

From push-ups and squats to planks and chin-ups, bodyweight training has become one of the most popular ways to exercise because it can be done anywhere – and it’s free.

So, what is it, why does it work and how do you get started?

A man attempts a chin-up on a metal bar in a park outside

Lawrence Crayton/Unsplash

What is bodyweight training?

Bodyweight training simply means you use your own body weight as resistance, instead of external weights such as barbells and dumbbells.

Common exercises include push-ups, squats, lunges and sit-ups.

But bodyweight training can also use static holds that challenge your body without moving, like planks or yoga poses.

Bodyweight training can be used for any muscle group. Typically, we can break down the exercises by movement type and/or body region:

  • upper body: push-ups, pull-ups, handstands
  • lower body: squats, lunges, step-ups, glute bridges
  • core: sit-ups, planks, mountain climbers
  • whole body: burpees, bear crawls, jump squats.

Bodyweight training can also be done with equipment: calisthenics is a style of bodyweight training that uses bars, rings and outdoor gyms.

What are the main forms?

Types of bodyweight training include:

  • calisthenics: often circuit-based (one exercise after another with minimal rest), dynamic and whole-body focused. Calisthenics is safe and effective for improving functional strength, power and speed, especially for older adults
  • yoga: more static or flowing poses with an emphasis on flexibility and balance. Yoga is typically safe and effective for managing and preventing musculoskeletal injuries and supporting mental health
  • Tai Chi: slower, more controlled movements, often with an emphasis on balance, posture and mindful movement
  • suspension training: using straps or rings so your body can be supported in different positions while using gravity and your own bodyweight for resistance. This type of training is suitable for older adults through to competitive athletes
  • resistance bands: although not strictly bodyweight only, resistance bands are a portable, low-cost alternative to traditional weights. They are safe and effective for improving strength, balance, speed and physical function.

What are the pros and cons?

There are various pros and cons to bodyweight exercises.

Pros:

  • builds strength: a 2025 meta-analysis of 102 studies in 4,754 older adults (aged 70 on average) found bodyweight training led to substantial strength gains – which were no different from those with free weights or machines. These benefits aren’t just for older adults, though. Using resistance bands with your bodyweight workout can be as effective as traditional training methods across diverse populations
  • boosts aerobic fitness: a 2021 study showed as little as 11 minutes of bodyweight exercises three times per week was effective for improving aerobic fitness
  • accessible and free: bodyweight training avoids common barriers to exercise such as access to equipment and facilities, which means it can be done anywhere, without a gym membership
  • promotes functional movement: exercises like squats and push-ups mimic everyday actions like rising from a chair or getting up from the floor.

Cons:

  • difficulty progressing over time: typically, we can add weight to an exercise to increase difficulty. For bodyweight training, you need to be creative, such as slowing your tempo or progressing to unilateral (one-sided or single-limb) movements
  • plateau risk: heavy external loads are more effective than bodyweight training for increasing maximal strength. This means if you stick to bodyweight training alone, your strength gains are more likely to plateau than if you use machines or free weights.

Tips for getting started (safely)

As with any form of exercise, it’s always best to speak to a medical professional before starting.

If you are ready to get going, here are some tips:

  • start small: pick simple moves to begin and progress them as you gain strength, confidence and experience
  • focus on form: think quality over quantity. Completing movements with good control and body position is more important than how many you can do with poor control
  • progress gradually: vary the number of sets or repetitions to make your exercise more challenging. You can progress the movements from easier (push-ups on your knees) to harder (decline push-ups) as you get stronger and need more of a challenge
  • mix it up: use a variety of types of bodyweight training as well as targeting different muscle groups and movements
  • seek guidance: reach out to your local exercise professionals or use apps like the Nike Training Club to help guide your planning and progress.

Bodyweight training means you don’t need expensive equipment to improve your health. Whether it’s squats in the park, push-ups at your children’s football game, or yoga at home, your body is a portable gym.

With consistency, creativity and time, bodyweight exercises can help you build strength and fitness.

The Conversation

Dan van den Hoek received research funding from Aus Active (2024) and is a member of Exercise and Sports Science Australia.

Jackson Fyfe does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

ref. Your body can be a portable gym: how to ditch membership fees and expensive equipment – https://theconversation.com/your-body-can-be-a-portable-gym-how-to-ditch-membership-fees-and-expensive-equipment-264036

AI systems and humans ‘see’ the world differently – and that’s why AI images look so garish

Source: The Conversation – Global Perspectives – By T.J. Thomson, Senior Lecturer in Visual Communication & Digital Media, RMIT University

Andres Aleman/Unsplash

How do computers see the world? It’s not quite the same way humans do.

Recent advances in generative artificial intelligence (AI) make it possible to do more things with computer image processing. You might ask an AI tool to describe an image, for example, or to create an image from a description you provide.

As generative AI tools and services become more embedded in day-to-day life, knowing more about how computer vision compares to human vision is becoming essential.

My latest research, published in Visual Communication, uses AI-generated descriptions and images to get a sense of how AI models “see” – and reveals a bright, sensational world of generic images quite different from the human visual realm.

Algorithms see in a very different way to humans.
Elise Racine / Better Images of AI / Emotion: Joy, CC BY

Comparing human and computer vision

Humans see when light waves enter our eyes through the iris, cornea and lens. Light is converted into electrical signals by a light-sensitive surface called the retina inside the eyeball, and then our brains interpret these signals into images we see.

Our vision focuses on key aspects such as colour, shape, movement and depth. Our eyes let us detect changes in the environment and identify potential threats and hazards.

Computers work very differently. They process images by standardising them, inferring the context of an image through metadata (such as time and location information in an image file), and comparing images to other images they have previously learned about. Computers focus on things such as edges, corners or textures present in the image. They also look for patterns and try to classify objects.
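The edge-focused processing described above can be sketched in a few lines. The toy example below (illustrative only – real vision systems use optimised libraries and learned features) convolves a tiny grayscale image with the Sobel horizontal-gradient kernel, a classic way to highlight vertical edges:

```python
# Toy edge detector: convolve a tiny grayscale image with the Sobel
# horizontal-gradient kernel to find vertical edges.

def convolve2d(img, kernel):
    """Valid-mode 2D convolution over a list-of-lists image."""
    kh, kw = len(kernel), len(kernel[0])
    out = []
    for y in range(len(img) - kh + 1):
        row = []
        for x in range(len(img[0]) - kw + 1):
            acc = 0
            for j in range(kh):
                for i in range(kw):
                    acc += img[y + j][x + i] * kernel[j][i]
            row.append(acc)
        out.append(row)
    return out

# A 4x4 image: dark left half, bright right half -> one vertical edge.
image = [
    [0, 0, 255, 255],
    [0, 0, 255, 255],
    [0, 0, 255, 255],
    [0, 0, 255, 255],
]

sobel_x = [[-1, 0, 1],
           [-2, 0, 2],
           [-1, 0, 1]]

edges = convolve2d(image, sobel_x)
# Large values in `edges` mark the dark-to-bright boundary.
```

Pattern-matching and classification then operate on features like these, rather than on the scene as a human perceives it.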

A screenshot of a CAPTCHA test asking a user to select all images with a bus.
Solving CAPTCHAs helps prove you’re human and also helps computers learn how to ‘see’.
CAPTCHA

You’ve likely helped computers learn how to “see” by completing online CAPTCHA tests.

These are typically used to help computers differentiate between humans and bots. But they’re also used to train and improve machine learning algorithms.

So, when you’re asked to “select all the images with a bus”, you’re helping software learn the difference between different types of vehicles as well as proving you’re human.

Exploring how computers ‘see’ differently

In my new research, I asked a large language model to describe two visually distinct sets of human-created images.

One set contained hand-drawn illustrations while the other was made up of camera-produced photographs.

I fed the descriptions back into an AI tool and asked it to visualise what it had described. I then compared the original human-made images to the computer-generated ones.

The resulting descriptions noted the hand-drawn images were illustrations but didn’t mention the other images as being photographs or having a high level of realism. This suggests AI tools see photorealism as the default visual style, unless specifically prompted otherwise.

Cultural context was largely absent from the descriptions. The AI tool either couldn’t or wouldn’t infer cultural context from the presence of, for example, Arabic or Hebrew writing in the images. This underscores the dominance of some languages, like English, in AI tools’ training data.

While colour is vital to human vision, it too was largely ignored in the AI tools’ image descriptions. Visual depth and perspective were also largely ignored.

The AI images were more boxy than the hand-drawn illustrations, which used more organic shapes.

Two similar but different black and white illustrations of a bookshelf on wheels.
The AI-generated images were much more boxy than the hand-drawn illustrations, which used more organic shapes and had a different relationship between positive and negative space.
Left: Medar de la Cruz; right: ChatGPT

The AI images were also much more saturated than the source images: they contained brighter, more vivid colours. This reveals the prevalence of stock photos, which tend to be more “contrasty”, in AI tools’ training data.
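Saturation differences like this are easy to quantify. Here is a minimal sketch using only Python’s standard library, with made-up pixel values standing in for a muted photographic palette and a vivid, “contrasty” AI-style one:

```python
import colorsys

def mean_saturation(pixels):
    """Average HSV saturation of an iterable of (r, g, b) 0-255 tuples."""
    sats = [colorsys.rgb_to_hsv(r / 255, g / 255, b / 255)[1]
            for r, g, b in pixels]
    return sum(sats) / len(sats)

# Invented palettes: a muted photo-like set vs. a vivid AI-style set.
muted = [(120, 110, 100), (90, 95, 85), (140, 135, 125)]
vivid = [(255, 40, 20), (20, 220, 255), (250, 230, 10)]

print(mean_saturation(muted))  # low
print(mean_saturation(vivid))  # high
```

Averaging the saturation channel over a whole image is one simple way to compare human-made source images against their AI regenerations.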

The AI images were also more sensationalist. A single car in the original image became one of a long column of cars in the AI version. AI seems to exaggerate details not just in text but also in visual form.

A photo of people with guns driving through a desert and a generated photorealistic image of several cars containing people with guns driving through a desert.
The AI-generated images were more sensationalist and contrasty than the human-created photographs.
Left: Ahmed Zakot; right: ChatGPT

The generic nature of the AI images means they can be used in many contexts and across countries. But the lack of specificity also means audiences might perceive them as less authentic and engaging.

Deciding when to use human or computer vision

This research supports the notion that humans and computers “see” differently. Knowing when to rely on computer or human vision to describe or create images can be a competitive advantage.

While AI-generated images can be eye-catching, they can also come across as hollow upon closer inspection. This can limit their value.

Images are adept at sparking an emotional reaction, and audiences might find human-created images that authentically reflect specific conditions more engaging than computer-generated attempts.

However, the capabilities of AI can make it an attractive option for quickly labelling large data sets and helping humans categorise them.

Ultimately, there’s a role for both human and AI vision. Knowing more about the opportunities and limits of each can help keep you safer, more productive, and better equipped to communicate in the digital age.

The Conversation

T.J. Thomson receives funding from the Australian Research Council. He is an affiliate with the ARC Centre of Excellence for Automated Decision Making & Society.

ref. AI systems and humans ‘see’ the world differently – and that’s why AI images look so garish – https://theconversation.com/ai-systems-and-humans-see-the-world-differently-and-thats-why-ai-images-look-so-garish-260178

The 2025 Nobel economics prize honours economic creation and destruction

Source: The Conversation – Global Perspectives – By John Hawkins, Head, Canberra School of Government, University of Canberra

Economists Joel Mokyr, Philippe Aghion, and Peter Howitt. Ill. Niklas Elmehed © Nobel Prize Outreach

Three economists working in the area of “innovation-driven economic growth” have won this year’s Nobel Memorial Prize in Economic Sciences.

Half of the 11 million Swedish kronor (about A$1.8 million) prize was awarded to Joel Mokyr, a Dutch-born economic historian at Northwestern University.

The other half was jointly awarded to Philippe Aghion, a French economist at Collège de France and INSEAD, and Peter Howitt, a Canadian economist at Brown University.
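The split works out as follows – a back-of-the-envelope calculation using only the article’s own figures, with the exchange rate being the rough one implied by “11 million kronor ≈ A$1.8 million”:

```python
# Prize split as described: half to Mokyr, the other half shared
# equally by Aghion and Howitt. The AUD conversion is the article's
# rough figure, used only for illustration.

PRIZE_SEK = 11_000_000
AUD_PER_SEK = 1_800_000 / 11_000_000  # implied by the article's figures

mokyr_sek = PRIZE_SEK / 2
aghion_sek = howitt_sek = PRIZE_SEK / 4

print(round(mokyr_sek * AUD_PER_SEK))  # roughly A$900,000 for Mokyr
```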

Collectively, the trio’s work has examined the importance of innovation in driving sustainable economic growth. It has also highlighted that in dynamic economies, old firms die as new firms are being born.

Innovation drives sustainable growth

As noted by the Royal Swedish Academy of Sciences, economic growth has lifted billions of people out of poverty over the past two centuries. While we take this as normal, it is actually very unusual in the broad sweep of history.

The period since around 1800 is the first in human history when there has been sustained economic growth. This warns us we should not be complacent. Poor policy could see economies stagnate again.

One of the Nobel judges gave the example of Sweden and the United Kingdom, where there was little improvement in living standards in the four centuries between 1300 and 1700.

Mokyr’s work showed that prior to the Industrial Revolution, innovations were more a matter of trial and error than being based on scientific understanding. He has argued that sustained economic growth would not emerge in:

a world of engineering without mechanics, iron-making without metallurgy, farming without soil science, mining without geology, water-power without hydraulics, dyemaking without organic chemistry, and medical practice without microbiology and immunology.

Mokyr gives the example of sterilising surgical instruments. This had been advocated in the 1840s or earlier. But surgeons were offended by the suggestion they might be transmitting diseases. It was only after the work of Louis Pasteur and Joseph Lister in the 1860s that the role of germs was understood and sterilisation became common.

Mokyr emphasised the importance of society being open to new ideas. As the Nobel committee put it:

practitioners, ready to engage with science, along with a societal climate embracing change, were, according to Mokyr, key reasons why the Industrial Revolution started in Britain.

Winners and losers

This year’s other two laureates, Aghion and Howitt, recognised that innovations create both winning and losing firms. In the US, about 10% of firms enter and 10% leave the market each year. Promoting economic growth requires an understanding of both processes.

Their 1992 article built on earlier work on the concept of “endogenous growth” – the idea that economic growth is generated by factors inside an economic system, not the result of forces that impinge from outside. This earned a Nobel prize for Paul Romer in 2018.

It also drew on earlier work on “creative destruction” by Joseph Schumpeter.

The model created by Aghion and Howitt implies governments need to be careful how they design subsidies to encourage innovation.

If companies think that any innovation they invest in is just going to be overtaken (meaning they would lose their advantage), they won’t invest as much in innovation.

Their work also supports the idea governments have a role in supporting and retraining those workers who lose their jobs in firms that are displaced by more innovative competitors.

This will build political support for policies that encourage economic growth, as well.

‘Dark clouds’ on the horizon?

The three laureates all favour economic growth, in contrast to growing concerns about the impact of endless growth on the planet.

In an interview after the announcement, however, Aghion called for carbon pricing to make economic growth consistent with reducing greenhouse gas emissions.

He also warned about the gathering “dark clouds” of tariffs, saying that creating barriers to trade could reduce economic growth.

And he said we need to ensure today’s innovators do not stifle future innovators through anti-competitive practices.

The newest Nobel prize

The economics prize was not one of the five originally nominated in Swedish chemist Alfred Nobel’s will in 1895. It is formally called the Sveriges Riksbank Prize in Economic Sciences in Memory of Alfred Nobel. It was first awarded in 1969.

The awards to Mokyr and Howitt continue the pattern of the economics prize being dominated by researchers working at US universities.

It also continues the pattern of over-representation of men. Only three of the 99 economics laureates have been women.

Arguably, economics professor Rachel Griffith, rather than Mokyr, could have shared the prize with Aghion and Howitt this year. She co-authored the book Competition and Growth with Aghion, and co-wrote an article on competition with both of them.

The Conversation

John Hawkins does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

ref. The 2025 Nobel economics prize honours economic creation and destruction – https://theconversation.com/the-2025-nobel-economics-prize-honours-economic-creation-and-destruction-267212

How we sharpened the James Webb telescope’s vision from a million kilometres away

Source: The Conversation – Global Perspectives – By Benjamin Pope, Associate Professor, School of Mathematical and Physical Sciences, Macquarie University

A ‘selfie’ taken during Webb’s testing on Earth. Ball Aerospace

After Christmas dinner in 2021, our family was glued to the television, watching the nail-biting launch of NASA’s US$10 billion (AU$15 billion) James Webb Space Telescope. There had not been such a leap forward in telescope technology since Hubble was launched in 1990.

En route to its deployment, Webb had to successfully navigate 344 potential points of failure. Thankfully, the launch went better than expected, and we could finally breathe again.

Six months later, Webb’s first images were revealed, of the most distant galaxies yet seen. However, for our team in Australia, the work was only beginning.

We would be using Webb’s highest-resolution mode, called the aperture masking interferometer or AMI for short. It’s a tiny piece of precisely machined metal that slots into one of the telescope’s cameras, enhancing its resolution.

Our results from painstakingly testing and enhancing AMI have now been released on the open-access archive arXiv in a pair of papers. We can finally present its first successful observations of stars, planets, moons and even black hole jets.

Working with an instrument a million kilometres away

Hubble started its life seeing out of focus – its mirror had been ground precisely, but incorrectly. By looking at known stars and comparing the ideal and measured images (much as an optometrist does), it was possible to figure out a “prescription” for this optical error and design a lens to compensate.

The correction required seven astronauts to fly up on the Space Shuttle Endeavour in 1993 to install the new optics. Hubble orbits Earth just a few hundred kilometres above the surface, and can be reached by astronauts.

A moody image of the honeycomb-like mirror layout still in a lab with people in protective gear inspecting it.
The primary mirror of the Webb telescope consists of 18 precisely ground hexagonal segments.
NASA/Chris Gunn

By contrast, Webb is roughly 1.5 million kilometres away – we can’t visit and service it, and need to be able to fix issues without changing any hardware.

This is where AMI comes in. It is the only Australian hardware on board, designed by astronomer Peter Tuthill.

It was put on Webb to diagnose and measure any blur in its images. Even nanometres of distortion in Webb’s 18 hexagonal primary mirrors and many internal surfaces will blur the images enough to hinder the study of planets or black holes, where sensitivity and resolution are key.

AMI filters the light with a carefully structured pattern of holes in a simple metal plate, to make it much easier to tell if there are any optical misalignments.

A metal plate with a hexagonal pattern on it, and several hexagon shaped holes.
AMI allows for a precise test pattern that can help correct any issues with JWST’s focus.
Anand Sivaramakrishnan/STScI

Hunting blurry pixels

We wanted to use this mode to observe the birth places of planets, as well as material being sucked into black holes. But before any of this, AMI showed Webb wasn’t working entirely as hoped.

At very fine resolution – at the level of individual pixels – all the images were slightly blurry due to an electronic effect: brighter pixels leaking into their darker neighbours.

This is not a mistake or flaw, but a fundamental feature of infrared cameras that turned out to be unexpectedly serious for Webb.

This was a dealbreaker for seeing distant planets many thousands of times fainter than their stars a few pixels away: my colleagues quickly showed that AMI’s detection limits were more than ten times worse than hoped.
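The effect can be illustrated with a toy model. In the sketch below, each pixel bleeds a fixed fraction of its signal into its neighbours along one dimension; the 10% leakage rate is invented for illustration and is not Webb’s measured value:

```python
# Toy 1D model of inter-pixel charge leakage: each pixel keeps most of
# its signal but bleeds a fixed fraction into each neighbour.
# LEAK = 0.10 is illustrative only, not Webb's measured rate.

LEAK = 0.10

def apply_leakage(row):
    out = [0.0] * len(row)
    for i, value in enumerate(row):
        out[i] += value * (1 - 2 * LEAK)   # signal kept by the pixel
        if i > 0:
            out[i - 1] += value * LEAK     # bleed to the left neighbour
        if i + 1 < len(row):
            out[i + 1] += value * LEAK     # bleed to the right neighbour
    return out

# A bright star on one pixel next to a very faint companion.
scene = [0.0, 1000.0, 1.0, 0.0]
blurred = apply_leakage(scene)
# The bright pixel's leakage (100 units) swamps the 1-unit companion.
```

Even this crude model shows why the effect matters: the companion’s true signal is a hundred times smaller than the light leaking onto its pixel from the star next door.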

So, we set out to correct it.

How we sharpened Webb’s vision

In a new paper led by University of Sydney PhD student Louis Desdoigts, we looked at stars with AMI to learn and correct the optical and electronic distortions simultaneously.

We built a computer model to simulate AMI’s optical physics, with flexibility about the shapes of the mirrors and apertures and about the colours of the stars.

We connected this to a machine learning model to represent the electronics with an “effective detector model” – where we only care about how well it can reproduce the data, not about why.

After training and validation on some test stars, this setup allowed us to calculate and undo the blur in other data, restoring AMI to full function. It doesn’t change what Webb does in space, but rather corrects the data during processing.
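The actual correction uses a trained “effective detector model”, but the underlying idea – inverting a detector response during processing, without touching the hardware – can be illustrated with a much simpler stand-in. Here, assuming a toy 1D leakage model is known exactly, a Van Cittert-style iteration repeatedly nudges the estimate until re-blurring it reproduces the observed data:

```python
# Illustrative stand-in for data-processing deblurring: invert a known
# toy leakage model by iteration. All numbers are invented.

LEAK = 0.10

def apply_leakage(row):
    """Forward model: each pixel bleeds LEAK into each neighbour."""
    out = [0.0] * len(row)
    for i, value in enumerate(row):
        out[i] += value * (1 - 2 * LEAK)
        if i > 0:
            out[i - 1] += value * LEAK
        if i + 1 < len(row):
            out[i + 1] += value * LEAK
    return out

def deblur(observed, iterations=200):
    """Van Cittert iteration: correct the estimate by the residual
    between the observed data and the re-blurred estimate."""
    estimate = list(observed)
    for _ in range(iterations):
        residual = [o - m for o, m in zip(observed, apply_leakage(estimate))]
        estimate = [e + r for e, r in zip(estimate, residual)]
    return estimate

true_scene = [0.0, 1000.0, 1.0, 0.0]
observed = apply_leakage(true_scene)
recovered = deblur(observed)
# `recovered` converges back to the true scene: the faint companion
# next to the bright star becomes visible again.
```

The real pipeline is far more sophisticated – the detector response is learned, not assumed – but the principle is the same: if you can model the blur, you can undo it in software.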

It worked beautifully. The star HD 206893 hosts a faint planet and the reddest-known brown dwarf (an object between a star and a planet). Both were already known, but out of reach for Webb before this correction was applied. Now, both little dots popped out clearly in our new maps of the system.

A dark circle on a grey background showing two spots of light labelled B and C.
A map of the HD 206893 system. The colourful spots show the likelihood of there being an object at that position, while B and C show the known positions of the companion planets. The wider blob means the position of C is less precisely measured, as it’s much fainter than B. This is simplified from the full version presented in the paper.
Desdoigts et al., 2025

This correction has opened the door to using AMI to prospect for unknown planets at previously impossible resolutions and sensitivities.

It works not just on dots

In a companion paper by University of Sydney PhD student Max Charles, we applied this correction not just to dots – even when those dots are planets – but to forming complex images at the highest resolution achieved with Webb. We revisited well-studied targets that push the limits of the telescope, testing its performance.

A red sphere with four brighter spots clearly visible.
Jupiter’s moon Io, seen by AMI on Webb. Four bright spots are visible; they are volcanoes, exactly where expected, and rotate with Io over the hour-long timelapse.
Max Charles

With the new correction, we brought Jupiter’s moon Io into focus, clearly tracking its volcanoes as it rotates over an hour-long timelapse.

As seen by AMI, the jet launched from the black hole at the centre of the galaxy NGC 1068 closely matched images from much-larger telescopes.

Finally, AMI can sharply resolve a ribbon of dust around a pair of stars called WR 137, a faint cousin of the spectacular Apep system, lining up with theory.

The code built for AMI is a demonstration for much more complex cameras on Webb and its successor, the Roman Space Telescope. These tools demand optical calibration so fine – a fraction of a nanometre – that it is beyond the capacity of any known materials.

Our work shows that if we can measure, control, and correct the materials we do have to work with, we can still hope to find Earth-like planets in the far reaches of our galaxy.

The Conversation

Benjamin Pope receives funding from the Australian Research Council and the Big Questions Institute.

ref. How we sharpened the James Webb telescope’s vision from a million kilometres away – https://theconversation.com/how-we-sharpened-the-james-webb-telescopes-vision-from-a-million-kilometres-away-262510

Trump’s ‘shock and awe’ foreign policy achieved a breakthrough in Gaza – but is it sustainable?

Source: The Conversation – Global Perspectives – By Lester Munson, Non-Resident Fellow, United States Studies Centre, University of Sydney

US President Donald Trump will visit Israel and Egypt this week to oversee the initial implementation of his Gaza peace agreement, which many hope will permanently end the two-year war in the strip.

Should the peace hold, the Gaza accord will be Trump’s greatest foreign policy achievement, even surpassing the Abraham Accords of his first term that normalised relations between Israel and several Arab countries.

Given the speed with which the Trump administration has helped to negotiate the ceasefire, it is an opportune moment to assess Trump’s frenetic foreign policy at the start of his second presidential term.

The “Trump Doctrine” – the unconventional, high-energy and fast-moving approach to world affairs now pursued by the United States – has had some significant achievements, most notably in Gaza. But are these breakthroughs sustainable, and can his foreign policy approach be effective with larger geostrategic challenges?

A leaner decision-making structure

One way the Trump administration’s approach is different from previous administrations – including Trump 1.0 – is in his leaner organisation, which is more capable of implementing quick action.

Trump has revamped the national security decision-making structure in surprising ways. His secretary of state, Marco Rubio, now serves concurrently as his national security adviser. Rubio has also reduced the staff of the National Security Council from around 350 to about 150 – still larger than it was under many presidents before Barack Obama.

There have been some missteps. Trump’s first national security adviser, Michael Waltz, tried to accommodate Trump’s need for speedy decision-making by establishing group chats on the Signal app for the small group of agency heads and senior advisers who advise the president. This rightly caused concerns about the security of classified information – especially after Waltz mistakenly added a journalist to a chat group – and he was subsequently ousted.

With a much smaller staff now, Rubio is implementing a more sustainable method for the president to communicate with his top advisers, mostly through Rubio himself and Trump’s powerful chief of staff, Susie Wiles.

Rubio has also led a top-down revamp of the bureaucratic foreign policy structures. Dozens of offices were eliminated, and hundreds of career professionals were laid off. Numerous political appointments, including ambassadorships, remain unfilled.

Many bureaus are now headed not by Senate-confirmed assistant secretaries, but by career foreign and civil service “senior bureau officials”. This keeps the number of politically appointed policymakers rather small – mostly in Rubio’s direct orbit – while keeping professional “implementers” in key positions to execute policy.

A reliance on special envoys

To set the stage for his own deal-making, Trump also uses his longtime friend and multipurpose envoy, Steve Witkoff, for the highest-level conversations. Without any Senate confirmation, Witkoff has become Trump’s most trusted voice in Ukraine, Gaza and several other foreign policy negotiations.

Massad Boulos, another unconfirmed Trump envoy, conducts second-tier negotiations, mostly in Africa but also parts of the Middle East.

Trump’s son-in-law, Jared Kushner, played a key role in the recent Gaza accord as well. This has raised questions of conflicts of interest. However, Trump’s emphasis on deal-oriented businessmen in diplomatic roles is intentional.

The approach appears to be very welcome in some quarters, particularly in the Middle East, where conventional diplomacy was fraught with much historical baggage.

A ‘shock and awe’ approach

On top of all this, of course, is Trump’s style and showmanship.

His most controversial statements – for example, demanding US ownership of Greenland – may seem absurd and offensive at first. However, there are genuine national security concerns over China’s role in the Arctic and the possibility an independent Greenland might serve as a wedge in a critical region. From this standpoint, establishing some US control over Greenland’s foreign policy is an entirely rational proposition.

What is unique to Trump is the pace, breadth and intensity of his personal diplomacy.

Trump’s relationship with Israeli Prime Minister Benjamin Netanyahu is a case in point. While Trump embraces Netanyahu in public and green-lights all of Israel’s military actions, he’s willing to say no to the Israeli leader in private. For example, Trump intervened to prevent Israel from annexing the West Bank immediately before the Gaza breakthrough.

In addition, Trump’s personal charm offensive with Arab leaders in the region – his first major foreign trip after Pope Francis’ funeral was to Qatar, Saudi Arabia and the United Arab Emirates – established a coalition to pressure Hamas to say yes to the deal.

It is a “shock and awe” diplomatic approach: everything, everywhere, all at once. Previous agreements and norms (including those set by Trump himself) are downplayed or discarded in favour of action in the moment.

Is there a longer-term vision?

Of course, there are downsides to the Trump approach. The past cannot be ignored, especially in the Middle East. And many previous agreements and norms were there for a reason – they worked, and they helped stabilise otherwise chaotic situations.

It very much remains to be seen whether Trump’s approach can lead to a long-term solution in Gaza. Many critics have pointed out the vagueness in his 20-point peace plan, which could cause it to fall apart at any moment.

It is not unusual for a second-term American president like Trump to focus on foreign policy, where Congress has a highly limited role and the president has wide latitude. But American presidents usually focus on achieving one big thing. Think Obama’s nuclear deal with Iran or George W. Bush’s troop surge in Iraq.

Today, in addition to the Gaza accord, Trump is pursuing separate diplomatic deals with all four major American adversaries: China, Russia, Iran and North Korea.

The logic of this is to put direct stress on the alliance of bad actors. Does Chinese leader Xi Jinping trust Russian President Vladimir Putin enough to resist Trump’s entreaties, and vice versa? How much are Russia and China worried about North Korean leader Kim Jong Un cutting a deal with Washington?

The true test of the Trump Doctrine will not be the success of the Gaza accord, but whether he can build on it to drive the West’s adversaries – mainly China and Russia – apart from each other and into weaker strategic positions.

The Conversation

Lester Munson receives funding from the US Studies Centre at the University of Sydney. He is affiliated with BGR Group, a Washington DC governmental affairs firm and was previously Republican staff in the US Congress and in the George W. Bush administration.

ref. Trump’s ‘shock and awe’ foreign policy achieved a breakthrough in Gaza – but is it sustainable? – https://theconversation.com/trumps-shock-and-awe-foreign-policy-achieved-a-breakthrough-in-gaza-but-is-it-sustainable-267316

Israelis are hailing Trump as Cyrus returned – but who was Cyrus the Great, anyway?

Source: The Conversation – Global Perspectives – By Peter Edwell, Associate Professor in Ancient History, Macquarie University

With both parties agreeing to terms, the first stages of a peace plan in Gaza are in motion. US President Donald Trump is credited (especially in Israel and the US) with having played a vital role in this development.

But why have banners appeared in Israel depicting Trump with the caption “Cyrus the Great is alive”?

Who was Cyrus and what is he renowned for?

Founder of the Achaemenid Persian empire

Cyrus the Great was the founder of the Achaemenid Persian empire (550 BCE to 330 BCE).

Under Cyrus and his successors, the Persian empire stretched across a vast array of territories, including Iran, Mesopotamia (which includes parts of modern-day Turkey, Syria and Iraq), Egypt, Asia Minor (which is mostly modern-day Turkey) and Central Asia.

A key moment in this imperial expansion was Cyrus’ capture of Babylon and its surrounding territory, Babylonia (mostly in modern-day Iraq), in 539 BCE.

The Babylonian king, Nabonidus, controlled large sections of Mesopotamia and northern Arabia. A surviving clay tablet called the Nabonidus chronicle outlines the alienation of his subjects. Unpopular religious reforms and his long absences from Babylon were among the grievances.

Cuneiform tablet with part of the Nabonidus Chronicle (556-530s BC)
A clay tablet called the Nabonidus chronicle describes Nabonidus’ despotic tendencies.
© The Trustees of the British Museum, CC BY-NC-SA

Soon after he defeated Nabonidus, Cyrus issued a decree freeing captive Jews (and others) in Babylon.

A comparatively humane approach to governing

Nebuchadnezzar II, king of the Babylonian empire from 605–562 BCE, had captured the kingdom of Judah (in modern-day Israel and Palestinian territories) in 587 BCE.

Due to rebellions, he ransacked Jerusalem and deported thousands of Jews to Babylon.

When Cyrus freed the Babylonian Jewish exiles almost 50 years later, many returned to Judah.

The biblical book of Ezra records the decree.

Cyrus, according to this version of the story, had been commanded by God to rebuild a temple at Jerusalem that Nebuchadnezzar II had destroyed. The decree released the Jewish exiles from Babylon to return to Jerusalem and rebuild the temple.

In the Old Testament book of Isaiah, Cyrus was chosen by God to free the Jews of Babylon.

For this reason, Cyrus became (and remains) a legendary figure in Jewish history, though he was not Jewish himself. He was more likely a devotee of Zoroastrianism, which was fervently embraced by his successors, including Darius I (who ruled 522-486 BCE).

An ancient clay tablet from Babylon suggests Cyrus’ occupation of Babylon was peaceful. It confirms the return of exiles, but not specifically Jewish ones. Known today as the “Cyrus cylinder”, it is sometimes referred to as an ancient declaration of human rights. A replica of the tablet is on permanent display at the UN headquarters in New York.

Cyrus was remembered in antiquity for what, at the time, was a comparatively humane approach to governing.

The Greek writer Xenophon, who wrote the Cyropedia (The Education of Cyrus) in about 370 BCE, noted that:

subjects he cared for and cherished as a father might care for his children, and they who came beneath his rule reverenced him like a father.

The benevolent and altruistic reputation of Cyrus was developed in his own reign and later. As one of history’s “winners”, Cyrus would be well-pleased with the propaganda that has continued to develop about his reign.

Conquest and wealth

Cyrus was, of course, a great warrior and strategist. One of his most famous conquests was the kingdom of Lydia (modern southwest Turkey) in about 546 BCE. Its king, Croesus, was known for his incredible wealth.

Cyrus initially ordered Croesus to be burned alive. But when the god Apollo sent a rain storm, Croesus was spared, according to the 5th century BCE Greek historian Herodotus. He then became a trusted advisor of Cyrus, adding to the Persian king’s reputation for benevolence.

Cyrus was also known for large-scale construction projects. The most famous was the palace complex at his capital, Pasargadae (modern southern Iran).

The palace and other buildings were set in the midst of magnificent paradise gardens.

Today, the most intact building at Pasargadae is the tomb of Cyrus. It has become a powerful symbol of Iranian and Persian nationalism. The legacy of Cyrus is still significant in Iran today.

So, the banners comparing Trump to Cyrus appear to be drawing on the story of Cyrus’ role in freeing Jewish captives. In this framing, Gaza is cast as Babylon and Trump as the new Cyrus.

One wonders what Cyrus the Great would think of the comparison.

The Conversation

Peter Edwell receives funding from the Australian Research Council.

ref. Israelis are hailing Trump as Cyrus returned – but who was Cyrus the Great, anyway? – https://theconversation.com/israelis-are-hailing-trump-as-cyrus-returned-but-who-was-cyrus-the-great-anyway-267312