Our quest to find a truly Earth-like planet in deep space

Source: The Conversation – Global Perspectives – By Christopher Watson, Professor, Astrophysics Research Centre, School of Mathematics and Physics, Queen’s University Belfast

Nasa animation depicting the first 5,000 exoplanets to have been discovered, up to March 2022. M. Russo and A. Santaguida/Nasa-JPL

On October 6 1995, at a scientific meeting in Florence, Italy, two Swiss astronomers made an announcement that would transform our understanding of the universe beyond our solar system. Michel Mayor and his PhD student Didier Queloz, working at the University of Geneva, announced they had detected a planet orbiting a star other than the Sun.

The star in question, 51 Pegasi, lies about 50 light years away in the constellation Pegasus. Its companion – christened 51 Pegasi b – was unlike anything in the textbooks on how planets should look. This was a gas giant with a mass at least half that of Jupiter, circling its star in just over four days. It was so close to the star (1/20th of Earth’s distance from the Sun, well inside Mercury’s orbit) that the planet’s atmosphere would be like a furnace, with temperatures topping 1,000°C.

The instrument behind the discovery was Elodie, a spectrograph that had been installed two years earlier at the Haute-Provence observatory in southern France. Designed by a Franco-Swiss team, Elodie split starlight into a spectrum of different colours, revealing a rainbow etched with fine dark lines. These lines can be thought of as a “stellar barcode”, providing details on the chemistry of other stars.

What Mayor and Queloz spotted was 51 Pegasi’s barcode sliding rhythmically back and forth in this spectrum every 4.23 days – a telltale sign that the star was being tugged to and fro by the gravity of an otherwise unseen companion hidden in its glare.

After painstakingly ruling out other explanations, the astronomers finally decided that the variations were due to a gas giant in a close-in orbit around this Sun-like star. The front page of the Nature journal in which their paper was published carried the headline: “A planet in Pegasus?”

The discovery baffled scientists, and the question-mark on Nature’s front cover reflected initial scepticism. Here was a purported giant planet next to its star, with no known mechanism for forming a world like this in such a fiery environment.

While the signal was confirmed by other teams within weeks, alternative explanations for it lingered for almost three years before being finally ruled out. Not only was 51 Pegasi b the first planet discovered orbiting a Sun-like star outside our Solar System, it also represented an entirely new type of planet. The term “hot Jupiter” was later coined to describe such worlds.

Diagram showing 51 Pegasi b to be 50% larger than Jupiter, and 51 Pegasi to be 23% larger than the Sun.

NASA/JPL-Caltech

This discovery opened the floodgates. In the 30 years since, more than 6,000 exoplanets (the term for planets outside our Solar System) and exoplanet candidates have been catalogued.

Their variety is staggering. Not only hot but ultra-hot Jupiters, with dayside temperatures exceeding 2,000°C and orbits of less than a day. Worlds that orbit not one but two stars, like Tatooine from Star Wars. Strange “super-puff” gas giants larger than Jupiter but with a fraction of the mass. Chains of small rocky planets piled up in tight orbits.

The discovery of 51 Pegasi b triggered a revolution and, in 2019, landed Mayor and Queloz a Nobel prize. We can now infer that most stars have planetary systems. And yet, of the thousands of exoplanets found, we have yet to find a planetary system that resembles our own.




The quest to find an Earth twin – a planet that truly resembles Earth in size, mass and temperature – continues to drive modern-day explorers like us to search for more undiscovered exoplanets. Our expeditions may not take us on death-defying voyages and treks like the past legendary explorers of Earth, but we do get to visit beautiful, mountain-top observatories often located in remote areas around the world.

We are members of an international consortium of planet hunters that built, operate and maintain the Harps-N spectrograph, mounted on the Telescopio Nazionale Galileo on the beautiful Canary island of La Palma. This sophisticated instrument allows us to rudely interrupt the journey of starlight that may have been travelling unimpeded, at 670 million miles per hour, for decades or even millennia.

Each new signal has the potential to bring us closer to understanding how common planetary systems like our own may (or may not) be. In the background lies the possibility that one day, we may finally detect another planet like Earth.

The origins of exoplanet study

Up until the mid-1990s, our Solar System was the only set of planets humanity had ever known. Every theory about how planets formed and evolved stemmed from these nine incredibly closely spaced data points (which became eight when Pluto was demoted in 2006, after the International Astronomical Union agreed a new definition of a planet).

All of these planets revolve around just one star out of the estimated 10¹¹ (roughly 100 billion) in our galaxy, the Milky Way – which is in turn one of some 10¹¹ galaxies throughout the universe. So, trying to draw conclusions from the planets in our Solar System alone was a bit like aliens trying to understand human nature by studying students living together in one house. But that didn’t stop some of the greatest minds in history speculating on what lay beyond.

The ancient Greek philosopher Epicurus (341-270 BC) wrote: “There is an infinite number of worlds – some like this world, others unlike it.” This view was based not on astronomical observation but on his atomist philosophy. If the universe was made up of an infinite number of atoms then, he concluded, it was impossible not to have other planets.

Epicurus clearly understood what this meant in terms of the potential for life developing elsewhere:

We must not suppose that the worlds have necessarily one and the same shape. Nobody can prove that in one sort of world there might not be contained – whereas in another sort of world there could not possibly be – the seeds out of which animals and plants arise and all the rest of the things we see.

In contrast, at roughly the same time, fellow Greek philosopher Aristotle (384-322 BC) was proposing his geocentric model of the universe, which had the Earth immobile at its centre with the Moon, Sun and known planets orbiting around us. In essence, the Solar System as Aristotle conceived it was the entire universe. In On the Heavens (350 BC), he argued: “It follows that there cannot be more worlds than one.”

This view that planets were rare in the universe persisted for 2,000 years. Sir James Jeans, one of the world’s top mathematicians and an influential physicist and astronomer of his time, advanced his tidal hypothesis of planet formation in 1916. According to this theory, planets formed when two stars passed so close that the encounter pulled streams of gas off the stars into space, which later condensed into planets. The rarity of such close cosmic encounters in the vast emptiness of space led Jeans to believe that planets must be rare, or – as was reported in his obituary – “that the solar system might even be unique in the universe”.


But by then, understanding of the scale of the universe was slowly changing. In the “Great Debate” of 1920, held at the Smithsonian Museum of Natural History in Washington DC, American astronomers Harlow Shapley and Heber Curtis clashed over whether the Milky Way was the entire universe, or just one of many galaxies. The evidence began to point to the latter, as Curtis had argued for. This realisation – that the universe contained not just billions of stars, but billions of galaxies each containing billions of stars – began to affect even the most pessimistic predictors of planetary prevalence.

In the 1940s, two things caused the scientific consensus to pivot dramatically. First, Jeans’ tidal hypothesis did not stand up to scientific scrutiny. The leading theories now had planet formation as a natural byproduct of star formation itself, opening up the potential for all stars to host planets.

Then in 1943, claims emerged of planets orbiting the stars 70 Ophiuchi and 61 Cygni – two relatively nearby star systems visible to the naked eye. Both claims were later shown to be false positives, most likely due to uncertainties in the telescopic observations possible at the time – but they nonetheless greatly influenced planetary thinking. Suddenly, billions of planets in the Milky Way was considered a genuine scientific possibility.

For us, nothing highlights this change in mindset more than an article written for Scientific American in July 1943 by the influential American astronomer Henry Norris Russell. Whereas two decades earlier, Russell had predicted that planets “should be infrequent among the stars”, now the title of his article was: “Anthropocentrism’s Demise. New Discoveries Lead to the Probability that There Are Thousands of Inhabited Planets in our Galaxy”.

Strikingly, Russell was not merely making a prediction about any old planets, but inhabited ones. The burning question was: where were they? It would take another half-century to begin finding out.

View of two hi-tech telescopes with the sea beyond.
The Harps-N spectrograph is mounted on the Telescopio Nazionale Galileo (left) in La Palma, Canary Islands.
lunamarina/Shutterstock

How to detect an exoplanet

When we observe myriad stars through La Palma’s Italian-built Galileo telescope using our Harps-N spectrograph, it is amazing to consider how far we have come since Mayor and Queloz announced their discovery of 51 Pegasi b in 1995. These days, we can effectively measure the masses of not just Jupiter-like planets, but even small planets thousands of light years away. As part of the Harps-N collaboration, we have had a front-row seat since 2012 in the science of small exoplanets.

Another milestone in this story came four years after the 51 Pegasi b discovery, when a Canadian PhD student at Harvard University, David Charbonneau, detected the transit of a known exoplanet. This was another hot Jupiter, known as HD209458b, also located in the Pegasus constellation, about 150 light years from Earth.

A transit is when a planet passes in front of its star from the perspective of the observer, momentarily making the star appear dimmer. As well as detecting exoplanets, the transit technique lets us measure a planet’s radius: we take many brightness measurements of a star, then watch for it to dim as the planet passes. The amount of starlight blocked depends on the radius of the planet. For example, Jupiter would make the Sun 1% dimmer to alien observers, while for Earth, the effect would be a hundred times weaker.
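
As a back-of-envelope check of those figures: the fraction of starlight blocked during a transit is roughly the square of the planet-to-star radius ratio. Here is a minimal sketch in Python (our own illustration using approximate radii, not part of any survey pipeline):

```python
# Transit depth: the fraction of starlight blocked is ~ (R_planet / R_star)^2.
R_SUN_KM = 695_700      # approximate solar radius
R_JUPITER_KM = 69_911   # approximate Jupiter radius
R_EARTH_KM = 6_371      # approximate Earth radius

def transit_depth(r_planet_km: float, r_star_km: float) -> float:
    """Fraction of a star's light blocked by a transiting planet."""
    return (r_planet_km / r_star_km) ** 2

print(f"Jupiter in front of the Sun: {transit_depth(R_JUPITER_KM, R_SUN_KM):.3%}")  # ~1.0%
print(f"Earth in front of the Sun:   {transit_depth(R_EARTH_KM, R_SUN_KM):.4%}")    # ~0.0084%
```

Because the depth scales with the radius squared, an Earth-sized planet blocks roughly 100 times less light than a Jupiter-sized one – which is why finding small planets demands such precise photometry.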

In all, four times more exoplanets have now been discovered using this transit technique than with the “barcode” technique, known as radial velocity, that the Swiss astronomers used to spot the first exoplanet 30 years ago. Radial velocity is still widely used today, including by us, as it can not only find a planet but also measure its mass.

A planet orbiting a star exerts a gravitational pull which causes that star to wobble back and forth – meaning it will periodically change its velocity with respect to observers on Earth. With the radial velocity technique, we take repeated measurements of the velocity of a star, looking to find a stable periodic wobble that indicates the presence of a planet.

These velocity changes are, however, extremely small. To put it in perspective, the Earth makes the Sun change its velocity by a mere 9cm per second – slower than a tortoise. To find planets with the radial velocity technique, we thus need to measure these tiny velocity changes in stars that are many trillions of miles away from us.
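
That 9cm-per-second figure can be reproduced from the standard formula for the radial velocity semi-amplitude a planet induces on its star. Below is a minimal sketch (our own illustration, assuming a circular, edge-on orbit – not the Harps-N analysis pipeline):

```python
import math

G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30     # mass of the Sun, kg
M_EARTH = 5.972e24   # mass of the Earth, kg
YEAR_S = 3.156e7     # one Earth year, in seconds

def rv_semi_amplitude(m_planet: float, m_star: float, period_s: float,
                      ecc: float = 0.0, sin_i: float = 1.0) -> float:
    """Stellar radial velocity semi-amplitude K (m/s):
    K = (2*pi*G/P)^(1/3) * m_p*sin(i) / (m_star + m_p)^(2/3) / sqrt(1 - e^2)."""
    return ((2 * math.pi * G / period_s) ** (1 / 3)
            * m_planet * sin_i
            / (m_star + m_planet) ** (2 / 3)
            / math.sqrt(1 - ecc ** 2))

k = rv_semi_amplitude(M_EARTH, M_SUN, YEAR_S)
print(f"Earth's tug on the Sun: {k * 100:.1f} cm/s")  # ~8.9 cm/s
```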

The state-of-the-art instruments we use are a true feat of engineering. The latest spectrographs, such as Harps-N and Espresso, can accurately measure velocity shifts of the order of tens of centimetres per second – although that is still not sensitive enough to detect a true Earth twin.

But whereas this radial velocity technique is, for now, limited to ground-based observatories and can only observe one star at a time, the transit technique can be employed on space telescopes such as the French Corot (2006-14) and Nasa’s Kepler (2009-18) and Tess (2018-) missions. Between them, space telescopes have detected thousands of exoplanets in all their diversity, taking advantage of the fact that we can measure stellar brightness more easily from space, and for many stars at the same time.

Despite the differences in detection success rate, both techniques continue to be developed. Applying both can give the radius and mass of a planet, opening up many more avenues for studying its composition.

To estimate possible compositions of our discovered exoplanets, we start with the simplifying assumption that small planets are, like Earth, made up of a heavy iron-rich core, a lighter rocky mantle, some surface water and a small atmosphere. Using our measurements of mass and radius, we can then model the different possible compositional layers and their respective thicknesses.
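
The first step is straightforward: a planet’s bulk density follows directly from its measured mass and radius, since density scales as mass divided by radius cubed. The sketch below is our own, purely illustrative version of that step (real interior models are far more sophisticated); the example numbers resemble those reported for Kepler-78b, described later in this article:

```python
EARTH_DENSITY_G_CM3 = 5.51   # Earth's mean density

def density_relative_to_earth(mass_earths: float, radius_earths: float) -> float:
    """Bulk density relative to Earth's, from mass and radius in Earth units."""
    return mass_earths / radius_earths ** 3

# Example with values similar to those measured for Kepler-78b:
rho = density_relative_to_earth(1.86, 1.20)
print(f"{rho:.2f} x Earth's density (~{rho * EARTH_DENSITY_G_CM3:.1f} g/cm^3)")  # ~5.9 g/cm^3
```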

This is still very much a work in progress, but the universe is spoiling us with a wide variety of different planets. We’ve seen evidence of rocky worlds being torn apart and strange planetary arrangements that hint at past collisions. Planets have been found across our galaxy, from Sweeps-11b in its central regions (at nearly 28,000 light years away, one of the most distant ever discovered) to those orbiting our nearest stellar neighbour, Proxima Centauri, which is “only” 4.2 light years away.

Illustration of the exoplanet Proxima b
Illustration of Proxima b, one of the exoplanets orbiting the nearest star to our Sun, Proxima Centauri.
Catmando/Shutterstock

Searching for ‘another Earth’

In early July 2013, one of us (Christopher) was flying out to La Palma for my first “go” with the recently commissioned Harps-N spectrograph. Keen not to mess up, I had loaded my laptop with spreadsheets, charts, manuals, slides and other notes. Also included was a three-page document I had just been sent, entitled: Special Instructions for ToO (Target of Opportunity).

The first paragraph stated: “The Executive Board has decided that we should give highest priority to this object.” The object in question was a planetary candidate thought to be orbiting Kepler-78, a star a little cooler and smaller than our Sun, located about 125 light years away in the direction of the constellation Cygnus.

A few lines further down read: “July 4-8 run … Chris Watson” with a list of ten times to observe Kepler-78 – twice per night, each separated by a very specific four hours and 15 minutes. The name above mine was Didier Queloz’s (he hadn’t been awarded his Nobel prize yet, though).

This planetary candidate had been identified by the Kepler space telescope, which was tasked with searching a portion of the Milky Way to look for exoplanets as small as the Earth. In this case, it had identified a transiting planet candidate with an estimated radius of 1.16 (± 0.19) Earth radii – an exoplanet not that much larger than Earth had potentially been spotted.

I was in La Palma to attempt to measure its mass which, combined with the radius from Kepler, would allow the density and possible composition to be constrained. I wrote at the time: “Want 10% error on mass, to get a good enough bulk density to distinguish between Earth-like, iron-concentrated (Mercury), or water.”

In all, I took ten of our team’s total of 81 exposures of Kepler-78 in an observing campaign lasting 97 days. During that time, we became aware of a US-led team who were also chasing this potential planet. In true scientific spirit, we agreed to submit our independent findings at the same time. On the specified date, like a prisoner swap, the two teams exchanged results – which agreed. We had, within the uncertainties of our data, reached the same conclusion about the planet’s mass.

Its most likely mass came out as 1.86 Earth masses. At the time, this made Kepler-78b the smallest extrasolar planet with an accurately measured mass. Its density was almost identical to Earth’s.

But that is where the similarities to our planet ended. Kepler-78b has a “year” that lasts only 8.5 hours, which is why I had been instructed to observe it every 4hr 15min – when the planet was at opposite sides of its orbit, and the induced “wobble” of the star would be at its greatest. We measured the star wobbling back and forth at about two metres per second – no more than a slow jog.

Kepler-78b’s short orbit meant its extreme temperature would cause all rock on the planet to melt. It may have been the most Earth-like planet found at the time in terms of its size and density, but otherwise, this hellish lava world was at the very extremes of our known planetary population.

Illustration of the exoplanet Kepler-78b
Illustration of the Kepler-78b ‘lava world’ – similar in size and density to Earth.
simoleonh/Shutterstock

In 2016, the Kepler space telescope made another landmark discovery: a system with at least five transiting planets around a Sun-like star, HIP 41378, in the Cancer constellation. What made it particularly exciting was the location of these planets. Where most transiting planets we have spotted are closer to their star than Mercury is to the Sun (due to our detection capabilities), this system has at least three planets beyond the orbital radius of Venus.

We decided to use our Harps-N spectrograph to measure the masses of all five transiting planets, but after more than a year of observing it became clear that one instrument would not be enough to disentangle this challenging mix of signals. Other international teams came to the same conclusion and, rather than compete, we decided to come together in a global collaboration that holds strong to this day, with hundreds of radial velocities gathered over many years.

We now have firm masses and radii for most of the planets in the system. But studying them is a game of patience. With planets much further away from their host star, it takes much longer before there is a new transit event or the periodic wobble can be fully observed. We thus need to wait multiple years and gather lots of data to gain insight into this system.

The rewards are obvious, though. This is the first system that starts to resemble our Solar System. While its planets are a bit larger and more massive than our rocky planets, their distances from their star are very similar – helping us to understand how planetary systems form in the universe.

The holy grail for exoplanet explorers

After three decades of observing, a wealth of different planets has emerged. We started with the hot Jupiters: large gas giants close to their star that are among the easiest planets to find, due to both deeper transits and larger radial velocity signals. But while the first few dozen exoplanets discovered were all hot Jupiters, we now know these planets are actually very rare.

With instrumentation improving and observations piling up, we have since found a whole new class of planets with sizes and masses between those of Earth and Neptune. But despite our knowledge of thousands of exoplanets, we still have not found systems truly resembling our Solar System, nor planets truly resembling Earth.

It is tempting to conclude this means we are a unique planet in a unique system. While this still could be true, it is unlikely. The more reasonable explanation is that, for all our stellar technology, our capabilities of detecting such Earth-like planets are still fairly limited in a universe so mind-bogglingly vast.

The holy grail for many exoplanet explorers, including us, remains to find this true Earth twin – a planet with a similar mass and radius to Earth’s, orbiting a star similar to the Sun at a distance similar to Earth’s from the Sun.

While the universe is rich in diversity and holds many planets unlike our own, discovering a true Earth twin would be the best place to start looking for life as we know it. Currently, the radial velocity method – as used to find the very first exoplanet – remains by far the best-placed method to find it.

Thirty years on from that Nobel-winning discovery, pioneering planetary explorer Didier Queloz is taking charge of the very first dedicated radial velocity campaign to go in search of an Earth-like planet.

A major international collaboration is building a dedicated instrument, Harps3, to be installed later this year at the Isaac Newton Telescope on La Palma. Given its capabilities, we believe a decade of data should be enough to finally discover our first Earth twin.

Unless we are unique after all.



The Conversation

Christopher Watson receives funding from the Science and Technology Facilities Council (STFC).

Annelies Mortier receives funding from the Science and Technology Facilities Council (STFC) and UK Research and Innovation (UKRI).

ref. Our quest to find a truly Earth-like planet in deep space – https://theconversation.com/our-quest-to-find-a-truly-earth-like-planet-in-deep-space-266550

The world’s most sensitive computer code is vulnerable to attack. A new encryption method can help

Source: The Conversation – Global Perspectives – By Qiang Tang, Associate Professor, Computer Science, University of Sydney

Joan Gammell/Unsplash

Nowadays data breaches aren’t rare shocks – they’re a weekly drumbeat. From leaked customer records to stolen source code, our digital lives keep spilling into the open.

Git services are especially vulnerable to cybersecurity threats. These are online hosting platforms that are widely used in the IT industry to collaboratively develop software, and are home to most of the world’s computer code.

Just last week, hackers reportedly stole about 570 gigabytes of data from a git service called GitLab. The stolen data was associated with major companies such as IBM and Siemens, as well as United States government organisations.

In December 2022, hackers stole source code from IT company Okta which was stored in repositories on GitHub.

Cyberattackers can also quietly insert malicious code into existing projects without a developer’s knowledge. These so-called “software supply-chain” attacks have turned development tools and update channels on git services into high-value targets.

As we explain in a new conference paper, our team has developed a new way to make git services more secure, with very little impact on performance.

The gold standard

We already know how to keep conversations private: secure messenger services such as Signal and WhatsApp use end-to-end encryption, which locks messages on your device and only unlocks them on the recipient’s device. This protects the data even if the service platform is hacked, which is why it’s considered the gold standard to protect data.

But git services, which are widely used by major tech companies and startups, currently don’t use end-to-end encryption. The same is true for most of the other tools we use to work together, such as shared documents.

Because git services allow a huge number of collaborators to work on the same project at the same time, the code they host is constantly written and updated at a very rapid rate. This makes standard encryption impractical: re-encrypting and transmitting all of the data for even a one-word change would take up too much bandwidth and make the services very inefficient.

But our new encryption method overcomes this challenge.

Striking an important balance

The method we have developed uses what’s known as “character-level encryption”. This means only the edits to code stored on the git service are treated as new data to be encrypted – rather than the entire codebase.

Think of it as encrypting the tracked changes in a Word document, instead of a whole new version every time.

This method strikes an important balance. It keeps the updated code private and secure while reducing the amount of communication between user and git services, as well as the amount of storage required.
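
To make the idea concrete, here is a toy sketch in Python of encrypting deltas rather than whole files. It is our own illustration of the general concept – not the character-level scheme from the paper itself – and it assumes the third-party cryptography package and a key already shared among collaborators:

```python
import difflib
import json
from cryptography.fernet import Fernet

key = Fernet.generate_key()   # in practice, shared securely among collaborators
cipher = Fernet(key)

def encrypt_delta(old: str, new: str) -> bytes:
    """Serialise and encrypt only the edit operations that turn old into new."""
    ops = difflib.SequenceMatcher(a=old, b=new).get_opcodes()
    # Keep only the changed spans and their replacement text.
    delta = [(tag, i1, i2, new[j1:j2]) for tag, i1, i2, j1, j2 in ops if tag != "equal"]
    return cipher.encrypt(json.dumps(delta).encode())

def apply_delta(old: str, token: bytes) -> str:
    """Decrypt the edit operations and replay them against the old version."""
    delta = json.loads(cipher.decrypt(token))
    out, cursor = [], 0
    for _tag, i1, i2, text in delta:
        out.append(old[cursor:i1])   # unchanged text before this edit
        out.append(text)             # inserted or replacing text
        cursor = i2
    out.append(old[cursor:])         # unchanged tail
    return "".join(out)

old = "def greet():\n    print('hello')\n"
new = "def greet():\n    print('hello, world')\n"
token = encrypt_delta(old, new)   # only the small edit travels encrypted
assert apply_delta(old, token) == new
```

Because only the edit operations travel encrypted, bandwidth and storage scale with the size of the change rather than the size of the file.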

Importantly, this new method is also compatible with existing git services, making it easy for people to adopt. It also doesn’t interfere with other functions of git servers, such as hosting, saving bandwidth and indexing, so people can keep using these servers as they normally would – just with the added benefit of extra security.

A broader end-to-end encrypted internet

This new tool is currently free and open-source for all users. It can be installed easily like a patch when using git services, and will run in the background as users access git services just like before.

But this is just the starting point for a broader shift towards online collaboration that is secured by end-to-end encryption.

Extending the same guarantees to shared documents, spreadsheets and design files is possible, but will require sustained research and investment.

One complication in ensuring security is managing the encryption keys or credentials users need to decrypt encrypted data. Fortunately, our previous research shows how to create a secure cloud storage system that allows users to safely store their credentials.

Just as importantly, we must balance security with compliance and accountability. Universities, hospitals and government agencies are required to retain and, in some cases, provide lawful access to certain data. Meeting these obligations, without weakening end-to-end encryption, pushes us to research new techniques.

The goal is not secrecy at all costs, but verifiable controls that respect both privacy and the rule of law.

We don’t need a brand new internet to get there. We need pragmatic upgrades that fit the tools people already use – paired with clear, provable guarantees.

Messaging proved that end-to-end encryption can scale to billions. Code and cloud files are next, and with continued research and targeted investment, the rest of our everyday collaboration can follow.

So before too long, you will hopefully be able to work on a shared document with colleagues with the peace of mind that it, too, has gold standard security.

The Conversation

Qiang Tang receives funding from Google via Digital Future Initiative to support the research on this project.

Moti Yung works for Google as a distinguished research scientist.

Yanan Li is supported by funding from Google via Digital Future Initiative for doing this research at the University of Sydney.

ref. The world’s most sensitive computer code is vulnerable to attack. A new encryption method can help – https://theconversation.com/the-worlds-most-sensitive-computer-code-is-vulnerable-to-attack-a-new-encryption-method-can-help-266236

Today’s AI hype has echoes of a devastating technology boom and bust 100 years ago

Source: The Conversation – Global Perspectives – By Cameron Shackell, Sessional Academic, School of Information Systems, Queensland University of Technology

A crowd gathers outside the New York Stock Exchange following the ‘Great Crash’ of October 1929. New York World-Telegram and the Sun Newspaper Photograph Collection, US Library of Congress

The electrification boom of the 1920s set the United States up for a century of industrial dominance and powered a global economic revolution.

But before electricity faded from a red-hot tech sector into invisible infrastructure, the world went through profound social change, a speculative bubble, a stock market crash, mass unemployment and a decade of global turmoil.

Understanding this history matters now. Artificial intelligence (AI) is a similar general-purpose technology and looks set to reshape every aspect of the economy. But it’s already showing some of the hallmarks of electricity’s rise, peak and bust in the decade known as the Roaring Twenties.

The reckoning that followed could be about to repeat.

First came the electricity boom

A century ago, when people at the New York Stock Exchange talked about the latest “high tech” investments, they were talking about electricity.

Investors poured money into suppliers such as Electric Bond & Share and Commonwealth Edison, as well as companies using electricity in new ways, such as General Electric (for appliances), AT&T (telecommunications) and RCA (radio).

It wasn’t a hard sell. Electricity brought modern movies, new magazines from faster printing presses, and evenings by the radio.

It was also an obvious economic game changer, promising automation, higher productivity, and a future full of leisure and consumption. In 1920, even Soviet revolutionary leader Vladimir Lenin declared: “Communism is Soviet power plus the electrification of the whole country.”

Today, a similar global urgency grips both communist and capitalist countries about AI, not least because of military applications.

A cover story of the New York Times Magazine in October 1927.
The New York Times

Then came the peak

Like AI stocks now, electricity stocks “became favorites in the boom even though their fundamentals were difficult to assess”.

Market power was concentrated. Big players used complex holding structures to dodge rules and sell shares in basically the same companies to the public under different names.

US finance professor Harold Bierman, who argued that attempts to regulate overpriced utility stocks were a direct trigger for the crash, estimated that utilities made up 18% of the New York Stock Exchange in September 1929. Within electricity supply, 80% of the market was owned by just a handful of holding firms.

But that’s just the utilities. As today with AI, there was a much larger ecosystem.

Almost every 1920s “megacap” (the largest companies at the time) owed something to electrification. General Motors, for example, had overtaken Ford using new electric production techniques.

Essentially, electricity became the backdrop to the market in the same way AI is doing, as businesses work to become “AI-enabled”.

No wonder that today tech giants command over a third of the S&P 500 index and nearly three-quarters of the NASDAQ. Transformative technology drives not only economic growth, but also extreme market concentration.

In 1929, to reflect the new sector’s importance, Dow Jones launched the last of its three great stock averages: the electricity-heavy Dow Jones Utilities Average.

But then came the bust

The Dow Jones Utilities Average went as high as 144 in 1929. But by 1934, it had collapsed to just 17.

No single cause explains the New York Stock Exchange’s unprecedented “Great Crash”, which began on October 24 1929 and preceded the worldwide Great Depression.

That crash triggered a banking crisis, credit collapse, business failures, and a drastic fall in production. Unemployment soared from just 3% to 25% of US workers by 1933 and stayed in double figures until the US entered the second world war in 1941.

Lithograph of Wall Street, New York City, with panicked crowd, lightning, people jumping out of buildings, buildings falling, at time of stock market crash in 1929.

Lithograph of Wall Street, New York City, after the 1929 stock market crash. Jame Rosenberg, Ben and Beatrice Goldstein Foundation collection, US Library of Congress

The ripple effects were global, with most countries seeing a rise in unemployment, especially in countries reliant on international trade, such as Chile, Australia and Canada, as well as Germany.

The promised age of shorter hours and electric leisure turned into soup kitchens and bread lines.

The collapse exposed fraud and excess. Electricity entrepreneur Samuel Insull, once Thomas Edison’s protégé and builder of Chicago’s Commonwealth Edison, was at one point worth US$150 million – an even more staggering amount at the time.

But after Insull’s empire went bankrupt in 1932, he was indicted for embezzlement and larceny. He fled overseas, was brought back, and eventually acquitted – but 600,000 shareholders and 500,000 bondholders lost everything.

However, to some Insull seemed less a criminal mastermind than a scapegoat for a system whose flaws ran far deeper.

Reforms unthinkable during the boom years followed.

The Public Utility Holding Company Act of 1935 broke up the huge holding company structures and imposed regional separation. Once exciting electricity darlings became boring regulated infrastructure: a fact reflected in the humble “Electric Company” square on the original 1935 Monopoly board.

Lessons from the 1920s for today

AI is rolling out faster than even those seeking to use it for business or government policy can properly manage.

Like electricity a century ago, a few interconnected firms are building today’s AI infrastructure.

And like a century ago, investors are piling in – though many don’t know the extent of their exposure through their superannuation funds or exchange traded funds (ETFs).

Just as in the late 1920s, today’s regulation of AI is still loose in many parts of the world – though the European Union is taking a tougher approach with its world-first AI law.

US President Donald Trump has taken the opposite approach, actively cutting “onerous regulation” of AI. Some US states have responded by taking action themselves. The courts, when consulted, are hamstrung by laws and definitions written for a different era.

Can we transition to AI becoming invisible infrastructure, like electricity, without another bust followed only then by reform?

If the parallels to the electrification boom remain unnoticed, the chances are slim.

The Conversation

Cameron Shackell works primarily as a Sessional Academic at the QUT School of Information Systems. He also works one day a week as CEO of Equate IT Consulting, a firm using AI to analyse brands and trademarks.

ref. Today’s AI hype has echoes of a devastating technology boom and bust 100 years ago – https://theconversation.com/todays-ai-hype-has-echoes-of-a-devastating-technology-boom-and-bust-100-years-ago-265492

Would you watch a film with an AI actor? What Tilly Norwood tells us about art – and labour rights

Source: The Conversation – Global Perspectives – By Amy Hume, Lecturer In Theatre (Voice), Victorian College of the Arts, The University of Melbourne

Particle6 Productions

Tilly Norwood officially launched her acting career this month at the Zurich Film Festival.

She first appeared in the short film AI Commissioner, released in July. Her producer, Eline Van der Velden, claims Norwood has already attracted the attention of multiple agents.

But Norwood was generated with artificial intelligence (AI). The AI “actor” has been created by Xicoia, the AI branch of the production company Particle6, founded by the Dutch actor-turned-producer Van der Velden. And AI Commissioner is an AI-generated short film, written by ChatGPT.

A post about the film’s launch on Norwood’s Facebook page read:

I may be AI generated, but I’m feeling very real emotions right now. I am so excited for what’s coming next!

The reception from the industry has been far from warm. Actors – and audiences – have come out in force against Norwood.

So, is this the future of film, or is it a gimmick?

‘Tilly Norwood is not an actor’

Norwood’s existence introduces a new type of technology to Hollywood. Unlike CGI (computer generated imagery), where a performer’s movements are captured and transformed into a digital character, or an animation which is voiced by a human actor, Norwood has no human behind her performance. Every expression and line delivery is generated by AI.

Norwood has been trained on the performances of hundreds of actors, without any payment or consent, and draws on the information from all those performances in every expression and line delivery.

Her arrival comes less than two years after the actor and writer strikes that brought Hollywood to a standstill, with AI a central issue in the disputes. The actors’ strike ended with a historic agreement placing limitations on digital replicas of actors’ faces and voices, but it did not completely ban “synthetic fakes”.

SAG-AFTRA, the union representing actors in the United States, has said:

To be clear, ‘Tilly Norwood’ is not an actor; it’s a character generated by a computer program that was trained on the work of countless professional performers – without permission or compensation.

Additionally, real actors can set boundaries and are protected by agents, unions and intimacy coordinators who negotiate what is shown on screen.

Norwood can be made to perform anything in any context – becoming a vessel for whatever creators or producers choose to depict.

This absence of consent or control opens a dangerous pathway to how the (digitally reproduced) female body may be represented on screen, both in mainstream cinema, and in pornography.

Is it art?

We consider creativity to be a human quality. Art is generally understood as an expression of human experience. Norwood’s performances do not come from such creativity or human experience, but from a database of pre-existing performances.

All artists borrow from and are influenced by predecessors and contemporaries. But that human influence is limited by time, informed by our own experiences and shaped by our unique perspective.

AI has no such limits: just look at Google’s chess-playing program AlphaZero, which learnt by playing millions of games of chess – more than any human could play in a lifetime.

Norwood stands with a clapboard.
Norwood’s training can absorb hundreds of performances in a way no single actor could.
Particle6 Productions

Norwood’s training can absorb hundreds of performances in a way no single actor could. How can that be compared to an actor’s performance – a craft they have developed throughout their training and career?

Van der Velden argues Norwood is “a new tool” for creators. Previous tools, such as the paintbrush or the typewriter, helped facilitate or extend the creativity of the painter or writer.

Here, Norwood as the tool performs the creative act itself. The AI is the tool and the artist.

Will audiences accept AI actors?

Norwood’s survival depends not on industry hype but on audience reception.

So far, humans show a negative bias against AI-generated art. Studies across art forms have shown people prefer works when told they were created by humans, even if the output is identical.

We don’t know yet if that bias could fade. A younger generation raised on streaming may be less concerned with whether an actor is “real” and more with immediate access, affordability or how quickly they can consume the content.

If audiences do accept AI actors, the consequences go beyond taste. There would be profound effects on labour. Entry- and mid-level acting jobs could vanish. AI actors could shrink the demand for whole creative teams – from make-up and costume to lighting and set design – since their presence reduces the need for on-set artistry.

Economics could prove decisive. For studios, AI actors are cheaper, more controllable and free from human needs or unions. Even if audiences are ambivalent, financial pressures could steer production companies towards AI.

The bigger picture

Tilly Norwood is not a question of the future of Hollywood. She is a cultural stress-test – a case study in how much we value human creativity.

What do we want art to be? Is it about efficiency, or human expression? If we accept synthetic actors, what stops us from replacing other creative labour – writers, musicians, designers – with AI trained on their work, but with no consent or remuneration?

We are at a crossroads. Do we regulate the use of AI in the arts, resist it, or embrace it?

Resistance may not be realistic. AI is here, and some audiences will accept it. The risk is that in choosing imitation over human artistry, we reshape culture in ways that cannot be easily reversed.

The Conversation

Amy Hume does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

ref. Would you watch a film with an AI actor? What Tilly Norwood tells us about art – and labour rights – https://theconversation.com/would-you-watch-a-film-with-an-ai-actor-what-tilly-norwood-tells-us-about-art-and-labour-rights-266476

These 4 aeroplane failures are more common than you think – and not as scary as they sound

Source: The Conversation – Global Perspectives – By Guido Carim Junior, Senior Lecturer in Aviation, Griffith University

redcharlie/Unsplash

“It is the closest all of us passengers ever want to come to a plane crash,” a passenger on Qantas flight QF1889 said after the plane suddenly descended about 20,000 feet on Monday September 22 and diverted back to Darwin.

The Embraer 190’s crew received a pressurisation warning, followed the procedures, and landed normally – but in the cabin, that rapid drop felt anything but normal.

The truth is, in-flight technical problems such as this one are part of flying. Pilots train extensively for them. Checklists contain detailed instructions on how to deal with each issue. Aircraft are built with layers of redundancy, and warning systems alert pilots to problems. It is because of these safety systems that the vast majority of flights that experience technical issues end with a safe arrival rather than tragic headlines.

Here are four scary-sounding failures you might hear about (or even experience) and how they are actually dealt with in the air.

1. Air-conditioning and pressurisation hiccups

What it is

At cruising altitudes (normally around 36,000 feet), aeroplane cabins are kept at a comfortable “cabin altitude” of 8,000 feet using air from the engines that is cooled by the air-conditioning system.

This artificial air pressure allows us to survive while the atmosphere outside the plane is highly hostile to human life, with temperatures around -55°C and no breathable air. However, if the system misbehaves or the cabin altitude starts to rise for whatever reason, crews treat it as a potential pressurisation problem and initiate the preventive procedures immediately.

What you might feel/see

A quick, controlled descent (it can feel dramatic), ears popping, and sometimes oxygen masks – these typically drop automatically only if the cabin altitude exceeds roughly 14,000 feet. Similar to QF1889, a rapid descent without masks being deployed is the most common outcome.

What pilots do

As soon as they notice a problem with the cabin pressurisation, the pilots put on their own oxygen masks, declare an emergency, and follow the emergency descent checklist, bringing the aircraft as quickly as possible to about 10,000 feet. This is usually followed by a diversion or return to the departure airport.

2. Most feared: engine failures

What it is

Twin-engine airliners are certified to fly safely on one engine. Yet single-engine failures are treated seriously and thoroughly rehearsed in flight simulators at least annually.

Dual failures, however, are exceptionally rare. The 2009 “Miracle on the Hudson”, for example, was a once-in-a-generation bird strike event that led to both engines stopping. The plane safely landed on the Hudson River in New York with no casualties.

US Airways Flight 1549 after crashing into the Hudson River, January 15 2009.
Wikimedia Commons, CC BY

What you might feel/see

A loud bang, vibration, sparks coming out of the engine, a smell of burning, or a sudden quietening. This may result in a turn-back and an emergency services welcome. Recent headlines on engine failures – from a 737 in Sydney to a multiple bird-strike-related return in the United States – ended with safe landings.

What pilots do

After being alerted by the warning system, pilots identify the affected engine and follow the checklist. The checklist typically requires them to shut down the problematic engine, then descend to an appropriate altitude and divert if in cruise, or return to the departure airport if the failure happens after takeoff.

Even when an engine failure damages other systems, crews are trained to manage cascades of warnings – as Qantas A380 flight QF32’s crew did in 2010, returning safely to Singapore.

3. Hydraulic trouble and flight controls

What it is

An aeroplane’s many flight controls are moved by multiple hydraulic or electric systems. If one system misbehaves – for example, the left-wing aileron, which is used to turn the aircraft, won’t move – redundancy keeps the aeroplane flyable, because the right-wing aileron will still work.

Crews use specific checklists and adjust speeds, distances and landing configurations to ensure a safe return to the ground.

Ailerons are the hinged parts you can see at the end of the aeroplane wing.
Stephan Hinni/Unsplash

What you might feel/see

A longer hold while the crew troubleshoots, a return to the departure airport or a faster-than-normal landing. In July, a regional Qantas flight to Melbourne made an emergency landing at Mildura after a hydraulics issue.

What pilots do

After the warning system’s detection, pilots run through a checklist, decide on the landing configuration, request the longest suitable runway and emergency services just in case.

All these resources are available because lessons learned from extreme events – such as United 232’s 1989 loss of all hydraulic systems – were brought into the design of modern aeroplanes and training programs.

4. Landing gear and brake system drama

What it is

Airliners have retractable landing gear – the wheels that come out of the aeroplane’s belly before landing – which stays inside a compartment for most of the flight. Assembled in the wheels are the brakes, which slow the aircraft after touchdown, as in a car.

With so many moving parts, sometimes the landing gear doesn’t extend or retract properly, or the braking system loses some effectiveness – for example, when a hydraulic system fails.

What you might feel/see

A precautionary return, cabin preparation for a potential forced landing, or a “brace for impact” instruction from the cabin crew right before landing.

While scary, these are preventive measures if something doesn’t go as planned. Earlier this year, a Qantas flight returned to Brisbane after experiencing a problem with its landing gear; passengers were told to keep “heads down” while the aircraft landed safely.

What pilots do

They’ll use long checklists and eventually contact maintenance engineers to troubleshoot the problem. There are also redundancies available to lower the landing gear and to deploy the brakes.

In extreme cases, they may be required to land at the longest runway available (in case of brake problems) or land on the belly (if the landing gear can’t be lowered).

The big picture

Most in-flight failures trigger a chain of defences aimed at keeping the flight safe. Checklists, extensive training and decades of expertise are backed by multiple redundancies and robust design. And these flights typically end like QF1889 did: safely on the ground, with passengers a little shaken.

A dramatic descent or an urgent landing doesn’t mean disaster. It usually means the safety system (aircraft + crew + checklist + training + redundancy) is doing exactly what it’s supposed to do.

The Conversation

Guido Carim Junior does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

ref. These 4 aeroplane failures are more common than you think – and not as scary as they sound – https://theconversation.com/these-4-aeroplane-failures-are-more-common-than-you-think-and-not-as-scary-as-they-sound-265866

People trust podcasts more than social media. But is the trust warranted?

Source: The Conversation – Global Perspectives – By Jason Weismueller, Lecturer, UWA Business School, The University of Western Australia

Medy Siregar/Unsplash

There’s been a striking decline in public confidence in social media platforms, according to the 2025 Ethics Index published by the Governance Institute of Australia. One in four Australians now rate social media as “very unethical”.

This is consistent with other reports on Australian attitudes towards social media. For example, the Digital News Report 2025 similarly identified widespread concern about misinformation and distrust in news shared on social media.

And such distrust isn’t limited to Australia. The sentiment is evident worldwide. The 2025 Edelman Trust Barometer, based on an annual global survey of more than 30,000 people across 28 countries, reports a decline in trust in social media companies.

So where does this negativity come from? And are other ways of consuming information online, such as podcasts, any better? Podcasts are booming in Australia and around the world, and are often perceived much more positively than social media.

Let’s look at what the evidence says about the impacts of social media, what it does and doesn’t yet tell us about podcasts, and what this reveals about the need for accountability across digital platforms.

Where does this distrust stem from?

While social media has enabled connection, creativity and civic participation, research also highlights its downsides.

Studies have shown that, on certain social media platforms, false and sensational information can often spread faster than truth. Such information can also fuel negativity and political polarisation.

Beyond civic harms, heavy social media use has also been linked to mental health challenges. The causes are difficult to establish, but studies report associations between social media use and higher levels of depression, anxiety and psychological distress, particularly among adolescents and young adults.

In 2021, Frances Haugen, a former Facebook product manager, made public thousands of internal documents that revealed Instagram’s negative impact on teen mental health. The revelations triggered global scrutiny and intensified debate about social media accountability.

Whistleblowers such as Haugen suggest social media companies are aware of potential harms, but don’t always act.




Podcasts have a much better reputation

In contrast to social media, podcasts appear to enjoy a very different reputation. Not only do Australians view them far more positively, but podcast consumption has significantly increased over the years.

More than half of Australians over the age of ten engage with audio or video podcasts on a monthly basis. It’s not surprising that the 2025 Australian election saw political leaders feature on podcasts as part of their campaign strategy.

YouTube, traditionally a video sharing platform, has a large section dedicated to podcasts on its home page.
YouTube

Why are podcasts so popular and trusted? Several features may help explain this.

Consumption is often more deliberate. Listeners choose specific shows and episodes instead of scrolling through endless feeds. Podcasts typically provide longer and more nuanced discussions compared with the short snippets served by social media algorithms.

Given these features, research suggests podcasts foster a sense of intimacy and authenticity. Listeners develop ongoing “relationships” with hosts and view them as credible, authentic and trustworthy.

Yet this trust can be misplaced. A Brookings Institution study analysing more than 36,000 political podcast episodes found nearly 70% contained at least one unverified or false claim. Research also shows political podcasts often rely on toxic or hostile language.

This shows that podcasts, while often perceived as more “ethical” than social media, are not automatically safer or more trustworthy spaces.

Rethinking trust in a complex media environment

What’s clear is that we shouldn’t blindly trust or dismiss any online platform, whether it’s a social media feed or a podcast. We must think critically about all the information we encounter.

We all need better tools to navigate a complex media environment. Digital literacy efforts must expand beyond social media to help people assess any information, from a TikTok clip to a long-form podcast episode.




To regain public trust, social media platforms will have to behave more ethically. They should be transparent about advertising, sponsorships and moderation policies, and should make clear how content is recommended.

This expectation should also apply to podcasts, streaming services and other digital media, which can all be misused by people who want to mislead or harm others.

Governments can reinforce accountability through fair oversight, but rules will only work if they are paired with platforms acting responsibly.

Earlier this year, the Australian government released a report that argued social media platforms have a “duty of care” towards their users. They should proactively limit the spread of harmful content, for example.

A healthier information environment depends on sceptical but engaged citizens, stronger ethical standards across platforms, and systems of accountability that reward transparency and reliability.

The lesson is straightforward: trust or distrust alone doesn’t change whether the information you receive is actually truthful – particularly in an online environment where anyone can say anything. It’s best to keep that in mind.

The Conversation

Jason Weismueller does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

ref. People trust podcasts more than social media. But is the trust warranted? – https://theconversation.com/people-trust-podcasts-more-than-social-media-but-is-the-trust-warranted-266791

On a grim anniversary, an end to Gaza’s violence is suddenly clear – if both sides can make sacrifices

Source: The Conversation – Global Perspectives – By Eyal Mayroz, Senior Lecturer in Peace and Conflict Studies, University of Sydney

Two years into the most horrific chapter in the history of Israel and Palestine, a glimmer of hope has been offered to both sides by US President Donald Trump’s plan for a permanent ceasefire and initial steps towards a faraway peace, or at least coexistence.

The plan at this stage is extremely vague, full of holes and strongly biased toward the Israeli side. However, it currently enjoys robust international support and legitimacy – arguably, stronger than any peace plan in the past two years.

It demands significant concessions from both sides, though much more so for Hamas, with punitive measures for failures to comply.

And it inches closer than any time in the past two years to halting the senseless mass killings and ongoing humanitarian catastrophe in Gaza.

What are the prospects for the plan’s success? What are the obstacles? And how could the world better support the prolonged and difficult process of trying to protect countless innocent lives in this part of the world?

On a grim anniversary, and with both sides exhausted, the answers require careful consideration.

Major concessions on both sides

For Hamas, there are a few immediate benefits to the proposed agreement: Israel’s promises to end the killings in Gaza, allow humanitarian aid to flow, and release numerous Palestinian prisoners.

At the same time, without timelines for a full Israeli withdrawal from Gaza and for Palestinian governance in the strip, there are also major risks.

The plan doesn’t include an explicit promise for a Palestinian state that will encompass both Gaza and the West Bank. And the demand for Hamas to disarm and stay out of Palestinian politics would not only abolish the group’s remaining power and influence, but leave its members at the mercy of Israel and American goodwill.

For Israel, the return of all hostages, both alive and dead, and the chance to begin to emerge from its diplomatic isolation and pariah status offer significant gains.

However, for the country’s hardline government and its political base, these gains would come at a cost. This includes:

  • the withdrawal from Gaza without fulfilling Prime Minister Benjamin Netanyahu’s promise to fully destroy Hamas

  • amnesty for Hamas militants who would renounce the armed struggle

  • the release of 2,000 Palestinian prisoners, including more than 250 with Israeli blood on their hands

  • forgoing the annexation of Gaza and the West Bank, as Trump has promised

  • acknowledging, even if fuzzily, Palestinian aspirations for sovereignty and self-determination.

These concessions will be ideologically tough and politically destabilising for the current Israeli government.

Arab and Muslim support for the plan

Despite the challenges, the plan’s prospects have been enhanced by a number of factors, at least in the short term.

Key among them has been the overwhelming international support, especially among Arab and Muslim states. This has left Hamas more isolated than ever. The support from its long-time allies, Qatar and Turkey, would have been particularly hard for Hamas to swallow.

Notably, this support had been tested by last-minute Israeli changes to the draft that didn’t sit well with some of these states.

However, they grudgingly acquiesced to the changes, given the dire situation in Gaza, the exhaustion of the chief Arab mediators, Trump’s clout, and the realisation that no better plan was likely in the near future.

Also influential was the long-overdue US decision to force Netanyahu into important concessions on allowing aid into Gaza and ending the threats of ethnic cleansing and Israeli annexation of parts of the West Bank and Gaza.

The imminent October 10 deadline for a Nobel Peace Prize announcement may have contributed to Trump’s resolve to pressure both sides – especially Netanyahu – and to the tightly imposed schedule for the release of the plan.

Israeli hostages as a bargaining chip

For the past two years, Hamas’ main bargaining chip has been the Israeli hostages it kidnapped on October 7 2023.

A key challenge posed by Trump’s plan is the dictate to release the remaining captives, dead and alive, within the first 72 hours of the deal coming into effect. This would be in exchange for roughly 2,000 Palestinian prisoners.

Effectively, this would not only eliminate Hamas’ negotiating power, but also its ability to use the hostages as human shields against the current or any future Israeli military incursion into Gaza City.

However, according to the Israeli newspaper Haaretz, Hamas leaders in Qatar were recently persuaded that keeping the hostages had become a liability. Netanyahu’s government, they came to believe, was no longer primarily concerned with the hostages’ safety and would use their likely presence in Gaza City as a pretext for its operations there.

This may have made the benefits of a hostage-prisoner swap more apparent to Hamas.

How the world can help

Beyond ending the immediate crisis in Gaza, the plan’s long-term viability depends on larger questions of Palestinian statehood and governance.

Despite being scorned early on by the US and Israel, an initiative launched in July by France and Saudi Arabia to push for wider recognition of Palestine helped lay the groundwork for the Trump peace plan.

The move extracted an explicit commitment by the Palestinian Authority (PA) to hold democratic elections and undertake other significant reforms. It also helped galvanise a unified Arab position on ending the conflict, with a consequential joint condemnation of Hamas’ October 7 attack and a demand the group disarm and relinquish power in Gaza.

Arguably, the success of Trump’s plan will depend not only on the fast-moving events in the coming days, but also on the international community’s ability to sustain its commitment to a complex peace process in the coming weeks, months and years.

Once the mass violence in Gaza has ended, international attention will wane, making it easier for either side to derail the process. This is why efforts to recognise Palestinian statehood remain important. With support now from 157 of 193 UN member states – more than 80% of the membership – recognition could increase pressure on the US not to veto full UN membership for Palestine in the future.

Agency matters

Western powers should also reconsider their demands that Hamas be barred from Palestinian politics as a condition for recognition. Genuine Palestinian sovereignty should include the fundamental right of the people to choose their own government through free and fair elections.

It’s inconsistent for these Western states to champion democratic values, independence and self-determination for the Palestinians, while simultaneously prescribing which parties can participate in their electoral process.

At the same time, third-party states have the right to articulate the potential diplomatic or economic consequences should Hamas be elected into a future Palestinian government. It would then be up to the Palestinian people to weigh the potential costs when casting their votes.

This approach could provide respect for Palestinian agency, while maintaining the principle that democratic choices carry real-world implications, both domestic and external.

The difficult path ahead

Ultimately, significant progress on the road to peace would require an Israeli government that’s willing to make hard choices and sacrifices. That government currently doesn’t exist.

But would the next government be more amenable? Arguably, this would depend on many interlocking factors – most importantly, American pressure and engagement, in addition to “carrots” in the form of normalisation agreements with Saudi Arabia and other Arab states. These factors could help shift domestic opinions and political calculus in Israel.

However, significant breakthroughs appear improbable before the country’s next elections, scheduled for 2026. Until then, Trump remains the only one capable of meaningfully influencing the cost-benefit calculations in Jerusalem.

Notably, a sense of security remains the most important consideration for Israelis, even if the means of achieving it are highly controversial.

Seen in this context, the demand in Trump’s plan that Hamas disarm and Gaza be demilitarised will be non-negotiable for any future Israeli government. Even then, extremist violence on either side would continue to pose the greatest threat to the prospects of co-existence.

On the positive side, history has shown that even in the most intractable conflicts, pathways to peace can be found when courage meets opportunity. The international community’s unprecedented unity, Trump’s new willingness to pressure both Hamas and Israel, and the sheer exhaustion on both sides could create that opportunity.

If this moment could be sustained – if the world maintains its focus beyond the initial ceasefire, if moderates on both sides find their voices – then perhaps the glimmer of hope offered today may become a light.

The Conversation

Eyal Mayroz served as a counterterrorism specialist with the Israeli Defence Forces in the 1980s.

ref. On a grim anniversary, an end to Gaza’s violence is suddenly clear – if both sides can make sacrifices – https://theconversation.com/on-a-grim-anniversary-an-end-to-gazas-violence-is-suddenly-clear-if-both-sides-can-make-sacrifices-266771

Does AI pose an existential risk? We asked 5 experts

Source: The Conversation – Global Perspectives – By Aaron J. Snoswell, Senior Research Fellow in AI Accountability, Queensland University of Technology

Sean Gladwell/Getty Images

There are many claims to sort through in the current era of ubiquitous artificial intelligence (AI) products, especially generative AI products based on large language models (LLMs), such as ChatGPT, Copilot, Gemini and many, many others.

AI will change the world. AI will bring “astounding triumphs”. AI is overhyped, and the bubble is about to burst. AI will soon surpass human capabilities, and this “superintelligent” AI will kill us all.

If that last statement made you sit up and take notice, you’re not alone. The “godfather of AI”, computer scientist and Nobel laureate Geoffrey Hinton, has said there’s a 10–20% chance AI will lead to human extinction within the next three decades. An unsettling thought – but there’s no consensus on whether, or how, that might happen.

So we asked five experts: does AI pose an existential risk?

Three out of five said no. Here are their detailed answers.

The Conversation

Aaron J. Snoswell was previously part of a research team that won a competitive grant from OpenAI in 2024–2025 to develop new evaluation frameworks for measuring moral competence in AI agents. The project has now concluded and he no longer receives funding from OpenAI.

Niusha Shafiabady, Sarah Vivienne Bentley, Seyedali Mirjalili, and Simon Coghlan do not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and have disclosed no relevant affiliations beyond their academic appointment.

ref. Does AI pose an existential risk? We asked 5 experts – https://theconversation.com/does-ai-pose-an-existential-risk-we-asked-5-experts-266345

Why are the ICJ and ICC cases on Israel and Gaza taking so long?

Source: The Conversation – Global Perspectives – By Melanie O’Brien, Associate Professor of International Law, The University of Western Australia

In September this year, a UN-backed independent commission of inquiry released a report concluding Israel is committing genocide in Gaza. The report said:

Israeli authorities deliberately inflicted conditions of life on the Palestinians in Gaza calculated to destroy, in whole or in part, the Palestinians in Gaza, which is an underlying act of genocide.

This report followed two years of investigation, but it’s not the only investigation underway.

There are two international courts with current proceedings related to the Israel-Palestine conflict.

The first, a case before the International Court of Justice (ICJ), was brought against Israel by South Africa in late 2023.

In the second, International Criminal Court (ICC) prosecutors have been investigating potential crimes allegedly committed by anyone, whether Israeli or Palestinian, on the territory of Palestine since March 2021 – even before Hamas’ October 7 2023 attack.

So, if the UN-backed commission of inquiry could put together its report in two years, why are the cases in the ICJ and ICC taking so long? And where are these proceedings up to now?

The International Court of Justice case

A case before the ICJ often takes many years.

This is because the cases often involve multiple stages, including:

  • provisional measures (the ICJ version of an injunction, which is an interim court order to do or stop doing something)

  • preliminary objections (where a state may object to the ICJ’s jurisdiction in the case)

  • the merits case (where the court decides whether or not a country has violated international law).

Each stage involves the parties to the case making written submissions and undertaking oral proceedings. The court also makes decisions at each stage. States must be afforded due process throughout the proceedings.

Another reason for the lengthy period of cases is that states often ask for extensions for their written submissions.

In the South Africa v Israel case (which focuses on whether Israel is in breach of its obligation to prevent and punish genocide under the Genocide Convention), Israel requested and was granted a six-month extension to file its written submission, which is now not due until January 2026.

This means a hearing on the merits of the case may not take place until 2027 or later.

The International Criminal Court case

Cases before the ICC, which are brought against individuals, not states, are not like ordinary criminal cases in a domestic court.

These cases relate not just to one crime, but to many. Perpetrators are often charged with multiple offences.

As an example, Dominic Ongwen – a high-ranking member of the Lord’s Resistance Army operating in Uganda – was convicted of 61 counts of war crimes and crimes against humanity, each of which generally involved multiple victims.

This means the ICC has to collect and present a huge amount of evidence. This can include documents, photographs, and victim and witness testimony. It can take a long time, even years, to collect all this evidence.

Once the case goes to court, it can take many months of hearings, as all the evidence is presented.

The case may also be delayed if either the prosecution or defence asks for an extension at any point in the proceedings.

All of these elements are important to ensure any trial before the ICC is fair and carried out with due process.

In the case relating to Palestine, the ICC prosecutor moved quite quickly with investigations following Hamas’ October 7 2023 attack.

Arrest warrants were issued for Israeli Prime Minister Benjamin Netanyahu and then-Defence Minister Yoav Gallant in November 2024, with charges of crimes against humanity (including murder and persecution) and the war crime of starvation.

At the same time, arrest warrants were also issued for several Hamas leaders for war crimes and crimes against humanity relating to the October 7 atrocities. However, only one of those leaders, Mohammed Diab Ibrahim Al-Masri (Deif), is thought to be still alive.

The ICC now stands ready, willing and able to start a prosecution case against Netanyahu, Gallant or Hamas leaders such as Deif. All it needs is to have them in custody in The Hague.

However, the ICC has no police force. It relies on its member states to arrest and surrender wanted fugitives.

Interpol “Red Notices” may be issued for people wanted by the ICC. Recently, for instance, the Philippines arrested and surrendered its former president, Rodrigo Duterte, to the ICC, where he is now on trial for crimes against humanity.

Unfortunately, states seem less willing to arrest and surrender the Israeli prime minister. This not only hampers the ICC’s ability to proceed with prosecutions, but also attracts criticism that states are applying double standards.

Netanyahu has visited Hungary, an ICC member state, but was not arrested. Hungary has since announced it intends to withdraw from the ICC.

Upholding international law

So, it’s clear the ICC and the ICJ already have legal proceedings well underway relating to crimes in Gaza. These international courts are ready to hear legal arguments and make decisions on state responsibility or individual criminal liability for crimes committed in Palestine or against Palestinians.

What we need, however, is commitment from states to uphold international law.

Countries must comply with their international law obligations and cooperate with international courts, including by arresting and surrendering wanted fugitives to the International Criminal Court.

This is what will help speed the slow-turning wheels of justice.

The Conversation

Melanie O’Brien is president of the International Association of Genocide Scholars (IAGS), which in September 2025 passed a resolution declaring that Israel is committing war crimes, crimes against humanity and genocide in Gaza.

ref. Why are the ICJ and ICC cases on Israel and Gaza taking so long? – https://theconversation.com/why-are-the-icj-and-icc-cases-on-israel-and-gaza-taking-so-long-265674

Is Sanae Takaichi Japan’s Margaret Thatcher — or its next Liz Truss?

Source: The Conversation – Global Perspectives – By Sebastian Maslow, Associate Professor, International Relations, Contemporary Japanese Politics & Society, University of Tokyo

Under the slogan “#ChangeLDP”, Japan’s long-ruling Liberal Democratic Party (LDP) has elected Sanae Takaichi as its new leader. Pending a vote in the Diet’s lower house later this month, she is poised to become Japan’s next prime minister — and the first woman ever to hold the post.

At first glance, this appears historic. Takaichi is not only the LDP’s first female leader, but also one of the few postwar politicians to rise without inheriting a family seat. In a political culture dominated by male dynasties, her ascent seems to signal long-overdue change. In a country long criticised for gender inequality, it is a powerful image of progress.

In reality, however, Takaichi’s rise reflects a return to familiar politics. Her predecessor, Shigeru Ishiba, resigned after a year in office following electoral defeats. Those losses were not solely his doing. Ishiba had vowed to reform the LDP after scandals over ties to the Unification Church and slush funds, but he faced entrenched resistance.

As the party’s old factions re-emerged, senior figures rallied behind Takaichi’s leadership bid, reasserting the factional networks that have long defined Japanese conservatism. Takaichi has already signalled a return of the party’s old elite to the centre of power, while moving to end efforts to hold those involved in past scandals accountable.

Takaichi’s victory signals a party operating in crisis mode. In recent months, the LDP has lost voters to new populist right-wing parties such as Sanseito. To stop the bleeding, it has shifted toward a harder conservative line.

This pattern of “crisis and compensation” is not new. In the 1970s, threatened by the left, conservatives adopted welfare and environmental policies to retain power. Today, facing challenges from the populist right, the LDP has leaned on nationalism, anti-immigration rhetoric and historical revisionism.

A self-described social conservative, Takaichi opposes allowing married couples to retain separate surnames and rejects female succession to the imperial throne. She has expressed admiration for former British prime minister Margaret Thatcher, though whether her premiership will prove equally transformative remains to be seen.

A close ally of the late Shinzo Abe, Takaichi is widely viewed as the torchbearer of his political legacy. Economically, she pledges to continue the expansionary fiscal and monetary policies of “Abenomics”, prioritising growth over fiscal restraint.

With Japan’s debt-to-GDP ratio exceeding 260%, Takaichi has remained vague about how she would sustainably finance her plans to ease economic pressures on households.

Politically, she seeks to complete Abe’s project of “taking Japan back” from the constraints of the postwar regime, by revising the pacifist constitution and strengthening national defence.

In foreign policy, Takaichi supports Abe’s vision of a “Free and Open Indo-Pacific”. She advocates deeper cooperation with the United States and within the Quad, comprising the US, Australia, Japan and India. She also supports stronger regional partnerships to bolster deterrence.

Her hawkish stance on China and North Korea aligns with this agenda. She has vowed to increase defence spending — a move likely welcomed by the Trump administration in the US, which has urged Tokyo to approach NATO’s 5% benchmark. Japan’s defence budget is currently about 1.8% of GDP.

Takaichi also inherits a pending trade deal with Washington involving a Japanese investment package worth US$550 billion (A$832 billion), though many details remain unresolved.

Meanwhile, her record of visiting the controversial Yasukuni Shrine — which honours Japan’s war dead, including convicted war criminals — risks undoing recent progress in relations with South Korea and inflaming tensions with China. Such moves could undercut Japan’s efforts to act as a stabilising force in regional security.

Domestically, Takaichi’s greatest challenge will be to unite a fragmented LDP while addressing an increasingly frustrated electorate. Voters facing stagnant wages and rising living costs may have little patience for ideological battles.

Her incoming cabinet will also face a divided Diet (Japan’s parliament), where the LDP lacks majorities in both chambers. Expanding the ruling coalition is one option, but the LDP’s long-time partner Komeito remains wary of constitutional revision and nationalist policies. Takaichi has already hinted at courting newer populist parties that share her support for an anti-espionage law and tighter immigration controls.

In many respects, Takaichi’s rise encapsulates the LDP’s enduring survival strategy — adaptation without reinvention. The party’s claim to renewal masks a deeper continuity: reliance on charismatic conservative figures to preserve authority amid voter fatigue and opposition weakness. Her leadership may consolidate the LDP’s right-wing base, but offers little sign of institutional reform or ideological diversity.

So whether her premiership brings transformation or merely reinforces old patterns remains uncertain. Her commitment to economic stimulus may buy time, but Japan’s deeper structural challenges — ageing demographics, inequality, and regional decline — demand creativity the LDP has long deferred. If Takaichi focuses instead on constitutional revision and identity politics, she risks alienating centrist voters and exhausting public patience for culture wars.

A visit from US President Donald Trump later this month and a series of regional summits will provide her first diplomatic test. They will also offer a glimpse of how she balances assertive foreign policy with domestic credibility. Much will depend on her ability to convince a sceptical electorate that her leadership represents more than another chapter in the LDP’s politics of survival.

If she succeeds, Takaichi could redefine Japanese conservatism and secure a lasting legacy as her country’s first female prime minister. If she fails, the comparison to “Japan’s Margaret Thatcher” may quickly fade — replaced by that of Liz Truss, another short-lived leader undone by party division and unmet expectations.

The Conversation

Sebastian Maslow does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

ref. Is Sanae Takaichi Japan’s Margaret Thatcher — or its next Liz Truss? – https://theconversation.com/is-sanae-takaichi-japans-margaret-thatcher-or-its-next-liz-truss-266478