How does AI affect how we learn? A cognitive psychologist explains why you learn when the work is hard

Source: The Conversation – USA (2) – By Brian W. Stone, Associate Professor of Cognitive Psychology, Boise State University

When OpenAI released “study mode” in July 2025, the company touted ChatGPT’s educational benefits. “When ChatGPT is prompted to teach or tutor, it can significantly improve academic performance,” the company’s vice president of education told reporters at the product’s launch. But any dedicated teacher would be right to wonder: Is this just marketing, or does scholarly research really support such claims?

While generative AI tools are moving into classrooms at lightning speed, robust research on the question at hand hasn’t moved nearly as fast. Some early studies have shown benefits for certain groups such as computer programming students and English language learners. And there have been a number of other optimistic studies on AI in education, such as one published in the journal Nature in May 2025 suggesting that chatbots may aid learning and higher-order thinking. But scholars in the field have pointed to significant methodological weaknesses in many of these research papers.

Other studies have painted a grimmer picture, suggesting that AI may impair performance or cognitive abilities such as critical thinking. One paper showed that the more students used ChatGPT while learning, the worse they later performed on similar tasks when ChatGPT wasn’t available.

In other words, early research is only beginning to scratch the surface of how this technology will truly affect learning and cognition in the long run. Where else can we look for clues? As a cognitive psychologist who has studied how college students are using AI, I have found that my field offers valuable guidance for identifying when AI can be a brain booster and when it risks becoming a brain drain.

Skill comes from effort

Cognitive psychologists have argued that our thoughts and decisions are the result of two processing modes, commonly denoted as System 1 and System 2.

The former is a system of pattern matching, intuition and habit. It is fast and automatic, requiring little conscious attention or cognitive effort. Many of our routine daily activities – getting dressed, making coffee and riding a bike to work or school – fall into this category. System 2, on the other hand, is generally slow and deliberate, requiring more conscious attention and sometimes painful cognitive effort, but often yields more robust outputs.

We need both of these systems, but gaining knowledge and mastering new skills depend heavily on System 2. Struggle, friction and mental effort are crucial to the cognitive work of learning, remembering and strengthening connections in the brain. Every time a confident cyclist gets on a bike, they rely on the hard-won pattern recognition in their System 1 that they previously built up through many hours of effortful System 2 work spent learning to ride. You don’t get mastery and you can’t chunk information efficiently for higher-level processing without first putting in the cognitive effort and strain.

I tell my students the brain is a lot like a muscle: It takes genuine hard work to see gains. Without challenging that muscle, it won’t grow bigger.

What if a machine does the work for you?

Now imagine a robot that accompanies you to the gym and lifts the weights for you, no strain needed on your part. Before long, your own muscles will have atrophied and you’ll become reliant on the robot at home even for simple tasks like moving a heavy box.

AI, used poorly – to complete a quiz or write an essay, say – lets students bypass the very thing they need to develop knowledge and skills. It takes away the mental workout.

Using technology to effectively offload cognitive workouts can have a detrimental effect on learning and memory and can cause people to misread their own understanding or abilities, leading to what psychologists call metacognitive errors. Research has shown that habitually offloading car navigation to GPS may impair spatial memory and that using an external source like Google to answer questions makes people overconfident in their own personal knowledge and memory.

Learning and mastery come from effort, whether a powerful chatbot or AI tutor is involved or not; educators and students need to resist outsourcing that work.
Francesco Carta fotografo via Getty Images

Are there similar risks when students hand off cognitive tasks to AI? One study found that students researching a topic using ChatGPT instead of a traditional web search had lower cognitive load during the task – they didn’t have to think as hard – and produced worse reasoning about the topic they had researched. Surface-level use of AI may mean less cognitive burden in the moment, but this is akin to letting a robot do your gym workout for you. It ultimately leads to poorer thinking skills.

In another study, students using AI to revise their essays scored higher than those revising without AI, often by simply copying and pasting sentences from ChatGPT. But these students showed no more actual knowledge gain or knowledge transfer than their peers who worked without it. The AI group also engaged in fewer rigorous System 2 thinking processes. The authors warn that such “metacognitive laziness” may prompt short-term performance improvements but also lead to the stagnation of long-term skills.

Offloading can be useful once foundations are in place. But those foundations can’t be formed unless your brain does the initial work necessary to encode, connect and understand the issues you’re trying to master.

Using AI to support learning

Returning to the gym metaphor, it may be useful for students to think of AI as a personal trainer who can keep them on task by tracking and scaffolding learning and pushing them to work harder. AI has great potential as a scalable learning tool, an individualized tutor with a vast knowledge base that never sleeps.

AI technology companies are seeking to design just that: the ultimate tutor. In addition to OpenAI’s entry into education, in April 2025 Anthropic released its learning mode for Claude. These models are supposed to engage in Socratic dialogue, posing questions and providing hints rather than simply giving answers.

Early research indicates AI tutors can be beneficial but introduce problems as well. For example, one study found high school students reviewing math with ChatGPT performed worse than students who didn’t use AI. Some students used the base version and others a customized tutor version that gave hints without revealing answers. When students took an exam later without AI access, those who’d used base ChatGPT did much worse than a group who’d studied without AI, yet they didn’t realize their performance was worse. Those who’d studied with the tutor bot did no better than students who’d reviewed without AI, but they mistakenly thought they had done better. So AI didn’t help, and it introduced metacognitive errors.

Even as tutor modes are refined and improved, students have to actively select that mode and, for now, also have to play along, deftly providing context and guiding the chatbot away from worthless, low-level questions or sycophancy.

The latter issues may be fixed with better design, system prompts and custom interfaces. But the temptation to use default-mode AI to avoid hard work will remain a more fundamental and classic problem of teaching, course design and motivating students to avoid shortcuts that undermine their cognitive workout.

As with other complex technologies such as smartphones, the internet or even writing itself, it will take more time for researchers to fully understand the true range of AI’s effects on cognition and learning. In the end, the picture will likely be a nuanced one that depends heavily on context and use case.

But what we know about learning tells us that deep knowledge and mastery of a skill will always require a genuine cognitive workout – with or without AI.

The Conversation

Brian W. Stone does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

ref. How does AI affect how we learn? A cognitive psychologist explains why you learn when the work is hard – https://theconversation.com/how-does-ai-affect-how-we-learn-a-cognitive-psychologist-explains-why-you-learn-when-the-work-is-hard-262863

40 years ago, the first AIDS movies forced Americans to confront a disease they didn’t want to see

Source: The Conversation – USA (2) – By Scott Malia, Associate Professor of Theatre, College of the Holy Cross

‘Buddies,’ which premiered on Sept. 12, 1985, cost just $27,000 to make. Vinegar Syndrome/Roe Bressan/Frameline Distribution

First it was referred to as a “mysterious illness.” Later it was called “gay cancer,” “gay plague” and “GRID,” an acronym for gay-related immune deficiency. Most egregiously, some called it “4H disease” – shorthand for “homosexuals, heroin addicts, hemophiliacs and Haitians,” the populations most afflicted in the early days.

While these names were ultimately replaced by AIDS – and later, after the virus was identified, by HIV – they reflected two key realities about AIDS at the time: a lack of understanding about the disease and its strong association with gay men.

Although the first report in the mainstream press about AIDS appeared in 1981, the first movies to explore the disease wouldn’t come for four more years.

When the feature film “Buddies” and the television film “An Early Frost” premiered 40 years ago, in the fall of 1985, AIDS was belatedly breaking into the public consciousness.

Earlier that year, the first off-Broadway plays about AIDS opened: “As Is” by William Hoffman and “The Normal Heart” by writer and activist Larry Kramer. That summer, actor Rock Hudson disclosed that he had AIDS, becoming the first major celebrity to do so. Hudson, who died in October 1985, was a friend of President Ronald Reagan and Nancy Reagan. Reagan, who had been noticeably silent on the subject of the disease, would go on to make his first – albeit brief – public remarks about AIDS in September 1985.

Five days before Reagan’s speech, “Buddies,” an independent film made for US$27,000 and shot in nine days, premiered at the Castro Theatre in San Francisco on Sept. 12, 1985.

A film on the front lines

If you haven’t heard of “Buddies,” that’s not surprising; the film mostly played art houses and festivals before disappearing.

Its filmmaker, Arthur J. Bressan Jr., was best known for his gay pornographic films, although he’d also made documentaries such as “Gay USA.” “Buddies” would go on to reach a wider audience thanks to a 2018 video release by Vinegar Syndrome, a distribution company that focuses on restoring cult cinema, exploitation films and other obscure titles.

The film was inspired by the real-life buddies program at the Gay Men’s Health Crisis, an organization Kramer co-founded. At the time, many people dying of the disease had been rejected by family and friends, so a buddy might be the only person who visited a terminal AIDS patient.

The film feels like a play, in that most of the movie takes place in a single room and features just two characters: a naive young gay man named David and a young AIDS patient named Robert. Over the course of the film, the characters open up about their lives and their fears about the growing epidemic. It also includes a sex scene – something other early AIDS films completely avoided – in which David and Robert engage in safer sex.

AIDS packaged for the masses

The remarkably frank and intimate approach to the epidemic in “Buddies” contrasts sharply with the television film “An Early Frost,” which premiered on NBC on Nov. 11, 1985.

The film’s protagonist is a successful Chicago lawyer named Michael who hasn’t come out to his family, much to the distress of his long-term partner, Peter. When Michael finds out he has AIDS, he’s forced to come out to his parents, both as gay and as having AIDS.

Much of the film deals with Michael’s self-acceptance and his attempts to mend his relationships. Yet the production of “An Early Frost” was fraught with concerns about depicting both homosexuality and AIDS. Unlike David and Robert, Michael and Peter show no physical affection – they barely touch each other.

A promotional clip for ‘An Early Frost,’ which drew 34 million viewers when it premiered on NBC.

Knowledge of AIDS was still evolving – a test for HIV was approved in March 1985 – so screenwriters and life partners Daniel Lipman and Ron Cowen went through 13 revisions of the script. The real-life fears and misconceptions about how AIDS could and could not be transmitted were central to the storyline, adding extra pressure to be accurate in the face of evolving understanding of the virus.

Despite costing NBC $500,000 in lost advertising, “An Early Frost” drew 34 million viewers and was showered with Emmy nominations the following year.

A quilt of stories emerges

“Buddies” and “An Early Frost” opened up AIDS and HIV as subject matter for film and television.

They begat two lanes of HIV storytelling that continue to this day.

The first is an approach geared to mainstream audiences that tends to avoid controversial issues such as sex or religion and instead focuses on characters who grapple with both the illness and the stigma of the virus.

The second is an indie approach that’s often more confrontational, irreverent and angry at the injustice and indifference AIDS patients faced.

The former approach is seen in 1993’s “Philadelphia,” which earned Tom Hanks his first Oscar. The critically and commercially successful film shares a number of story points with “An Early Frost”: Hanks’ character, a big-city lawyer, finds out he is HIV positive and must confront bias head-on. HIV also features prominently in later films such as “Precious” (2009) and “Dallas Buyers Club” (2013), both of which, like “Philadelphia,” became awards darlings.

The edgier, more critical approach can be seen in the New Queer Cinema movement of the 1990s, a film movement that developed as a response to the epidemic. Gregg Araki’s “The Living End” (1992) is a key film in the movement: It tells the story of two HIV-positive men who become pseudo-vigilantes in the wake of their diagnoses.

In ‘The Living End,’ the HIV-positive protagonists go on a hedonistic rampage to take out their anger at the world.

Somewhere in between is “Longtime Companion” (1990), which was the first film about AIDS to receive a wide release and tracks the impact of the epidemic on a fictional group of gay men throughout the 1980s. The film was written by gay playwright and screenwriter Craig Lucas and directed by Norman Rene, who died of AIDS six years after the film’s release.

Studios still leery

In many ways, television is where the real breakthroughs have happened and continue to happen.

The first television episode to deal with AIDS appeared on the medical drama “St. Elsewhere” in 1983; AIDS was also the subject of episodes in the sitcoms “Mr. Belvedere,” “The Golden Girls” and “Designing Women.” “Killing All the Right People” was the title of the latter’s special episode – a phrase the show’s writer and co-creator Linda Bloodworth-Thomason heard while her mother was being treated for AIDS.

More recently, producer Ryan Murphy has made a cottage industry of representations of queer people, particularly those with HIV. His stage revivals of “The Normal Heart” and Mart Crowley’s 1968 play “The Boys in the Band” were later adapted into films for television and streaming. He also produced “Pose,” a three-season series about drag ball culture in the 1980s that stars queer characters of color, several of whom are HIV positive.

Yet for all of these strides, representations of HIV in film are still hard to come by. In fact, out of the 256 films released by major distributors in 2024, the number of HIV-positive characters amounted to … zero.

Perhaps movie studios are less willing to risk even a character with HIV given the drop in movie theater attendance in the age of streaming.

If you think it’s an exaggeration to suggest that people might not want to be seen going to the theater to watch a film about characters with HIV, the results of a 2021 GLAAD survey may surprise you.

It found that the stigma around HIV is still very high, particularly for HIV-positive people working in schools and hospitals. One-third of respondents were unaware that medication is available to prevent the transmission of HIV. More than half didn’t know that HIV-positive people can reach undetectable status and not transmit the virus to others.

Another important finding from the survey: Only about half of the nonqueer respondents had seen a TV show or film about someone with HIV.

This reflects both the progress made since “Buddies” and “An Early Frost” and also why these films still matter today. They were released at a time when there was almost no cultural representation of HIV, and misinformation and disinformation were rampant. There have been so many advances, in both the treatment of HIV and its visibility in popular culture. That visibility still matters, because there’s still much more that can be done to end the stigma.

The Conversation

Scott Malia does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

ref. 40 years ago, the first AIDS movies forced Americans to confront a disease they didn’t want to see – https://theconversation.com/40-years-ago-the-first-aids-movies-forced-americans-to-confront-a-disease-they-didnt-want-to-see-262421

Doctors are joining unions in a bid to improve working conditions and raise wages in a stressful health care system

Source: The Conversation – USA (3) – By Patrick Aguilar, Managing Director of Health, Washington University in St. Louis

Dr. Maryssa Miller speaks to fellow union members outside George Washington University Hospital in Washington, D.C., in 2024. Maansi Srivastava/The Washington Post via Getty Images

The share of doctors who belong to unions is rising quickly at a time when organized labor is losing ground with other professions. The Conversation U.S. asked Patrick Aguilar, a Washington University in St. Louis pulmonologist and management professor, to explain why the number of physicians joining unions is growing – a trend that appears likely to continue.

How long have there been health care unions?

U.S. nurses first joined labor unions in 1896. Today, about 1 in 5 registered nurses are union members, twice the rate of unionization in all professions.

The first physicians’ union formed in 1934, when hospital residents – doctors in training who then, as now, tended to be paid relatively little and to work long hours – organized to demand higher pay and shorter shifts. For the next eight decades, those unions grew slowly.

But the pace has picked up. The share of doctors who belong to unions rose from 5.7% in 2014 to 7.2% in 2019. By 2024, an estimated 8% of physicians were union members.

This swift growth contrasts with declining union membership overall. The share of American workers in unions fell by more than half, from 20.1% to 9.9%, between 1983 and 2024.

Residents and interns are particularly interested in joining unions. Nearly 2 in 3 have said they might want to join one. Membership in the Committee of Interns and Residents, a chapter of Service Employees International Union, rose by nearly 14% to 37,000 between late 2024 and early 2025. By September 2025, the union was saying that its ranks had grown to more than 40,000.

Several other U.S. unions also represent physicians. Doctors Council, which is also affiliated with Service Employees International Union, represents physicians, dentists, optometrists, podiatrists and veterinarians. The Union of American Physicians and Dentists, part of the American Federation of State, County and Municipal Employees, says it has at least 7,000 members.

Aren’t doctors too rich for labor organizing?

Just like labor unions that represent electricians or teachers, unions that represent doctors seek better working conditions, higher pay and better benefits for their members. While the typical U.S. doctor earns nearly US$240,000 a year, about four times what the typical American worker makes, their compensation varies widely depending on their medical specialty. A pediatric surgeon, for example, can earn twice as much as a pediatrician.

Despite their high wages, in a poll of over 1,000 physicians, as many as 15% said they had cut back on their personal expenses, and 40% expected to delay retirement for financial reasons. The education and training required to become a doctor is lengthy and expensive, often leading to large amounts of student debt.

Additionally, many physicians are compensated for patient visits and not for work done outside of the exam room. The extra hours needed to document work, address patient concerns and maintain continuing education are often uncompensated, significantly reducing physicians’ effective hourly earnings.

Other unions advocate for higher wages and better conditions in well-compensated professions.

The National Football League Players Association is an example of a union with highly paid members that still advocates for their increased compensation. NFL players now earn a median salary of $860,000.

Baseball players earn even more. They have a median salary of $1.35 million, and all of the players are represented by the Major League Baseball Players Association, a union.

Many doctors are experiencing more stress due to relatively recent workplace changes.
Juanmoni/E+ via Getty Images

Why would doctors join unions?

An American Medical Association survey conducted in 1983 found that 75.8% of physicians were owners of their primary clinical practice. Four decades later, nearly 80% of physicians are employed by health care systems or other corporations.

As employees, physicians are now eligible to unionize and may have an interest in doing so to bargain with employers who set working conditions and compensation.

Residents and fellows, on the other hand, have been employees for much longer because of the structure of their training programs. Residents work longer hours, are paid significantly less and are obligated to complete their training programs in order to attain specialty certification.

These differences help explain the longer history of labor organizing for physician trainees.

Surveys point to several other possible causes besides concerns about employers.

An American Medical Association survey of 13,000 physicians, nurse practitioners and physician assistants in 2022 reflected rates of burnout exceeding 50% in several key specialties. More than half of those responding said they felt undervalued by their employer.

In 2023, the University of Michigan’s Center for Health and Research Transformation surveyed over 29,000 Michigan physicians. About 85% of them said administrative and regulatory requirements were a significant source of workplace stress.

The widespread adoption of electronic health records over the past 25 years, which has improved some aspects of medical diagnosis and treatment, has also given doctors more administrative responsibilities. Doctors spend nearly two additional hours updating electronic health records or doing related administrative tasks for every hour they spend with patients, according to one estimate.

Keeping the records up to date can contribute to burnout.

Many doctors say that they spend twice as much time dealing with electronic health records as they do with their own patients.
Ariel Skelley/DigitalVision via Getty Images

Are doctors worried about job security?

In recent years, nurse practitioners and physician assistants have taken on responsibilities previously reserved for doctors. Nurse practitioners or physician assistants saw patients for about 1 in 4 medical appointments, according to a 2023 study, up from around 1 in 5 a decade earlier.

Given significant differences in compensation between physicians and other kinds of health care providers, this trend raises concerns about the potential for health care employers to employ fewer doctors to save money on staff salaries.

Separately, there are growing concerns about the potential for the use of artificial intelligence and automation to replace some of the tasks that doctors do today.

Can labor organizing harm patients?

In April 2025, the American College of Physicians, which has 160,000 members, released a position paper with recommendations for responsible collective bargaining for doctors.

The group felt compelled to encourage the ethical engagement of its members in labor organizing because physicians’ work is often lifesaving, and disrupting it through strikes or other labor actions can be dangerous.

No study has empirically evaluated whether a doctor’s union membership affects their patients’ health. However, a 2022 meta-analysis of 17 studies found no significant impact on death rates when health care workers go on strike.

Despite the potential benefits, some doctors remain concerned that unionization may create divides among physicians, interfere with their ability in some cases to negotiate directly with their employers, and add layers of bureaucracy that don’t do patients or medical professionals any good.

Do doctors ever go on strike?

It’s historically been rare in the U.S., but that could be changing.

In January 2025, 70 doctors who belong to the Pacific Northwest Hospital Medicine Association joined thousands of nurses in a strike against Portland, Oregon-based Providence Health after more than a year of failed contract negotiations.

The strike lasted 27 days, delaying some elective procedures and making some emergency room wait times longer. Some patients had to go to other hospitals. The agreement the hospital ultimately reached with physicians boosted pay, expanded sick leave and included a commitment to change staffing models.

In June 2025, picket lines formed outside of four Minnesota health clinics for the first time in the state’s history. Members of the Doctors Council SEIU union were protesting after more than 18 months of failed negotiations for a new contract. The doctors, who all work for the Allina Health chain of hospitals, health clinics and urgent care sites, are seeking higher compensation, smaller workloads and more support staff.

Although no timeline has been announced, union members have authorized a strike if negotiations continue to fail. As of early September 2025, those negotiations were ongoing.

The Conversation

Patrick Aguilar does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

ref. Doctors are joining unions in a bid to improve working conditions and raise wages in a stressful health care system – https://theconversation.com/doctors-are-joining-unions-in-a-bid-to-improve-working-conditions-and-raise-wages-in-a-stressful-health-care-system-259232

Sacred texts and ‘little bells’: The building blocks of Arvo Pärt’s musical masterpieces

Source: The Conversation – USA (3) – By Jeffers Engelhardt, Professor of Music, Amherst College

For years, Arvo Pärt has been one of the most performed contemporary classical composers in the world. Calle Hesslefors/ullstein bild via Getty Images

The Estonian composer Arvo Pärt, who turns 90 on Sept. 11, 2025, is one of the most frequently performed contemporary classical composers in the world. Beyond the concert stage and cathedral choir, Pärt’s music features heavily in film and television soundtracks: “There Will Be Blood,” “The Thin Red Line” or “Wit,” for instance. It is often used to evoke profound emotions and transcendent spirituality.

Many Estonians grew up hearing the music Pärt wrote for children’s films and Estonian cinema classics in the 1960s and ‘70s. Popes and Orthodox patriarchs honor him, and Pärt’s music has received the highest levels of recognition, including Grammy Awards. In 2025, Pärt is being celebrated in Estonia, at Carnegie Hall and around the world.

Behind much of Pärt’s popularity – and his listeners’ devotion – is his engagement with sacred Christian texts and Orthodox Christian spirituality. Yet his music has inspired a broad range of artists and thinkers: Icelandic singer Björk, who admires its beauty and discipline; the theater artist Robert Wilson, who was drawn to its quality of time; and Christian theologians, who appreciate its “bright sadness.”

As a music scholar with expertise in Estonian music and Orthodox Christianity, and a longtime Pärt fan, I am fascinated by how Pärt’s exploration of Christian traditions – at once subtle and fervent – appeals to so many. How does this happen musically?

A rehearsal of Arvo Pärt’s ‘Fratres’ in St. Martin Church in Idstein, Germany, in 2023.
Gerda Arendt via Wikimedia Commons

Tintinnabuli

Pärt emerged from a period of personal artistic crisis in 1976. In a now-legendary concert, he introduced the world to new music composed using a technique he invented called “tintinnabuli,” an onomatopoeic Latin word meaning “little bells.”

Tintinnabuli is music reduced to its elemental components: simple melodic lines derived from sacred Christian texts or mathematical designs and married to basic harmonies. As Pärt describes it, tintinnabuli draws its power from reduction rather than complexity, freeing the elemental beauty of his music and the message of his texts.

This was a departure from Pärt’s earlier modernist and experimental music, and expressed a yearslong struggle to reconcile his newfound commitment to Orthodox Christianity and his rigorous artistic ideals. Pärt’s journey is documented in the dozens of notebooks he kept, beginning in the 1970s: religious texts, diary entries, drawings and ideas for musical compositions – a documentary trove of Christian musical creativity.

Tintinnabuli was inspired, in part, by Pärt’s interest in much earlier styles of Christian music, including Gregorian chant – the single-voice singing of Roman Catholicism – and Renaissance polyphony, which weaves together multiple melodic lines. Because of its associations with the church, this music was ideologically fraught in anti-religious Soviet Estonia.

In Pärt’s notebooks from the 1970s, there are pages and pages of musical sketches where he works out early music-inspired approaches to texts and prayers – the seeds of tintinnabuli. The technique became his answer to existential creative questions: How can music reconcile human subjectivity and divine truths? How can a composer get out of the way, so to speak, to let the sounds of sacred texts resonate? How can artists and audiences approach music so that, to use Pärt’s famous expression, “every blade of grass has the status of a flower”?

In a 2003 conversation with the Italian musicologist Enzo Restagno, Pärt’s wife, Nora, offered an equation to understand how tintinnabuli works: 1+1 = 1.

The first element – the first “1” – is melody, as singer and conductor Paul Hillier lays out in his 1997 book on Pärt. Melody expresses a subjective experience of moving through the world. It centers around a given musical note: the “A” key on the piano, for instance.

The second element – the “+1” – is tintinnabuli itself: the presence of three pitches, sounding together as a bell-like halo: A, C, E.

Finally, the third element – the “= 1” – is the unity of melodic and tintinnabuli voices in a single sound, oriented around a central musical note.

Formulas

Here’s the crux of Arvo Pärt’s work: the relationship of 1+1, melody and harmony, is ordered not by moment-to-moment choices, but by formulas meant to magnify the sound and structure of sacred texts.

A simple tintinnabuli formula might go like this: If the melody rises four notes with four syllables of text, the notes of the tintinnabuli triad will follow beneath that line without overlapping. It supports and steers. Or if the melody falls five notes with five syllables of text, the notes of the tintinnabuli triad will alternate above and below that line to create a different musical texture – all organized around symmetry.
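The pairing of melody and triad described above can be made concrete in a few lines of code. This is only an illustrative sketch of one simple tintinnabuli rule – taking, for each melody note, the nearest A-minor triad pitch below it – not a reconstruction of Pärt's actual working method; the example melody, the triad choice and the function name are all assumptions for illustration.

```python
# Illustrative sketch of a simple tintinnabuli rule: for each melody
# note, find the nearest A-minor triad pitch (A, C, E) strictly below it,
# so the triad voice "follows beneath the line without overlapping".

TRIAD = {9, 0, 4}  # pitch classes of A, C and E (C = 0 in MIDI pitch-class terms)

def tintinnabuli_below(melody_midi):
    """For each melody note (a MIDI note number), return the nearest
    triad pitch strictly below it."""
    voice = []
    for note in melody_midi:
        t = note - 1
        while t % 12 not in TRIAD:  # step down until we land on A, C or E
            t -= 1
        voice.append(t)
    return voice

# A rising four-note melody starting on A4 (MIDI 69): A, B, C, D
melody = [69, 71, 72, 74]
print(tintinnabuli_below(melody))  # → [64, 69, 69, 72], i.e. E4, A4, A4, C5
```

Run on the rising four-note melody, the sketch places the triad pitches E, A, A, C beneath the line – the kind of supporting, non-overlapping texture the formula describes.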

‘Spiegel im Spiegel,’ or ‘Mirror in the Mirror,’ is a classic example of Arvo Pärt’s tintinnabuli style.

Pärt often lets the number of syllables in a word, the length of a phrase or verse, and the sound of a language shape his formulas. That is why Pärt’s music in English, with its many single-syllable words, consonant clusters and diphthongs, sounds one way. And that is why his music in Church Slavonic, the liturgical language for many Orthodox Christians, sounds another way.

Tintinnabuli is about simplicity and beauty. The genius of Pärt’s work is how his formulas feel like the musical expression of timeless truths. In a 1978 interview with the journalist Ivalo Randalu, Nora Pärt recalled what her husband once said about tintinnabuli’s formulas: “I know a great secret, but I know it only through music, and I can only express it through music.”

Silence

If this all seems coldly formulaic, it isn’t. There is a sensuousness to Arvo Pärt’s tintinnabuli music that connects with listeners’ bodily experience. Pärt’s formulas, born out of long, prayerful periods with sacred texts, offer beauty in the warmth and friction of relationships: melody and tintinnabuli, word and the limits of language, sounds and silence.

“For me, ‘silent’ means the ‘nothing’ from which God created the world,” Pärt told the Estonian musicologist Leo Normet in 1988. “Ideally, a silent pause is something sacred.”

‘Tabula rasa’ was written in 1977, just after Arvo Pärt had introduced the world to his ‘tintinnabuli’ technique.

Silence is a common trope in Pärt’s music – indeed, the second movement of his tintinnabuli masterpiece “Tabula rasa,” the title work on the 1984 ECM Records release that brought him to global attention, is “Silentium.”

Any sounding music is not silent, of course – and, in human terms, silence is largely metaphorical, since we cannot escape sound into the silence of absolute zero or a vacuum.

But Pärt’s silence is different. It is spiritual stillness communicated through his musical formulas but made sensible through the action of human performers. It is a composer’s silence as he gets out of the way of a sacred text’s musicality to communicate its truth. It is no paradox, then, that Pärt’s popularity today may well arise from the silence of his music.

The Conversation

Jeffers Engelhardt does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

ref. Sacred texts and ‘little bells’: The building blocks of Arvo Pärt’s musical masterpieces – https://theconversation.com/sacred-texts-and-little-bells-the-building-blocks-of-arvo-parts-musical-masterpieces-261519

Two seventh-century people found with west African ancestry – a story of diversity and integration in early Anglo-Saxon society

Source: The Conversation – UK – By Duncan Sayer, Professor in Archaeology, University of Lancashire

In 2022, archaeologists analysed ancient DNA from a number of early medieval cemeteries and found two individuals who stood out. One was from Updown Eastry in Kent, known as Updown girl, and the other was a young man from Worth Matravers in Dorset. Both were dated to the 7th century and both appeared to have west African heritage.

Two recent papers on these findings, along with other discoveries, highlight that English people from this time with west African heritage spanned generations and social status. The burials of these individuals also show that they were integrated into their respective communities. For example, Updown girl was buried next to her maternal relatives.

As a result, the presence of African heritage should not be a surprise. Early medieval society was much richer and more globally connected than most people believe.

Updown Eastry is a cemetery associated with the early Anglo-Saxon Kentish elite and part of a royal network. Updown girl was aged between 11 and 13 at the time of her death and was buried around the middle of the seventh century.

An analysis of her autosomal DNA (which derives from both parents) found she was 67% continental northern European and 33% west African – most closely related to modern-day Esan and Yoruba populations in Nigeria. One of her great grandparents was 100% west African. Some of her maternal relatives were buried close by and their ancestry derived from northern Europe.

The second burial was of a young man aged between 17 and 25 at the time of his death. He was found in a grave with an unrelated adult male in a small cemetery that was in use for around 100 years, with his burial dated to between AD605 and AD650.

Analysis of the site shows that the burial population had predominantly (77%) western British and Irish ancestry. Worth Matravers contained four primary family groups mostly related along the maternal line, suggesting a degree of matrilocality (where women remain after marriage) within this community. The young man also stood out because his Y-chromosome DNA was consistent with west African ancestry (25%) coming from his grandfather.

Some modern ideas of medieval England paint it as an insular place with little or no diversity. However, England was much more connected to the rest of the world, and its society was, as a result, much less homogenous than we imagine. Some early Anglo-Saxons had brown eyes and African ancestors.

Finds connecting Britain to the world

Royal burials like that at Sutton Hoo, Suffolk, and Prittlewell, Essex, contained objects from far afield, including Byzantine silver bowls and a jug from the eastern Mediterranean.

Amethysts and garnets have been found in seventh century jewellery and these stones were mined in Sri Lanka and India. Analysis of loop-like bag openings found in female graves from the fifth to seventh century revealed that these were made from African elephant ivory.

The Byzantine reconquest of north Africa in AD634 to AD635 provided new sources of sub-Saharan gold. In the west of Britain, fragments of red slip ware (distinctive Byzantine amphora vessels or pottery) have been found at sites associated with elites, like Tintagel in northern Cornwall. There is also evidence of glass beads made in early medieval England being found in contemporary Tanzania.

The newly emerging elite of seventh century England were looking east and were building new ideas about governance derived from old or far-flung places. Christianity, for instance, came from Rome, part of Byzantium.

There are also historical references to people from the African continent known to be part of society at the time. For instance, in the late seventh century, the African abbot Hadrian joined Archbishop Theodore in Canterbury. And later, in the 10th century, an Old English vernacular verse from Exodus described “the African maiden on the ocean’s shore, adorned with gold”.

While we cannot rule out the possibility that the ancestors of Updown girl and the young man from Worth Matravers had been slaves, we must also be careful of interpreting the evidence through a post-colonial bias. The closer we look, the richer and more complex the connections between Britain, Byzantium and Africa are.

We do not know if these Africans were slaves, but we do know that early medieval slaves would have included western British, Frankish and Anglo-Saxon people too.

At a royal centre like Eastry in Kent, many accents might be found, as well as different ways of wearing clothing. These places contained well-travelled people connected via family and marriage. DNA and isotopic studies also show that movement for marriage was common among early medieval elite women, who married into wealthy families, particularly in the east of Britain. So, we must also consider other possibilities alongside slavery, including religion, trading, travelling, marriage and seafaring.

Indeed the difference between Updown Eastry, an elite site, and Worth Matravers, a small coastal community, is critical to understanding the range of possibilities. African ancestry is found at both ends of the social spectrum and in the east and west of England.

Though England was more diverse than we think, life was not easy and, like these two examples, people died young. As well as disease, death by violence was also known – the weapons we find in early medieval graves were displayed as well as being functional objects.

DNA and cemetery evidence points to the importance of kinship and family for survival. These units provided shelter, protection, food and care. The evidence suggests that both of these African descendants were fully integrated into their respective communities sharing family ties and even the grave.


The Conversation

Duncan Sayer does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

ref. Two seventh-century people found with west African ancestry – a story of diversity and integration in early Anglo-Saxon society – https://theconversation.com/two-seventh-century-people-found-with-west-african-ancestry-a-story-of-diversity-and-integration-in-early-anglo-saxon-society-263375

Environmental pressures need not always spark conflict – lessons from history show how crisis can be avoided

Source: The Conversation – UK – By Jay Silverstein, Senior Lecturer in the Department of Chemistry and Forensics, Nottingham Trent University

Afghan farmers plough a field while US soldiers patrol in Helmand province in 2010.
ResoluteSupportMedia / Flickr, CC BY-ND

The expectation that competition for dwindling resources drives societies towards conflict has shaped much of the discourse around climate change and warfare. As resources become increasingly vulnerable to environmental fluctuations, climate change is often framed as a trigger for violence.

In one study from 2012, German-American archaeologist Karl Butzer examined the conditions leading to the collapse of ancient states. Among the primary stressors he identified were climate anxieties and food shortages.

States that could not adapt followed a path towards failure. This included pronounced militarisation and increased internal and external warfare. Butzer’s model can be applied to collapsed societies throughout history – and to modern societies in the process of dissolution.


Wars and climate change are inextricably linked. Climate change can increase the likelihood of violent conflict by intensifying resource scarcity and displacement, while conflict itself accelerates environmental damage. This article is part of a series, War on climate, which explores the relationship between climate issues and global conflicts.


Bronze age aridification in Mesopotamia from roughly 2200BC to 2100BC, for example, is correlated with an escalation of violence there and the collapse of the Akkadian empire. Some researchers also identify drought as a major factor in recent wars in east Africa.

There is a wide consensus that climatic stress contributes to regional escalations of violence when it has an impact on food production. Yet historical evidence reveals a more complex reality. While conflict can arise from resource scarcity and competition, societal responses to environmental stress also depend on other factors – including cultural traditions, technological ingenuity and leadership decisions.

The temptation to draw a direct correlation between climate stress and war is both reductionist and misleading. Such a perspective risks surrendering human agency to a deterministic “law of nature” – a law to which humanity need not subscribe.

Catalysing transformation

In the first half of the 20th century, researchers grappled with the Malthusian dilemma: the fear that population growth would outpace the environment’s carrying capacity. The reality of this dynamic has contributed to the collapse of certain civilisations around the world.

These include the Maya and Indus Valley civilisations in Mesoamerica and south Asia respectively. The same applies to the Hittites in what is now modern-day Turkey and the Chaco Canyon culture in the US south-west.

Civilisations affected by climate stress:

A table documenting examples of civilisations affected by climate stress.
Many civilisations have been affected by climate stress in the past.
Jay Silverstein, CC BY-NC-ND

However, history is equally rich with examples of societies that have successfully averted crisis through innovation and adaptation. From the dawn of agriculture (10,000BC) onward, human ingenuity has consistently expanded the boundaries of environmental possibility. It has also intensified the means of food production.

Irrigation systems, efficient planting techniques and the selective breeding of crops and livestock enabled early agricultural societies to flourish. In Roman (8th century BC to 5th century AD) and early medieval Europe (5th to 8th centuries AD), the development of iron ploughshares revolutionised soil cultivation. And water-lifting technologies – from the Egyptian shaduf to Chinese water wheels and Persian windmills – expanded arable land and intensified production.

In the 19th century, when Europe’s population surged and natural fertiliser supplies such as guano became strained, the Haber-Bosch process revolutionised agriculture by enabling nitrogen to be extracted from the atmosphere. This allowed Europe to meet its growing demand for food and, incidentally, munitions.

Danish economist Esther Boserup’s work from 1965, The Conditions of Agricultural Growth, challenged the Malthusian orthodoxy. It demonstrated that population pressure can stimulate technological innovation. Boserup’s insights remain profoundly relevant today.

As humanity confronts an escalating environmental crisis driven by global warming, we stand at another historic inflection point. The reflexive response to climate stress – political instability and conflict – should be challenged by a renewed commitment to adaptation, cooperation and innovation.

The measuring shaft of a nilometer.
A nilometer, which was used to gauge the optimal time to open agricultural canals in ancient Egypt.
Baldiri / Wikimedia Commons, CC BY-NC-SA

Dwindling military superiority

There are many examples of societies successfully overcoming environmental threats. But our history is also full of failed civilisations that more often than not suffered ecological catastrophe.

In many cases, dwindling resources and the lure of wealth in neighbouring societies contributed to invasion and military confrontation. Droughts have been implicated in militaristic migration in central Asia, such as the westward movement of the Huns and the southward push of the Aryans.

Asymmetries in military power can encourage or deter conflict. They offer opportunities for reward or impose strategic constraints. And while military superiority has largely shielded the wealthiest nations in the modern era, this protection may erode in the foreseeable future.

Natural disasters that erode security infrastructure are becoming increasingly frequent and severe. In 2018, for example, two hurricanes caused a combined US$8.3 billion (£6.2 billion) in damage to two military facilities in the US. There has also been a proliferation of inexpensive military technologies like drones.

Both of these developments could create new opportunities to challenge dominant powers. Under such conditions, increases in military conflict should be expected in the coming decades.

In my view, dramatic action must be taken to avoid a spiral of conflict. Ideals, knowledge and data should be translated into political and economic will. This will require coordinated efforts by every nation.

The growth of organisations such as the Center for Climate and Security, a US-based research institute focused on systemic climate and ecological security risks, signals movement in the right direction. Yet such organisations face a steep climb in their efforts to translate geopolitical climate issues into meaningful political action.

One of the main barriers is the rise of anti-intellectualism and populist politics. Often aligned with unregulated capitalism, this can undermine the very strategies needed to address the unfolding crisis.

If we are to avoid human tragedy, we will need to transform our worldview. This requires educating those unaware of the causes and consequences of global warming. It also means holding accountable those whose greed and lust for power have made them adversaries of life on Earth.

History tells us that environmental stress need not lead to war. It can instead catalyse transformation. The path forward lies not in fatalism, but in harnessing the full spectrum of human creativity and resilience.

The Conversation

Jay Silverstein does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

ref. Environmental pressures need not always spark conflict – lessons from history show how crisis can be avoided – https://theconversation.com/environmental-pressures-need-not-always-spark-conflict-lessons-from-history-show-how-crisis-can-be-avoided-262300

International law isn’t dead. But the impunity seen in Gaza urgently needs to be addressed

Source: The Conversation – UK – By James Sweeney, Professor, Lancaster Law School, Lancaster University

Philippe Lazzarini, the commissioner general of the United Nations Relief and Works Agency for Palestine Refugees (Unrwa) says that Gaza is “becoming the graveyard of international humanitarian law”.

International humanitarian law (IHL) regulates the conduct of armed conflict, which is the legal expression for war. It covers everything from what is a lawful target, to the treatment of prisoners and injured people, and even to the testing of new weapons. The main rules of IHL can be found in the Geneva Conventions of 1949.

Lazzarini, though, has gone so far as to say that we “have made the Geneva convention[s] almost irrelevant. What is happening and being accepted today in Gaza is not something that can be isolated; it will become the new norm for all future conflicts”.

There can be no doubt that the situation in Gaza is dire. There is plausible evidence of the Israeli military carrying out war crimes there in its military operation triggered by, and commenced soon after, the devastating attack by Hamas against Israel on October 7 2023. The Hamas attack itself involved the commission of war crimes – as did its taking of hostages and the subsequent treatment of them in captivity. But to say that all these atrocities render the law irrelevant is premature.

There are several reasons for this. One is that there is a difference between the existence of an important rule and its enforcement. Even where a rule is not being enforced, international law gives us a precise language to articulate exactly what is wrong with the situation. I recently wrote that what appears to have been a deliberate “double tap” attack against Nasser hospital in Khan Younis, southern Gaza, on August 25, violated IHL and can be seen as a war crime.

I have also written that other Israeli operations in Gaza amount to a crime against humanity, as they are part of a widespread or systematic attack against a civilian population. I, and others, have seriously contemplated the idea that a genocide is under way.

These legal expressions are important, and to accuse anyone of perpetrating the crimes that they embody has very serious political consequences. That is why, however implausibly, states like Israel and Russia have tried to maintain that they are totally compliant with international law.

Enforcement and impunity

Returning to the issue of enforcement, it is important to recognise that there are in fact several legal and political forums that provide an opportunity for it. These include the organs of the UN, including the International Court of Justice. There are also International Criminal Court arrest warrants for key leaders in respect of the events in both Gaza and Ukraine.

States that are signed up to the International Criminal Court are meant to be under an obligation to arrest people who are wanted by it. Yet, several opportunities to arrest Vladimir Putin have been spurned, like when he visited Mongolia recently. Likewise, Hungary failed to arrest the Israeli prime minister, Benjamin Netanyahu, when he visited earlier this year (Hungary has since denounced the court).

It’s debatable whether either of them will ever face trial. But the arrest warrants have already had political consequences. Putin was unable to attend the Brics summit in South Africa in 2023 because that country recognises the ICC. There were mixed reactions internationally to the news of the warrant against Netanyahu, with some states affirming their support for the court, or at least their intention to comply with the warrants if necessary.

But history tells us that leaders who once seemed untouchable have eventually faced justice in one form or another.

Did the surviving leading Nazis ever expect to go on trial at a hastily convened military tribunal in Nuremberg? Did Augusto Pinochet expect that he would die under house arrest in his native Chile, facing trial for his actions during and since the military coup of 1973? Or that Saddam Hussein would face the death penalty and be hanged for his crimes in Iraq? Or that Libya’s Muammar Gaddafi would be ousted, abused, and then killed by a militia? Probably not.

A fair trial at the ICC would be preferable to most of those examples.

Justice for violations of international humanitarian law clearly needs to be seen to be done – if we don’t want Lazzarini’s catastrophic prediction to become a reality.

The Conversation

James Sweeney does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

ref. International law isn’t dead. But the impunity seen in Gaza urgently needs to be addressed – https://theconversation.com/international-law-isnt-dead-but-the-impunity-seen-in-gaza-urgently-needs-to-be-addressed-264520

Deadlier than varroa, a new honey-bee parasite is spreading around the world

Source: The Conversation – Global Perspectives – By Jean-Pierre Scheerlinck, Honorary Professor Fellow, Melbourne Veterinary School, Faculty of Science, The University of Melbourne

Albert Stoynov/Unsplash

For decades, beekeepers have fought a tiny parasite called Varroa destructor, which has devastated honey-bee colonies around the world. But an even deadlier mite, Tropilaelaps mercedesae – or “tropi” – is on the march. Beekeepers fear it will wreak even greater havoc than varroa – and the ripple effects may be felt by the billions of people around the world who rely on honey bee-pollinated plants.

From Asia to Europe

Tropi’s natural host is the giant honey-bee (Apis dorsata), common across South and Southeast Asia. At some point, the mite jumped to the western honey-bee (Apis mellifera), the species kept by beekeepers around the world. Because this host is widespread, the parasite has steadily moved westwards.

It has now been detected in Ukraine, Georgia and southern Russia, and is suspected to be in Iran and Turkey. From there, it is expected to enter eastern Europe, then spread across the continent. Australia and North America are also at risk.

Why tropi spreads so fast

Like varroa, tropi is a tiny mite that breeds inside capped brood cells – the honeycomb cells, sealed with a layer of wax, in which honey-bee late-stage larvae and pupae develop. The mite feeds on bee pupae and transmits lethal viruses, such as deformed wing virus – the deadliest of the bee viruses. But there are crucial differences.

Varroa can survive on adult bees for long periods, but tropi cannot. Outside brood cells, it lives only a few days, scurrying across the comb in search of a new larva.

Because tropi spends more time in capped cells, it reproduces quickly. A capped cell that contains a female varroa will result in one or two mated varroa offspring emerging with the adult bee. Tropi offspring develop faster inside a capped cell than varroa offspring, so a tropi “mother” may produce more emerging offspring than a varroa-infested cell, more quickly overwhelming the colony.

As a result, colonies infested with tropi can collapse far faster than those plagued by varroa.

Small white insect larvae with brown parasites attached.
Tropi is a tiny mite that feeds on honey-bee pupae and transmits lethal viruses.
Denis Anderson/CSIRO

Current control methods

In parts of Asia where the parasite is already established, small-scale and commercial beekeepers often manage it by caging the queen for about five weeks.

With no eggs being laid, no brood develops, leaving the mites without a food source. This method is practical where beekeepers manage dozens of hives, but not in places like Europe where commercial operations often involve thousands.

Another option is treating the beehive with formic acid, which penetrates brood cell caps and kills the mite without necessarily harming the developing bee, provided concentrations are kept low. This treatment may offer beekeepers a practical tool.

Why varroa treatments won’t work

Many wonder whether the chemicals used against varroa could also fight tropi. The answer is, mostly no.

Varroa spends much of its life outside of a capped cell clinging to adult bees, where it comes into contact with mite-killing chemicals known as miticides spread through the colony on bee bodies. By contrast, tropi rarely attaches to adults, instead darting across comb surfaces.

Because of this, it is far less exposed to chemical residues. Treatments designed for varroa are often ineffective against the faster-breeding tropi.

Managing both mites together will be particularly difficult. Combining treatments risks harming colonies or contaminating honey. For instance, formic acid for tropi and insecticides such as amitraz for varroa might interact at even low levels, killing the bees as well as the parasites.

There is also the danger of resistance. Over-use of varroa treatments has already produced resistant strains, reducing the effectiveness of several once-reliable chemicals. Introducing more compounds to fight tropi, without careful integrated pest management, could accelerate this process and leave beekeepers with few effective tools.

A brown and yellow beehive.
Bee colonies infested with tropi can collapse far faster than those plagued by varroa.
Nick Pitsas/CSIRO

The wider impact

The spread of tropi will devastate not only beekeepers but also agriculture more broadly. Honey-bees are critical pollinators of many crops. Heavier hive losses will raise costs for both honey production and pollination services, affecting food prices and availability.

Research is underway in countries such as Thailand and China to develop better management strategies. But unless effective and practical treatments are found soon, the spread of this new mite around the world could be catastrophic.

The story of varroa shows how quickly a single parasite can transform global beekeeping. Tropi has the potential to be even worse: it spreads faster, kills colonies more quickly, and is harder to control with existing methods.


The author would like to acknowledge the contribution of Robert Owen, a beekeeper who completed a PhD on the varroa mite at the University of Melbourne in 2022, to this article.

The Conversation

Jean-Pierre Scheerlinck does work for the CIS, drafting the Australian Pollination Security Status Report. He has received funding from the ARC and NHMRC.

ref. Deadlier than varroa, a new honey-bee parasite is spreading around the world – https://theconversation.com/deadlier-than-varroa-a-new-honey-bee-parasite-is-spreading-around-the-world-264891

Do I have insomnia? 5 reasons why you might not

Source: The Conversation – Global Perspectives – By Amelia Scott, Honorary Affiliate and Clinical Psychologist at the Woolcock Institute of Medical Research, and Macquarie University Research Fellow, Macquarie University

Oleg Breslavtsev/Getty

Even a single night of sleep trouble can feel distressing and lonely. You toss and turn, stare at the ceiling, and wonder how you’ll cope tomorrow. No wonder many people start to worry they’ve developed insomnia.

Insomnia is one of the most talked-about sleep problems, but it’s also one of the most misunderstood.

But just because you can’t sleep doesn’t mean you have insomnia. You might have another sleep disorder, or none at all.

What is insomnia?

Let’s clear up some terms, and separate short-term or intermittent sleep problems from what health professionals call “insomnia disorder”.

Sleep problems can involve being awake when you want to be asleep. This could be lying in bed for ages trying to fall asleep, waking in the middle of the night for hours, or waking up too early. Having a sleep problem is a subjective experience – you don’t need to tally up lost hours to prove it’s a problem.

But insomnia disorder is the official term to describe a more problematic and persistent pattern of sleep difficulties. And this long-term or chronic sleep disorder has clear diagnostic criteria. These include at least three nights a week of poor sleep, lasting three months or more. These criteria help researchers and clinicians make sure they’re talking about the same thing, and not confusing it with another sleep problem.

So, what are some reasons why a sleep problem might not be insomnia?

1. It’s short term, or comes and goes

About a third of adults will have a bout of “acute insomnia” in a given year. This short-term problem is typically triggered by stress, illness or big life changes.

The good news is that about 72% of people with acute insomnia return to normal sleep after a few weeks.

Insomnia disorder is a longer-term, persistent problem.

2. It doesn’t affect you the next day

Some people lie awake at night but still function well during the day. More fragmented and less refreshing sleep is also a near-universal part of ageing.

So if your sleep problem doesn’t significantly affect you the next day, it usually isn’t considered to be insomnia.

For people with insomnia, the struggle with sleep spills into the day and affects their mood, energy, concentration and wellbeing. Worry and distress about not sleeping can then make the problem worse, which creates a frustrating cycle of worrying and not sleeping.

3. It’s more about work or caring

If you feel tired during the day, an important question is whether you’re giving yourself enough time to sleep. Sometimes sleep problems reflect a “sleep opportunity” that is too short or too irregular.

Work schedules, child care, or late-night commitments can cut sleep short, and sleep can slip down the priority list. In these cases, the problem is insufficient sleep, not insomnia.

You might have noisy neighbours or an annoying cat. These can also affect your sleep, and reduce your “sleep opportunity”.

The average healthy adult gets around seven hours' sleep a night (though this varies widely). For someone who needs seven hours, that usually means setting aside about eight, to allow for winding down, drifting off, and brief waking overnight.

4. It’s another sleep disorder

Other sleep disorders can look like insomnia, such as:

  • obstructive sleep apnoea (when your breathing stops multiple times during sleep) can cause frequent awakenings through the night and daytime sleepiness

  • restless legs syndrome creates an irresistible urge to move your legs in the evening that often interferes with falling asleep. It’s often described as jittery feelings or having “creepy crawlies”, and is often undiagnosed

  • circadian rhythm problems, such as being a natural night owl in an early-bird world, can also lead to trouble falling asleep.

5. Medications and substances are interfering

Caffeine, alcohol and nicotine can all create insomnia symptoms and worsen the quality of sleep.

Certain medications can also interfere with sleep, such as stimulants (for conditions such as attention-deficit hyperactivity disorder or ADHD) and beta-blockers (for various heart conditions).

These issues need to be considered before labelling the problem as insomnia. However, it’s important to keep taking your medication as prescribed and discuss any concerns with your doctor.

Getting the right help

If your sleep is worrying you, the best first step is to see your GP. They can help rule out other causes, review your medications, or refer you for a sleep study if needed.

However, once insomnia becomes frequent, chronic (long term) and distressing, it's common to worry too much about sleep, constantly check or track it, or try too hard to sleep – for instance, by spending too much time in bed. These psychological and behavioural responses can backfire, making good sleep even less likely.

That’s why “cognitive behavioural therapy for insomnia” (or CBT-I) is recommended as the first-line treatment.

This therapy is more effective and longer-lasting than sleeping pills. It is available via specially trained GPs and sleep psychologists, either in person or online.

In the meantime

If you’re in a rough patch of sleep:

  • remind yourself that short runs of poor sleep usually settle on their own

  • avoid lying in bed panicking if you wake at 3.30am. Instead, step out of bed or use the time in a way that feels restful

  • keep a consistent wake-up time, even after a poor night. Try to get some morning sunlight to reset your body clock

  • make sure you’re putting aside the right amount of time for sleep – not too little, not too much.

The Conversation

Amelia Scott is a member of the psychology education subcommittee of the Australasian Sleep Association. She receives funding from Macquarie University.

ref. Do I have insomnia? 5 reasons why you might not – https://theconversation.com/do-i-have-insomnia-5-reasons-why-you-might-not-262701

Koalas are running out of time. Will a $140 million national park save them?

Source: The Conversation – Global Perspectives – By Christine Hosking, Conservation Planner/Researcher, The University of Queensland

In a historic move, the New South Wales government has announced a Great Koala National Park will be established on the state’s Mid North Coast, in a bid to protect vital koala habitat and stop the species’ sharp decline.

The reserve will combine existing national parks with newly protected state forest areas, to create 476,000 hectares of protected koala habitat. Logging will be phased out in certain areas, and a transition plan enacted for affected workers and communities.

Conservationists have welcomed the move as a win for biodiversity. However, some industry groups have raised concerns about the economic impact on the region’s timber operations.

The announcement, which follows a long campaign by koala advocates, shows the NSW government recognises the importance of protecting biodiversity. But announcing the national park is just the first step in saving this iconic species.

A worrying decline

Koalas are notoriously hard to count, because they are widely distributed and difficult to spot.

In 2016, a panel of 15 koala experts estimated koala populations had declined by 24% over the previous three generations, and projected a similar decline over the next three.

Habitat loss and fragmentation is the number one threat to koalas. Others include climate change, bushfires, disease, vehicle strikes and dog attacks.

The decline gave momentum to calls by conservationists and scientists for the establishment of a Great Koala National Park, taking in important koala habitat on the NSW Mid North Coast.

In 2023, the NSW government pledged A$80 million to create the park. The announcement on Sunday increased the pledge to $140 million.

Announcing the development, NSW Premier Chris Minns said it was “unthinkable” that koalas were at risk of extinction in that state.

The government also proposed the park’s boundary and announced a temporary moratorium on timber harvesting within it – as well as a support package for logging workers, industries and communities.

However, the logging industry remains opposed to the plan.

Not the end of the story

The creation of the park is a welcome move. It will protect not just koalas but many other native species, large and small.

But on its own, it’s not enough to save the NSW koala population. Even within the national park, threats to koalas will remain.

For example, research shows climate change – with its associated heat and reduced rainfall – threatens the trees koalas use for food and shelter. Climate extremes also physically stress koalas. These and other combined stresses can make koalas more prone to disease.

Bushfires, and inappropriate fire management, can degrade koala habitat and injure or kill them outright.

The NSW government says logging must immediately cease in areas to be brought into the park’s boundary. However, logging pressures can remain, even after national parks are declared. Forestry activities must cease completely, and forever, if the park is to truly protect koalas.

What’s more, recreational activities, if allowed in the national park, may negatively impact koalas. For example, cutting tracks or building tourist facilities may fragment koala habitat and disturb shy wildlife.

These threats must be managed to ensure the Great Koala National Park achieves its aims.

Prioritising nature

Of course, the creation of a new national park does not help koalas outside the park’s boundaries. Koala populations are under threat across their range in NSW, Queensland and the ACT.

That’s why the national recovery plan for the koala should be implemented urgently and in full. It includes increasing the area of protected koala habitat, restoring degraded habitat, and actively conserving populations. It also includes ending habitat destruction by embedding koala protections in land-use planning.

As I have previously written, koala protection areas should be replicated throughout the NSW and Queensland hinterlands. My research shows the future climate will remain suitable for koalas in those areas.

And logging must be curbed elsewhere in Australia, such as in Tasmania, where it jeopardises threatened species and ancient forests.

The Great Koala National Park promises to be a sanctuary for koalas and other wildlife, and a special place for passive, nature-based recreation and tourism. Yes, the plan has detractors. But saving Australia's koalas means prioritising nature's needs over those of people.

And we must not forget: the national park is just one step on a long road to preventing koala extinctions.

The Conversation

Christine Hosking does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

ref. Koalas are running out of time. Will a $140 million national park save them? – https://theconversation.com/koalas-are-running-out-of-time-will-a-140-million-national-park-save-them-264789