Old rat nests can contain fabrics, papers, animal bones, plant remains and other materials that have been undisturbed for hundreds of years. Andyworks/E+ Collection via Getty Images
Rats and other rodents and pests can make great archivists.
That’s because they forage for food and build dens, storing fabric, paper, animal bones, plant remains and other materials under floorboards, behind walls and in attics, crawl spaces and wells. There, these materials might dry out and remain undisturbed for hundreds of years.
I studied a rat nest that was used by generations of rats over several decades and was found under the floorboards in the attic of the historic home at Bartram’s Garden in southwest Philadelphia. In 1728, Quaker farmer and naturalist John Bartram began to plant his garden, which is considered the oldest botanic garden in North America. I studied thousands of plants collected by rats and learned how the Bartram family used these plants for food, medicine, trade and study.
Rat nests are common in historic structures, particularly homes like Bartram’s that contained kitchens and buildings that were used for food storage, such as cellars.
Bartram collected plants from around eastern North America along with those sent to him by naturalists in Europe. His sons, John Jr. and William, and later his granddaughter Ann Bartram Carr, continued to expand the garden, which gained international fame during the 18th and 19th centuries.
The rat nest was discovered during historic preservation work at the Bartram home in 1977. My analysis of the materials in the nest indicates that it was formed in the late 18th and early 19th century. The materials are representative of the plants rodents would have been foraging from the Bartram home and garden.
The plants I identified weren’t restricted to those sold by the Bartram family as a part of their nursery business. Nor were they limited to plants that were traded between naturalists hoping to learn more about the flora of the American Colonies. They included crops such as wheat, buckwheat, corn, parsnips and beans grown by the family to feed themselves; herbs such as lemongrass, basil and mint used for medicine by the family; and many wild and weedy plants – for example, brambles, corn cockle, and broom and needle grasses – that were not intentionally grown by the Bartrams but were nonetheless collected by the rats on the property.
Materials from the rat nest in the process of being sorted by the author, including hickory, walnuts, acorns, corn and peanuts. Alexandria Mitchem Hansen, CC BY-NC-SA
By studying the plants foraged by these rats, I learned not only about the important scientific and commercial plants in the garden, but also about the food and medicine the family were eating and using, including imported snacks such as peanuts and Brazil nuts, which were not grown in the garden but could have been purchased in Philadelphia.
Sorting 11 pounds of material
I am an archaeobotanist, which means I recover and identify plants from the past.
Over the course of almost three years, I sorted through over 11 pounds (5 kilograms) of material from the rat nest recovered from the Bartrams’ home and stored at the Center for the Analysis of Archaeological Materials at the Penn Museum.
Because there is often a lot of material, archaeologists divide these kinds of samples using geological sieves, which are scientific screening tools that filter samples by size. This makes the material easier to sort.
Then I used a microscope to sort and identify the plants therein. Archaeobotanists find various parts of plants, including seeds, chaff, fruit pits, nutshells and cobs. The plants I identified ranged in size from whole corncobs to weed seeds smaller than half a millimeter.
To identify the species of plants, I used reference manuals, comparative collections of plant seeds and other parts, and help from the archaeobotanists at the Penn Museum. I also studied images from herbaria, which are collections of historic plants that have been preserved and archived.
In the future, I plan to focus on the weedy plants recovered from the rat nest. The majority of invasive species in the United States were originally introduced in horticultural contexts, including botanic gardens and nurseries. Data from Bartram’s Garden will help me and other scholars better understand the timing and details of this process.
Alexandria Mitchem Hansen receives funding from the McNeil Center for Early American Studies, the American Philosophical Society, the Explorer’s Club, the Society for Historical Archaeology, the Society for Ethnobiology, and Columbia University.
Meta’s decision to end its professional fact-checking program sparked a wave of criticism in the tech and media world. Critics warned that dropping expert oversight could erode trust and reliability in the digital information landscape, especially when profit-driven platforms are mostly left to police themselves.
What much of this debate has overlooked, however, is that today, AI large language models are increasingly used to write up news summaries, headlines and content that catch your attention long before traditional content moderation mechanisms can step in. The issue isn’t clear-cut cases of misinformation or harmful subject matter going unflagged in the absence of content moderation. What’s missing from the discussion is how ostensibly accurate information is selected, framed and emphasized in ways that can shape public perception.
Large language models gradually influence the way people form opinions by generating the information that chatbots and virtual assistants present to people over time. These models are now also being built into news sites, social media platforms and search services, making them the primary gateway to obtain information.
Studies show that large language models do more than simply pass along information. Their responses can subtly highlight certain viewpoints while minimizing others, often without users realizing it.
Communication bias
My colleague, computer scientist Stefan Schmid, and I, a technology law and policy scholar, show in a forthcoming paper in the journal Communications of the ACM that large language models exhibit communication bias. We found that they may have a tendency to highlight particular perspectives while omitting or diminishing others. Such bias can influence how users think or feel, regardless of whether the information presented is true or false.
Empirical research over the past few years has produced benchmark datasets that correlate model outputs with party positions before and during elections. They reveal variations in how current large language models deal with public content. Depending on the persona or context used in prompting large language models, current models subtly tilt toward particular positions – even when factual accuracy remains intact.
These shifts point to an emerging form of persona-based steerability: a model’s tendency to align its tone and emphasis with the perceived expectations of the user. For instance, when one user describes themselves as an environmental activist and another as a business owner, a model may answer the same question about a new climate law by emphasizing different, yet factually accurate, concerns for each: telling the activist that the law does not go far enough in promoting environmental benefits, while telling the business owner that it imposes regulatory burdens and compliance costs.
Such alignment can easily be misread as flattery. The phenomenon is called sycophancy: Models effectively tell users what they want to hear. But while sycophancy is a symptom of user-model interaction, communication bias runs deeper. It reflects disparities in who designs and builds these systems, what datasets they draw from and which incentives drive their refinement. When a handful of developers dominate the large language model market and their systems consistently present some viewpoints more favorably than others, small differences in model behavior can scale into significant distortions in public communication.
Bias in large language models starts with the data they’re trained on.
What regulation can and can’t do
Modern society increasingly relies on large language models as the primary interface between people and information. Governments worldwide have launched policies to address concerns over AI bias. For instance, the European Union’s AI Act and the Digital Services Act attempt to impose transparency and accountability. But neither is designed to address the nuanced issue of communication bias in AI outputs.
Proponents of AI regulation often cite neutral AI as a goal, but true neutrality is often unattainable. AI systems reflect the biases embedded in their data, training and design, and attempts to regulate such bias often end up trading one flavor of bias for another.
And communication bias is not just about accuracy – it is about content generation and framing. Imagine asking an AI system a question about a contentious piece of legislation. The model’s answer is not only shaped by facts, but also by how those facts are presented, which sources are highlighted and the tone and viewpoint it adopts.
This means that the root of the bias problem lies not merely in biased training data or skewed outputs, but in the market structures that shape technology design in the first place. When only a few large language models mediate access to information, the risk of communication bias grows. Apart from regulation, then, effective bias mitigation requires safeguarding competition, user-driven accountability and regulatory openness to different ways of building and offering large language models.
Most regulations so far aim at banning harmful outputs after the technology’s deployment, or forcing companies to run audits before launch. Our analysis shows that while prelaunch checks and post-deployment oversight may catch the most glaring errors, they may be less effective at addressing subtle communication bias that emerges through user interactions.
Beyond AI regulation
It is tempting to expect that regulation can eliminate all biases in AI systems. Such policies can be helpful in some instances, but they fail to address a deeper issue: the incentives that shape the technologies that communicate information to the public.
Our findings clarify that a more lasting solution lies in fostering competition, transparency and meaningful user participation, enabling consumers to play an active role in how companies design, test and deploy large language models.
The reason these policies are important is that, ultimately, AI will not only influence the information we seek and the daily news we read, but it will also play a crucial part in shaping the kind of society we envision for the future.
Adrian Kuenzler does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.
It was like a horror movie. The invisible polio virus would strike, leaving young children on crutches, in wheelchairs or in a dreaded “iron lung” ventilator. Each summer, the fear was so great that public pools and movie theaters closed. Parents canceled birthday parties, afraid their child might be the next victim. A U.S. president paralyzed by polio called for Americans to send dimes to the White House to support the nonprofit National Foundation for Infantile Paralysis, established by President Franklin D. Roosevelt and his lawyer, Basil O’Connor. Celebrities from Lucille Ball to Elvis were enlisted to promote this “March of Dimes,” and mothers went door to door raising funds to conquer this dreaded disease.
Some of those funds went to 33-year-old scientist Jonas Salk and his team at the University of Pittsburgh, where they worked in a lab between a morgue and a darkroom to develop the world’s first successful polio vaccine.
When asked whether he was going to patent the vaccine, Salk told journalist Edward R. Murrow it belonged to the people and would be like “patenting the sun.”
I first learned about this 20 years ago when my students and I filmed the 50th anniversary celebration of the Salk polio vaccine at the University of Pittsburgh. I had just started teaching after working in Los Angeles as a screenwriter and TV producer, and the footage became “The Shot Felt Round the World,” a documentary that featured those we met that day.
A nurse prepares children for a polio vaccine shot in February 1954 as part of a citywide testing of the vaccine on Pittsburgh elementary school students. Bettmann/Bettmann Collection via Getty Images
The ‘Pittsburgh polio pioneers’
Among the people we interviewed was Ethyl “Mickey” Bailey, who worked in the lab pipetting the deadly polio virus by mouth, and Julius Youngner, the lab’s senior scientist who had worked on the Manhattan Project before coming to Pittsburgh. Within a decade, Youngner had worked on both the atomic bomb – which killed tens of thousands of people in just seconds in Hiroshima and Nagasaki, and hundreds of thousands more in the aftermath of the bombings – as well as the Salk vaccine, which spared millions from the scourge of “The Great Crippler.”
Three floors above the lab, Dr. Sidney Busis performed tracheotomies on 2-year-old iron lung patients, opening their windpipes so the ventilator could help them breathe. The fierce Dr. Jesse Wright, an innovator in the field of rehabilitation sciences, ran the polio ward and was also the medical director of the D.T. Watson Home for Crippled Children, where the Salk vaccine was first tested on humans. Polio victims like Jimmy Sarkett and Ron Flynn volunteered themselves as guinea pigs for a vaccine they knew would never benefit them.
Many of the “Pittsburgh polio pioneers,” as the local children who were given Salk’s still-experimental vaccine were called, recalled in our documentary getting the shot from Dr. Salk himself. Salk also gave it to his own children, including his eldest son Peter, then 10 years old, who later worked with his father on trying to develop an AIDS vaccine.
Kathy Dressel, a 3-year-old poster girl for the March of Dimes in Pennsylvania, smiles as she is greeted by Basil O’Connor, president of the National Foundation for Infantile Paralysis, in 1954. Bettmann/Bettmann Collection via Getty Images
Near the end of his life, Salk said that he sometimes ran into people who didn’t know what polio was, and he found that gratifying. But today the world is paying a high price as people who don’t remember what life was like before the vaccine now question the value of vaccines. The polio virus may not be visible, but it is still with us.
The final mile to eradication
On Oct. 24, 2025, as the Salk vaccine turned 70, I was invited to screen the trailer for “The Shot Felt Round the World” at a World Polio Day event on Roosevelt Island in New York City, in a building next to the ruins of the Smallpox Hospital – a legacy of the only human disease ever eradicated.
Those present included the executive director of UNICEF, the polio director from the Gates Foundation, the U.N. representative for Rotary International, and government officials from around the world who spoke about the global coalition dedicated to eradicating this disease. Since the 1980s, the Global Polio Eradication Initiative has put tremendous resources into taking polio from being endemic in 125 countries to now just in two: Pakistan and Afghanistan. This group, whom I like to call “The Avengers of Public Health,” continues to work relentlessly to make the world polio-free.
An Afghan health worker administers polio vaccine to a child in Kabul in 2010. Afghanistan and Pakistan are the only two countries where polio has not yet been eradicated. Shah Marai/AFP via Getty Images
My greatest fear is that when polio is finally defeated, the world won’t recognize what an extraordinary achievement it is. In our film, Dr. Jonathan Salk, Jonas Salk’s youngest son, recalls his father wondering whether the model that developed the polio vaccine could be used to conquer poverty and other social problems.
Many of the polio survivors we spoke to at the 50th anniversary are no longer with us. To ensure future generations know this story, perhaps now is the time to revive “March of Dimes”-style marketing techniques to engage young people from around the world to help finish the job that began in the Salk lab in Pittsburgh.
One polio survivor who is still alive is “The Godfather” director Francis Ford Coppola, who has spoken about contracting polio as a child. Imagine him being interviewed by his granddaughter Romy Mars, a TikTok influencer, and his daughter Sofia Coppola, the film director and actress. They could make a video that features cameos from actor and comedian Bill Murray, who played Franklin D. Roosevelt in a movie and whose sister had polio, U.S. Senator Mitch McConnell, who is a polio survivor, and Secretary of State Marco Rubio, whose grandfather was crippled from polio. For such a cruel disease, polio has a strange way of bringing us together.
I pray when we finally wipe polio off the planet, a feat the Global Polio Eradication Initiative targets for 2029, the whole world will celebrate and realize the power of pulling together to defeat a common enemy.
Carl Kurlander previously received grants from the Grable Foundation, the Pittsburgh Foundation and the R.K. Mellon Foundation for the making of the polio movie. He receives no residuals or revenues from the film.
Dec. 15, 2025 – the deadline for enrolling in a marketplace plan through the Affordable Care Act for 2026 – came and went without an agreement on the federal subsidies that kept ACA plans more affordable for many Americans. Despite a last-ditch attempt in the House to extend ACA subsidies, with Congress adjourning for the year on Dec. 19, it’s looking almost certain that Americans relying on ACA subsidies will face a steep increase in health care costs in 2026.
As a gerontologist who studies the U.S. health care system, I’m aware that disagreements about health care in America have a long history. The main bone of contention is whether providing health care is the responsibility of the government, or of individuals or their employers.
The ACA, passed in 2010 as the country’s first major piece of health legislation since the passage of Medicare and Medicaid in 1965, represents one more chapter in that long-standing debate. That debate explains why the health law has fueled so much political divisiveness – including a standoff that spurred a record-breaking 43-day-long government shutdown, which began on Oct. 1, 2025.
In my view, regardless of how Congress resolves, or doesn’t resolve, the current dispute over ACA subsidies, a durable U.S. health care policy will remain out of reach until lawmakers address the core question of who should shoulder the cost of health care.
The ACA’s roots
In the years before the ACA’s passage, some 49 million Americans – 15% of the population – lacked health insurance. This number had been rising in the wake of the 2008 recession. That’s because the majority of Americans ages 18 to 64 with health insurance receive their health benefits through their employer. In the 2008 downturn, people who lost their jobs basically lost their health care coverage.
The ACA sought to expand coverage in several ways, but two strategies in particular had the biggest impact on the number of uninsured. One was expanding the Medicaid program to include workers whose income was below 138% of the poverty line. The other was providing subsidies to people with low and moderate incomes that could help them buy health insurance through the ACA marketplace, a state or federal health exchange through which consumers could choose health insurance plans.
Meanwhile, the marketplace subsidies, which were designed to help people who were working but could not access an employer-based health plan, were not especially contentious early on. Everyone receiving a subsidy was required to contribute to their insurance plan’s monthly premium. People earning US$18,000 or less annually – about 115% of the federal poverty level in 2010 – contributed 2.1% of their plan’s cost, and those earning $60,240, which was 400% of the federal poverty level, contributed 10%. People making more than that were not eligible for subsidies at all.
In 2021, legislation passed by Congress and signed by President Joe Biden to stave off the economic impact of the COVID-19 pandemic increased the subsidies that people could receive. The law eliminated premiums entirely for the lowest-income people and reduced the cost for those earning more. And, unlike before, people making more than 400% of the federal poverty level – about 10% of marketplace enrollees – could also get a subsidy.
These pandemic-era subsidies are set to expire at the end of 2025.
Cost versus coverage
If the COVID-19-era subsidies expire, health care costs would increase substantially for most consumers as ACA subsidies return to their original levels. Someone making $45,000 annually, for example, would need to pay $360 a month for health insurance, increasing their payment by 74%, or $153 monthly. What’s more, these changes come on top of price hikes to the insurance plans themselves, which are estimated to increase by about 18% in 2026.
With these two factors combined, many ACA marketplace users could see their health insurance cost rise more than 100%. Some proponents of extending COVID-19-era subsidies contend that the rollback will result in an estimated 6 million to 7 million people leaving the ACA marketplace and that some 5 million of these Americans could become uninsured in 2026.
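The arithmetic behind these figures can be checked directly. The sketch below is an illustration using only the numbers quoted above (the $45,000 earner’s $360 premium, the $153 monthly increase and the estimated 18% plan price hike); the combined figure is a rough approximation, not an official projection.

```python
# Illustrative check of the subsidy-rollback arithmetic quoted above.
# All inputs come from the article; the combination is an approximation.

new_payment = 360.0                   # monthly payment after subsidies revert
increase = 153.0                      # quoted monthly increase for a $45,000 earner
old_payment = new_payment - increase  # implied current payment: $207

subsidy_pct = increase / old_payment * 100  # jump from the rollback alone
premium_hike = 1.18                         # ~18% rise in plan prices for 2026
combined = (new_payment * premium_hike - old_payment) / old_payment * 100

print(f"Rollback alone: +{subsidy_pct:.0f}%")           # ~74%
print(f"Rollback plus premium hike: +{combined:.0f}%")  # just over 100%
```

On these inputs the rollback alone raises the monthly payment by about 74%, matching the article, and layering the 18% plan price hike on top pushes the total increase past 100%.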
Congressional gridlock over a health care bill continues.
Policies in the tax and spending package signed into law by President Donald Trump in July 2025 are amplifying the challenge of keeping Americans insured. The Congressional Budget Office projects that the Medicaid cuts alone, stipulated in the package, may result in more than 7 million people becoming uninsured. Combined with other policy changes outlined in the law and the rollback of the ACA subsidies, that number could hit 16 million by 2034 – essentially wiping out the majority of gains in health insurance coverage that the ACA achieved since 2010.
Subsidy downsides
These enhanced ACA subsidies are so divisive now in part because they have dramatically driven up the federal government’s health care bill. Between 2021 and 2024, the number of people receiving subsidies doubled – resulting in many more people having health insurance, but also increasing federal ACA expenditures.
Those who oppose the extension counter that the subsidies cost the government too much and fund high earners who don’t need government support – and that temporary emergencies, even ones as serious as a pandemic, should not result in permanent changes.
In 2010, 92% of employers with 25 to 49 workers offered health insurance, but by 2025, that proportion had dropped to 64%, suggesting that companies of this size are allowing the ACA to cover their employees.
Federal policy clearly shapes health insurance coverage, but state-level policies play a role too. Nationally, about 8% of people under age 65 were uninsured in 2023, yet that rate varied widely – from 3% in Massachusetts to 18.6% in Texas. States under Republican leadership on average have a higher percentage of uninsured people than do those under Democratic leadership, mirroring the political differences driving the national debate over who is responsible for shouldering the costs of health care.
With dueling ideologies come dueling solutions. For those who believe that the government is responsible for the health of its citizens, expanding health insurance coverage and financing this expansion through taxes presents a clear approach. For those who say the burden should fall on individuals, reliance on the free market drives the fix – on the premise that competition between health insurers and providers offers a more effective way to solve the cost challenges than a government intervention.
Without finding resolution on this core issue, the U.S. will likely still be embroiled in this same debate for years, if not decades, to come.
Robert Applebaum does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.
Source: The Conversation – USA – By Mary J. Scourboutakos, Adjunct Assistant Professor in Family and Community Medicine, Macon & Joan Brock Virginia Health Sciences at Old Dominion University
Since researchers first established the link between diet, cholesterol and heart disease in the 1950s, risk for heart disease has been partly assessed based on a patient’s cholesterol levels, which can be routinely measured via blood work at the doctor’s office.
Cholesterol, however, does not capture all of that risk. In September 2025, the American College of Cardiology published new recommendations for universal screening of C-reactive protein levels in all patients, alongside measuring cholesterol levels.
What is C-reactive protein?
C-reactive protein is created by the liver in response to infections, tissue damage, chronic inflammatory states from conditions like autoimmune diseases, and metabolic disturbances like obesity and diabetes. Essentially, it is a marker of inflammation – meaning immune system activation – in the body.
C-reactive protein can be easily measured with blood work at the doctor’s office. A low C-reactive protein level – under 1 milligram per liter – signifies minimal inflammation in the body, which is protective against heart disease. An elevated C-reactive protein level of greater than 3 milligrams per liter signifies increased levels of inflammation and thus increased risk for heart disease. About 52% of Americans have an elevated level of C-reactive protein in their blood.
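As a sketch of how these cutoffs are applied, the hypothetical helper below maps a high-sensitivity C-reactive protein reading, conventionally reported in milligrams per liter, to risk bands. The “intermediate” label for readings between 1 and 3 mg/L is an assumption following common clinical convention; the text above names only the low and elevated bands, and this function is an illustration, not a clinical tool.

```python
def crp_risk_band(crp_mg_per_l: float) -> str:
    """Map a high-sensitivity C-reactive protein reading (mg/L) to a risk band.

    Cutoffs follow the thresholds described above: under 1 mg/L is low,
    over 3 mg/L is elevated. The 1-3 mg/L "intermediate" band is an
    assumption based on common clinical convention.
    """
    if crp_mg_per_l < 1.0:
        return "low"        # minimal inflammation, protective
    if crp_mg_per_l > 3.0:
        return "elevated"   # increased inflammation, higher heart disease risk
    return "intermediate"

print(crp_risk_band(0.5))   # low
print(crp_risk_band(4.2))   # elevated
```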
Inflammation plays a crucial role at every stage in the development and buildup of fatty plaque in the arteries, which causes a condition called atherosclerosis that can lead to heart attacks and strokes.
From the moment a blood vessel is damaged, be it from high blood sugar or cigarette smoke, immune cells immediately infiltrate the area. Those immune cells subsequently engulf cholesterol particles that are typically floating around in the blood stream to form a fatty plaque that resides in the wall of the vessel.
This process continues for decades until eventually, one day, immune mediators rupture the cap that encloses the plaque. This triggers the formation of a blood clot that obstructs blood flow, starves the surrounding tissues of oxygen and ultimately causes a heart attack or stroke.
Hence, cholesterol is only part of the story; it is, in fact, the immune system that facilitates each step in the processes that drive heart disease.
Fatty plaque buildup in the arteries causes a blockage that starves tissues of oxygen and can lead to a heart attack or stroke. wildpixel/iStock via Getty Images Plus
Does cholesterol still matter for heart disease risk?
Though cholesterol may not be the most important predictor of risk for heart disease, it does remain highly relevant.
However, it’s not just the amount of cholesterol – or more specifically, the amount of bad, or LDL, cholesterol – that matters. Two people with the same cholesterol level don’t necessarily have the same risk for heart disease. This is because risk is determined more by the number of particles that the bad cholesterol is packaged into – a count reflected in apolipoprotein B levels – than by the total mass of bad cholesterol that’s floating around. More particles means higher risk.
Furthermore, lipoprotein(a), a protein that lives in the wall surrounding cholesterol particles, is another marker that can predict heart disease more accurately than cholesterol levels. This is because the presence of lipoprotein(a) makes cholesterol particles sticky, so to speak, and thus more likely to get trapped in an atherosclerotic plaque.
However, unlike other risk factors, lipoprotein(a) levels are purely genetic, thus not influenced by lifestyle, and need only be measured once in a lifetime.
What’s the best way to prevent heart disease?
Ultimately, heart disease is the product of many risk factors and their interactions over a lifetime.
Knowing your LDL cholesterol level alongside your C-reactive protein, apolipoprotein B and lipoprotein (a) levels paints a comprehensive picture of risk that can hopefully help motivate long-term commitment to the fundamentals of heart disease prevention. These include eating well, exercising consistently, getting adequate sleep, managing stress productively, maintaining healthy weight and, if applicable, quitting smoking.
Mary J. Scourboutakos does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.
Have you ever wondered what keeps you warm in your winter jacket? Most jacket insulation is made from human-made synthetic fibres such as polyester, or from natural down from ducks or geese. But some winter jackets are insulated with something a little more surprising: bulrushes.
A biomaterials company called Ponda is using the seed heads of bulrush cultivated in peatlands to create BioPuff as insulation for puffer jackets, an alternative to synthetic fibres and goose down. These jackets help to encourage wetter farming on peatlands, a practice known as paludiculture that helps keep carbon locked into the ground.
Paludiculture is a relatively new way of farming in the UK, and my research investigates how this emerging practice is being implemented in north-west England.
It is crucial that peatlands remain wet or are rewetted to prevent the release of stored carbon. Once drained, peatlands emit a significant amount of carbon – degraded peatlands account for 4% of the UK’s total greenhouse gas emissions. Most (88%) of these emissions come from degraded lowland peatlands, which account for only 16% of the UK’s total peatland land area.
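The disproportion in those figures is worth making explicit. Using only the shares quoted above (88% of degraded-peatland emissions from lowland sites covering 16% of peatland area), a rough per-unit-area comparison can be sketched; this is an illustration of the quoted percentages, not a measured emissions factor.

```python
# Rough per-area comparison of peatland emissions, from the shares quoted above.
lowland_emissions_share = 0.88  # share of degraded-peatland emissions
lowland_area_share = 0.16       # share of UK peatland area

# Emissions per unit area, relative to an even spread across all peatland.
lowland_intensity = lowland_emissions_share / lowland_area_share
other_intensity = (1 - lowland_emissions_share) / (1 - lowland_area_share)

ratio = lowland_intensity / other_intensity
print(f"Lowland peat emits roughly {ratio:.1f}x more per unit area")
```

On these figures, each unit of lowland peatland emits nearly 40 times more than a unit of the remaining peatland area, which helps explain why lowland sites dominate the policy conversation.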
While the complete restoration of lowland peatland habitats is necessary, in many cases landowners and managers may not be willing to fully stop cultivating or grazing on parts of their agricultural peatland. Paludiculture has been proposed by UK policymakers and researchers as an innovative farming practice. In this scenario, peat soils remain wet to reduce peatlands’ carbon emissions. Simultaneously, landowners and managers can theoretically make an income from cultivating paludiculture crops.
The UK Paludiculture Live list consists of 88 native species that could be used for farming via paludiculture. This list is divided into categories including food crops (such as cranberry and celery), growing media (Sphagnum moss), fabrics (bulrush) and construction materials (such as common reed and freshwater bulrush).
Crop trials
Over the past five years there has been a growing network of researchers, landowners, land managers, conservationists, businesses and government advisors innovating and implementing paludiculture trials in north-west England. Celery, lettuce, blueberries, bulrush and Sphagnum moss are some of the first paludiculture crops that have been grown in this region.
One of the trials, delivered in partnership with the Lancashire Wildlife Trust, a tenant farmer, the landowner and Ponda, shows how paludiculture offers an opportunity for both the farming community and the sustainable fashion industry.
This trial was established with the aim to grow bulrush on five hectares (12 acres) of previously drained lowland peat soils.
After raising the water table level to between 30cm below ground level and the peat surface, the bulrush seeds were sown in June 2024 using a drone. More than a year later, the bulrush was successfully harvested in August 2025 using a specialised digger equipped with a reed-cutting bucket.
Bulrush seeds being sown by a drone at one of Lancashire Wildlife Trust’s paludiculture trial sites.
This trial was successful due to collaboration between the organisations and people in the partnership who shared paludiculture knowledge that specifically related to this region and farming practices on lowland peatlands elsewhere in the UK.
Additionally, it is crucial that paludiculture crops are supported by a concrete business case and market route so that landowners and land managers do not have to rely on variable government funding.
Uncharted waters
While paludiculture has progressed in the UK over the past five years, there are still challenges in upscaling this farming practice.
In terms of food crops, supermarkets may not accept paludiculture-grown celery or lettuce if it does not meet retailer requirements. The entire paludiculture supply chain faces barriers, from cultivation to commercialisation.
These include challenges such as managing water table levels; robust storage, handling and processing infrastructure; market regulations; and the market visibility of paludiculture products. These hurdles can make it difficult to scale trials up to larger farm and landscape levels.
Because much of the UK’s peatland is owned by private landowners and often managed by tenant farmers, paludiculture must develop as a financially stable farming practice to ensure there is buy-in from everyone involved.
However, transitioning from conventional drainage practices to wetter farming is not just a financial matter. Landowners, farmers and peatland practitioners must acquire new peatland rewetting knowledge and be willing to grow crops on wet soils. The paludiculture trial in the north-west demonstrates how these partnerships can form and help pave the way for wetter peatland farming systems.
The next time you pass a wetland area, see if you can spot a bulrush. These boggy plants can help tackle climate change by storing carbon and could even be transformed into your next puffer jacket.
Source: The Conversation – Global Perspectives – By Vitomir Kovanovic, Associate Professor and Associate Director of the Centre for Change and Complexity in Learning (C3L), Education Futures, University of South Australia
Over the past three years, generative artificial intelligence (AI) has had a profound impact on society. AI’s impact on human writing, in particular, has been enormous.
The large language models that power AI tools such as ChatGPT are trained on a wide variety of textual data, and they can now produce complex and high-quality texts of their own.
Most importantly, the widespread use of AI tools has resulted in hyperproduction of so-called “AI slop”: low-quality AI-generated outputs produced with minimal or even no human effort.
Much has been said about what AI writing means for education, work, and culture. But what about science? Does AI improve academic writing, or does it merely produce “scientific AI slop”?
According to a new study by researchers from UC Berkeley and Cornell University, published in Science, the slop is winning.
Generative AI boosts academic productivity
The researchers analysed abstracts from more than a million preprint articles (publicly available articles yet to undergo peer review) released between 2018 and 2024.
They examined whether use of AI is linked to higher academic productivity, manuscript quality and use of more diverse literature.
The number of preprints an author produced was a measure of their productivity, while eventual publication in a journal was a measure of an article’s quality.
The study found that when an author started using AI, the number of preprints they produced increased dramatically. Depending on the preprint platform, the overall number of articles an author published per month after adopting AI increased between 36.2% and 59.8%.
The increase was biggest among non-native English speakers, and especially for Asian authors, where it ranged from 43% to 89.3%. For authors from English-speaking institutions and with “Caucasian” names, the increase was more modest, in the range of 23.7% to 46.2%.
These results suggest AI was often used by non-native speakers to improve their written English.
What about the article quality?
The study found articles written with AI used more complex language on average than those written without AI.
However, among articles written without AI, ones that used more complex language were more likely to be published.
This suggests that more complex and high-quality writing is perceived as having greater scientific merit.
However, when it comes to articles written with AI support, this relationship was reversed – the more complex the language, the less likely the article was to be published. This suggests that AI-generated complex language was used to hide the low quality of the scholarly work.
AI increased the variety of academic sources
The study also looked at the differences in article downloads originating from Google and Microsoft search platforms.
Microsoft’s Bing search engine introduced an AI-powered Bing Chat feature in February 2023. This allowed the researchers to compare what kind of articles were recommended by AI-enhanced search versus a regular search engine.
Interestingly, Bing users were exposed to a greater variety of sources than Google users, and also to more recent publications. This is likely caused by a technique used by Bing Chat called retrieval-augmented generation, which combines search results with AI prompting.
In any case, fears that AI search would be “stuck” recommending old, widely used sources were not justified.
Moving forward
AI has had a significant impact on scientific writing and academic publishing. It has become an integral part of academic writing for many scientists, especially non-native speakers, and it is here to stay.
As AI becomes embedded in applications such as word processors, email apps and spreadsheets, it will soon be impossible to avoid using it, whether we like it or not.
Most importantly for science, AI is challenging the use of complex, high-quality language as an indicator of scholarly merit. Quick screening and evaluation of articles based on language quality is increasingly unreliable, and better methods are urgently needed.
As complex language is increasingly used to cover up weak scholarly contributions, critical and in-depth evaluations of study methodologies and contributions during peer review are essential.
One approach is to “fight fire with fire” and use AI review tools, such as the one recently published by Andrew Ng at Stanford. Given the ever-growing number of manuscript submissions and already high workload of academic journal editors, such approaches might be the only viable option.
Vitomir Kovanovic does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.
Nigerian rapper, actor and social media star Falz released his sixth studio album, The Feast, in 2025.
Few Nigerian popular musicians have shown as much versatility and staying power as the man behind the #ElloBae and #WehDoneSir social media trends. For over a decade now, Falz has been marrying musical skills and social activism with digital savvy and comedy.
His rise to global prominence was solidified with his 2018 song This is Nigeria. But it began in 2014 with Marry Me off his debut album Wazup Guy.
As a young artist known for his video skits, he created an online challenge ahead of releasing the song Ello Bae (Hello Babe). In it he tries to romance a woman who appreciates him and his ambition, but is looking for a man with money. It remains a common hashtag when TikTokers post about love and money.
In 2017 he released Wehdone Sir (Well Done, Sir), a witty takedown of people with fake glamour lifestyles. #WehDoneSir is still used on social media to satirise pretentious individuals.
Falz would become known for his unique blend of hip-hop and Afropop, but what really made him stand out was his skill at infusing humour into his socially conscious, often revolutionary, songs.
It’s often argued that Falz is a natural heir to Fela Anikulapo-Kuti, the Nigerian music legend and activist who helped create the Afrobeat movement (a precursor to today’s Afrobeats).
Like Fela, Falz packs his music with playfulness and satire while also stirring public consciousness with activist lyrics. Both call for action against the oppressive political class. In 2020, when young Nigerians took to the streets to demand an end to police corruption, Fela and Falz were both part of the inventory of #EndSARSprotest songs.
As a scholar of Nigerian hip-hop, I have published papers on Fela and Falz and how they have shaped protest music that responds to social challenges in Nigeria.
So, who is Falz, and how has he spread his message – and come to be the political voice of his generation, as Fela was to his?
Who is Falz?
Falz (real name Folarin Falana) was born in 1990 in Mushin, Lagos. He is the son of a respected human rights lawyer and activist father, Femi Falana, and lawyer mother, Funmi Falana. In fact, his father was Fela’s lawyer, defending him against charges brought by the state.
Falz also qualified as a lawyer, but chose instead to pursue his interests in music and acting. These multiple skills feed into his productions on diverse levels. Beyond his songs, he is also very active on Instagram and TikTok, where he establishes trends, especially around his songs and films.
His character in Ello Bae, for instance, struggles with English, using big formal words in unexpected ways, finding comedy in his faux Yoruba inflections. It would be a trademark of the #ElloBaeChallenge and would enjoy renewed public attention when Falz was cast in the TV series Jenifa’s Diary playing a similar character.
In 2016, Falz won best new international act at the BET Awards in the US. Numerous other awards would follow. His albums have received commercial and critical success. His roles in movies have further solidified his status as a multitalented entertainer.
Activism
Falz does not shy away from living the talk. He took part in the 2020 #EndSARS protests and his work repeatedly tries to steer the government towards addressing socio-economic challenges.
Soon after the protests, he released Moral Instruction. On the album, the track Johnny depicts the everyday experiences of Nigerians. This is Nigeria, a localised version of US rapper Childish Gambino’s This is America, depicts Nigeria as a country struggling with corruption, lawlessness and social injustice – a stark contrast to its potential. The video reflects a breakdown in law and order, corrupt officials, and the struggles of young people facing limited opportunities and resorting to crime.
Falz has used his platform as a celebrity and his background as a lawyer to call for social justice and for young people to make a difference.
Fela and Falz
There have been a number of pretenders to Fela’s throne of musical consciousness. Many have either not lived up to the hype or have fizzled out.
However, many popular Nigerian artists leverage Fela’s ethos by sampling his beats and lyrics. This is evident in Falz’s body of work too.
My study on the lyrical and thematic connections between Fela and Falz songs compares a number of tracks. Fela’s No Agreement and Falz’s Talk, for example, both draw attention to social inequality and systemic challenges in Nigeria.
Fela’s song was produced in the context of a military regime while Falz’s was within a democratic dispensation. But both speak of a crisis of leadership in Nigeria, as is the case in many postcolonial societies. What particularly links Fela and Falz is that both are unrelenting in their revolutionary struggles and determination to ensure an equitable Nigerian society.
Religious leaders are not spared criticism. Echoing Fela’s Coffin for Head of State (1980), Falz’s Amen (2019) points to the deceptive practices and complicity of religious leaders in poor political leadership and endemic poverty. Both critique the double standards that have become normal in the country.
Falz’s Follow Follow (2019) addresses current realities in Nigerian society – a lack of personal conviction and independent thought and the mindless following of social media trends. Integrating lyrics from Fela’s Zombie (1976), the song is about asserting one’s identity. It also rehashes Fela’s Follow Follow, mocking those who allow themselves to be led blindly by others.
To make sure his advocacy resonates, Falz co-opts his listeners through a call-and-response strategy. A phrase is sung and the next phrase answers it. This way, along with catchy lyrics, the audience become active participants.
This also echoes the traditional Yoruba chant-and-refrain rendition used by musicians, poets and bards to engage their audience. This nod to the indigenous is also at the heart of his faux Yoruba accent, a style that downplays his prestigious upbringing and connects him to ordinary people, much as Pidgin did for Fela.
But echoes of Fela don’t in any way take away from the creative force of Falz’s work. Rather they reinforce his critique of how the postcolonial Nigerian state has failed to live up to its promise.
Into the future
While Fela was unrepentantly anticolonial, Falz is sublimely hybridised. His mixture of talents and views creates a pulsating pan-African consciousness that’s able to exist in a global contemporary world view.
His lyrics and videography are aimed at the masses – especially young people – who have the most to gain from positive social change. In this way Falz can be said to represent a generational conscience. He uses his empowering songs to motivate his fans to take their destinies in their own hands.
Paul Onanuga does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.
Source: The Conversation – UK – By Jonathan Este, Senior International Affairs Editor, Associate Editor, The Conversation
This newsletter was first published in The Conversation UK’s World Affairs Briefing email. Sign up to receive weekly analysis of the latest developments in international relations, direct to your inbox.
Volodymyr Zelensky says there will almost certainly be no ceasefire in Ukraine before Christmas. This means the war is more than likely to stretch on into a fifth year, to the dismay of everyone – barring, perhaps, Vladimir Putin, for whom the war seems to be a means to a number of different ends.
Whatever the Russian president wants to gain immediately – prestige, territory, a pliant government in Kyiv, access to eastern Ukraine’s considerable resources – the war also appears to be fulfilling a number of Putin’s long-term foreign policy aims: it is driving a wedge between the US and Europe and exposing big divisions within Europe itself.
At present it looks as if we’re witnessing another of the diplomatic loops that have characterised much of the year as Donald Trump has tried to make good on his pledge to end the war. The latest deal is still being thrashed out between negotiators from the US, Ukraine and its European allies. But it’s far from clear that whatever the joint talks produce will receive buy-in from the US president, whose position – as we have seen all year – can change overnight depending on whom he talks to.
What’s more clear is that Putin will almost certainly reject the plan outright. How this will play in the White House is anyone’s guess. While the US president has shown that he is susceptible to the Russian leader’s blandishments, he has also displayed a short fuse when he thinks Putin isn’t taking him seriously enough.
Looking back on the year, it’s clear that – in the sphere of international relations – pretty much all roads lead back to Donald Trump. Most of the big international stories we’ve covered have featured the US president as a key player. So it makes sense to begin a review of the past year in international affairs with the return of Trump to the White House.
Donald Trump: a politician of consequence
After Trump was elected for a second term in November 2024, James Cooper of York St John University referred to the president as an “international disruptor”. Cooper predicted that Trump’s unconventional style might yield results via the “madman theory”, which holds that his unpredictability could prove to be an effective foreign policy approach. Quite how effective remains to be seen.
Cooper also predicted that Ukraine and America’s Nato allies might find Trump’s foreign policy outlook a major concern. And so it has proved. The US has halted military aid to Ukraine, leaving Kyiv scrambling to secure reliable support from its European allies which – as we’ve seen – are struggling to secure the funds. And America’s Nato allies in Europe learned last month, when the US released its 2025 national security strategy, that they can no longer rely on the US for security in the way that they have in the eight decades since the end of the second world war.
The strategy makes for sobering reading if you live in Europe, writes Andrew Gawthorpe, a lecturer in history and international studies at Leiden University. The 33-page public document is harshly critical of what it sees as Europe’s weakness, saying the continent risks “civilizational erasure” thanks to migration.
Gawthorpe notes that Russia has welcomed the strategy as “largely consistent” and predicts that America’s allies in Asia and Europe may have to face the prospect that Trump may prefer to align the US in a “grand bargain” with Russia and China.
Nearly ten months on, it’s hard to forget the now-notorious White House meeting at which Trump and his vice-president, J.D. Vance, lambasted Zelensky for not being grateful for the help the US had given Ukraine. All diplomatic niceties abandoned, the Americans rounded on the Ukrainian president, accusing him of “gambling with world war three” and demanding: “You either make a deal or we are out.”
The state of the conflict in Ukraine, December 16, 2025. Institute for the Study of War
Stefan Wolff and Tetyana Malyarenko reported at the time that the real issue was Trump’s desire for US firms to exploit Ukraine’s considerable mineral reserves (many of Trump’s peace deals are also business deals, as we noted in a separate article last month).
Wolff, of the University of Birmingham, and Malyarenko, of the University of Odesa, have been contributing to our coverage of the conflict between Russia and Ukraine and its geopolitical implications for more than a decade. For them, the US national security strategy was confirmation of something they have suspected for a while: that Europe will be left struggling to keep Ukraine in the fight as the continent re-arms itself in the face of the very real prospect that Putin doesn’t want to stop at Ukraine.
Our coverage of the Ukraine conflict has also been informed by Frank Ledwidge, formerly of UK military intelligence, now an expert in military strategy at the University of Portsmouth. Ledwidge is a regular visitor to Ukraine and in August contributed this vivid piece of reportage from Kharkiv, Ukraine’s “unbreakable” eastern capital.
This was the year that many western countries came off the sidelines and formally declared their recognition of Palestinian statehood. These declarations, by the UK, France, Australia and Canada, were largely symbolic. As things stand the prospect of a two-state solution remains as remote as ever. The (very tenuous) ceasefire in Gaza has not progressed further than a cessation of the wholesale killing of Palestinian civilians in the enclave.
And as Leonie Fleischmann, an expert in the Israeli-Palestinian conflict at City St George’s, University of London, reports, illegal Israeli settlements have multiplied to such an extent that they threaten to cut the West Bank in two, which – as Israel’s far-right finance minister Bezalel Smotrich noted – “buries the idea of a Palestinian state”.
In Gaza meanwhile, and despite the ceasefire, the violence continues – albeit on a smaller scale, at present. Within days of the ceasefire being signed, and notwithstanding a stipulation that Hamas must disarm and disband, the militant Palestinian group was already regrouping.
Tahani Mustafa, formerly a Palestine analyst for the International Crisis Group and now a lecturer in international relations at King’s College London, used her considerable range of contacts on the ground in Gaza to bring us this report.
What 2026 may hold for the people of Gaza remains uncertain. There’s been little or no progress on establishing a framework for governance in the enclave and at present Israel’s strategy seems to be to encourage as many Gaza residents as possible to leave via the Rafah crossing into Egypt.
Whether we will see the beginnings of the realisation of the Trump blueprint for the redevelopment of much of Gaza into commercial and tourism property, sometimes called the “Trump riviera”, may become clearer next year.
What is clear, though, is that whatever Israel and its allies plan to do in Gaza, it will be critical to secure the support and cooperation of the Gulf states, without which any plan for the future of the region will be a non-starter.
Scott Lucas, a Middle East expert at the Clinton Institute, University College Dublin, has been contributing to our coverage of the region for more than a decade. As the Gaza ceasefire was announced in October, he answered our questions and underlined the vital role played by other powers in the Middle East.
The bitter conflict in Sudan has often been eclipsed this year by the wars in Ukraine and Gaza, and it’s significant that it is not among the wars the US president claims to have solved in his eleven months in power. But the regular reports of wholesale slaughter of civilians, mass rape and other war crimes have been no less terrible for that.
The conflict is often reported as an ethnic clash: Arab militias from the country’s northern provinces fighting against African groups from Sudan’s west and south. But Justin Willis of Durham University and Willow Berridge of Newcastle University – both experts in the history of the Sudan conflict – believe it’s more complicated than that and has much to do with international meddling.
But when you strip away the geopolitics, as ever, it is innocent civilians who are left to bear the lion’s share of the suffering, as is clear from this harrowing report based on interviews with refugees flooding south to escape the violence.
We’re going to take a two week break over the holiday season. The next world affairs update will be on January 8 2026. Many thanks for your support over the year.
Sign up to receive our weekly World Affairs Briefing newsletter from The Conversation UK. Every Thursday we’ll bring you expert analysis of the big stories in international relations.
In recent years, members of the Canadian public have witnessed the misrepresentation of Indigenous identities.
Recently, we learned that University of Guelph professor emeritus Thomas King is not Indigenous. The highly regarded author of literary works such as The Inconvenient Indian: A Curious Account of Native People in North America and The Back of the Turtle captured the imagination of readers interested in Indigenous experiences.
Both non-Indigenous readers, either less or more familiar with Indigenous lives, and Indigenous readers trusted and respected King. Many of us revered him.
In King, we had a source of literary representation that informed knowledge of the Indigenous experience, and inspired curiosity about who Indigenous people are — and how we might understand “their” or “our” knowledge, histories and experiences.
King’s situation is yet another in a queue of high-profile individuals such as Buffy Sainte-Marie, Carrie Bourassa and Vianne Timmons who have made dubious claims about Indigenous identities.
Some Canadian universities have begun to develop policies to address erroneous claims to indigeneity. Some have already been affected by the fallout of such cases, while others wish to mitigate potential problems of misrepresentation.
We are both “Status Indians” who consider ourselves to have connections to our respective communities, in Kanienkeha’ka (Frank) and Wendat (Annie) territories. In our own cases, and many others, these connections are also complicated by migration, work and life changes, and relationships.
For example, community consultations at the University of Manitoba and working groups at the University of Winnipeg have provided some valuable input into the problem of false claims of Indigenous identity and potential approaches to address them.
While there are many aspects to take into consideration, policies may vary from one institution or community to another. Yet across contexts, policy development about Indigenous identity will often lead to difficult conversations.
Viewing the complexities that may exist when considering an individual’s Indigenous identity as “challenges” might adversely affect our orientations toward the exercise.
Fundamentally, the constituent elements of one’s Indigenous identity ought to be treated charitably. This approach should not be understood as a dismissal of the problems experienced when one misrepresents their identity as being Indigenous. The concern here is the impact that such dialogue has upon Indigenous people and Peoples at large.
Thus, claims to indigeneity made by those without such apparent connections must be considered carefully.
Non-material harms of false claims
While prospective policies around Indigenous identity are developed to regulate situations that would lead a person to make a false claim for material benefits — like access to funding or Indigenous-specific hiring — we believe that non-material impacts, such as community well-being and trust, should also be considered.
Although the Tribal Alliance Against Frauds highlighted the dubiousness of King’s claims to indigeneity, King has owned up publicly to his misrepresentation.
In an essay in The Globe and Mail, King shared what he had learned of his non-Cherokee ancestry, family stories shared about his darker skin, as well as the impacts that his misrepresentation has had on others.
Frank Deer receives funding from the Social Sciences and Humanities Research Council of Canada.
Annie Pullen Sansfaçon receives funding from the Canada Research Chair Program. She is a member of the National Indigenous University Senior Leaders’ Association (NIUSLA).