Can scientists detect life without knowing what it looks like? Research using machine learning offers a new way

Source: The Conversation – USA – By Amirali Aghazadeh, Assistant Professor of Electrical and Computer Engineering, Georgia Institute of Technology

Many carbon-rich meteorites contain ingredients commonly found in life, but no evidence of life itself. James St. John, CC BY

When NASA scientists opened the sample return canister from the OSIRIS-REx asteroid sample mission in late 2023, they found something astonishing.

Dust and rock collected from the asteroid Bennu contained many of life’s building blocks, including all five nucleobases used in DNA and RNA, 14 of the 20 amino acids found in proteins, and a rich collection of other organic molecules. These are built primarily from carbon and hydrogen, and they often form the backbone of life’s chemistry.

For decades, scientists have predicted that early asteroids may have delivered the ingredients of life to Earth, and these findings seemed like promising evidence.

Even more surprising, these amino acids from Bennu were split almost evenly between “left-handed” and “right-handed” forms. Amino acids come in two mirror-image configurations, just like our left and right hands, called chiral forms.

On Earth, almost all biology requires the left-handed versions. If scientists had found a strong left-handed excess in Bennu, it would have suggested that life’s molecular asymmetry might have been inherited directly from space. Instead, the near-equal mixture points to a different story: Life’s left-handed preference likely emerged later, through processes on Earth, rather than being pre-imprinted in the material delivered by asteroids.

A ‘chiral’ molecule is one that is not superposable with another that is its mirror image, even if you rotate it.
NASA

If space rocks can carry familiar ingredients but not the chemical “signature” that life leaves behind, then identifying the true signs of biology becomes extremely complicated.

These discoveries raise a deeper question – one that becomes more urgent as new missions target Mars, the Martian moons and the ocean worlds of our solar system: How do researchers detect life when the chemistry alone begins to look “lifelike”? If nonliving materials can produce rich, organized mixtures of organic molecules, then the traditional signs we use to recognize biology may no longer be enough.

As a computational scientist studying biological signatures, I face this challenge directly. In my astrobiology work, I ask how to determine whether a collection of molecules was formed by complex geochemistry or by extraterrestrial biology, when exploring other planets.

In a new study in the journal PNAS Nexus, my colleagues and I developed a framework called LifeTracer to help answer this question. Instead of searching for a single molecule or structure that proves the presence of biology, we attempted to classify how likely mixtures of compounds preserved in rocks and meteorites were to contain traces of life by examining the full chemical patterns they contain.

Identifying potential biosignatures

The key idea behind our framework is that life produces molecules with purpose, while nonliving chemistry does not. Cells must store energy, build membranes and transmit information. Abiotic chemistry, produced by nonliving chemical processes, follows different rules even when its products are abundant, because it is not shaped by metabolism or evolution.

Traditional biosignature approaches focus on searching for specific compounds, such as certain amino acids or lipid structures, or for chiral preferences, like left-handedness.

These signals can be powerful, but they are based entirely on the molecular patterns used by life on Earth. If we assume that alien life uses the same chemistry, we risk missing biology that is similar – but not identical – to our own, or misidentifying nonliving chemistry as a sign of life.

The Bennu results highlight this problem. The asteroid sample contained molecules familiar to life, yet nothing within it appears to have been alive.

To reduce the risk of assuming these molecules indicate life, we assembled a unique dataset of organic materials right at the dividing line between life and nonlife. We used samples from eight carbon-rich meteorites that preserve abiotic chemistry from the early solar system, as well as 10 samples of soils and sedimentary materials from Earth, containing the degraded remnants of biological molecules from past or present life. Each sample contained tens of thousands of organic molecules, many present in low abundance and many whose structures could not be fully identified.

At NASA’s Goddard Space Flight Center, our team of scientists crushed each sample, added solvent and heated it to extract the organics – a process a bit like brewing tea. We then took the “tea” containing the extracted organics and passed it through two filtering columns that separated the complex mixture of organic molecules. Finally, the organics were pushed into a chamber where we bombarded them with electrons until they broke into smaller fragments.

Traditionally, chemists use these mass fragments as puzzle pieces to reconstruct each molecular structure, but having tens of thousands of compounds in each sample presented a challenge.

LifeTracer

LifeTracer is a unique approach for data analysis: It works by taking in the fragmented puzzle pieces and analyzing them to find specific patterns, rather than reconstructing each structure.

It characterizes those puzzle pieces by their mass and two other chemical properties and then organizes them into a large matrix describing the set of molecules present in each sample. It then trains a machine learning model to distinguish between the meteorites and the terrestrial materials from Earth’s surface, based on the type of molecules present in each.

One of the most common forms of machine learning is called supervised learning. It works by taking many input and output pairs as examples and learns a rule to go from input to output. Even with only 18 samples as those examples, LifeTracer performed remarkably well. It consistently separated abiotic from biotic origins.
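
To make that setup concrete, here is a minimal sketch in Python of this kind of small-sample supervised classification. It is not the published LifeTracer code: the random stand-in feature matrix, the choice of logistic regression and the leave-one-out cross-validation are illustrative assumptions, not a description of our actual pipeline.

```python
# Minimal sketch of the kind of supervised-learning setup described above.
# This is NOT the published LifeTracer code; the feature matrix, labels and
# choice of classifier here are illustrative assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import LeaveOneOut, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

# Hypothetical data: 18 samples (8 meteorites = abiotic, 10 terrestrial = biotic),
# each described by the relative abundance of molecular-fragment features
# (mass plus two other chemical properties, binned into columns).
n_features = 500
X = rng.random((18, n_features))      # stand-in for the real fragment matrix
y = np.array([0] * 8 + [1] * 10)      # 0 = abiotic (meteorite), 1 = biotic (Earth)

# With only 18 examples, leave-one-out cross-validation is a common way to
# estimate how well a classifier separates the two classes.
model = make_pipeline(StandardScaler(),
                      LogisticRegression(penalty="l2", max_iter=1000))
scores = cross_val_score(model, X, y, cv=LeaveOneOut())
print(f"Leave-one-out accuracy: {scores.mean():.2f}")

# Fitting on all samples lets us inspect which fragment features carry the
# most weight, analogous to the most predictive fragments discussed below.
model.fit(X, y)
weights = model.named_steps["logisticregression"].coef_.ravel()
top = np.argsort(np.abs(weights))[-5:]
print("Most influential feature columns:", top)
```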

What mattered most to LifeTracer was not the presence of a specific molecule but the overall distribution of chemical fingerprints found in each sample. Meteorite samples tended to contain more volatile compounds – they evaporate or break apart more easily – which reflected the type of chemistry most common in the cold environment of space.

This figure shows compounds identified by LifeTracer, highlighting the most predictive molecular fragments that distinguish abiotic from biotic samples. The compounds in red are linked to abiotic chemistry, while the blue compounds are linked to biotic chemistry.
Saeedi et al., 2025, CC BY-NC-ND

Some types of molecules, called polycyclic aromatic hydrocarbons, were present in both groups, but they had distinctive structural differences that the model could parse. A sulfur-containing compound, 1,2,4-trithiolane, emerged as a strong marker for abiotic samples, while terrestrial materials contained products formed through biological processes.

These discoveries suggest that the contrast between life and nonlife is not defined by a single chemical clue but by how an entire suite of organic molecules is organized. By focusing on patterns rather than assumptions about which molecules life “should” use, approaches like LifeTracer open up new possibilities for evaluating samples returned from missions to Mars, its moons Phobos and Deimos, Jupiter’s moon Europa and Saturn’s moon Enceladus.

The Bennu asteroid sample return capsule used in the OSIRIS-REx mission.
Keegan Barber/NASA via AP

Future samples will likely contain mixtures of organics from multiple sources, some biological and some not. Instead of relying only on a few familiar molecules, we can now assess whether the whole chemical landscape looks more like biology or random geochemistry.

LifeTracer is not a universal life detector. Rather, it provides a foundation for interpreting complex organic mixtures. The Bennu findings remind us that life-friendly chemistry may be widespread across the solar system, but that chemistry alone does not equal biology.

To tell the difference, scientists will need all the tools we can build — not only better spacecraft and instruments, but also smarter ways to read the stories written in the molecules they bring home.

The Conversation

Amirali Aghazadeh receives funding from Georgia Tech.

ref. Can scientists detect life without knowing what it looks like? Research using machine learning offers a new way – https://theconversation.com/can-scientists-detect-life-without-knowing-what-it-looks-like-research-using-machine-learning-offers-a-new-way-271066

A Colorado guaranteed income program could help families, but the costs are high

Source: The Conversation – USA – By Jennifer C. Greenfield, Associate Professor of Social Work, University of Denver

Guaranteed income programs have grown in popularity in the U.S. as costs of living continue to rise. Glowimages/GettyImages Plus

In Colorado, full-time workers need to earn an hourly wage of at least $36.79 to afford $2,000 in monthly rent, which is below the federal fair market rent for a Denver-area two-bedroom unit.

More than 87% of low-income Coloradans spend more than one-third of their pretax income on housing — a common benchmark for housing affordability. High costs of housing, child care and transportation in Colorado are key drivers of a statewide cost of living that is 12% above the national average.

For many Coloradans, a few hundred extra dollars a month would go a long way. Yet today, the U.S. safety net appears more tenuous than ever and is unlikely to meet all their needs.

Nationally, over the 43-day government shutdown that began on Oct. 1, 2025, 1.4 million federal workers went without paychecks. More than 150,000 jobs were cut in the U.S. private sector in October alone.

As layoffs increase, fewer people are being hired into new positions. At the same time, the federal government shutdown put families receiving federal food assistance on an emotional roller coaster as aid was promised and then pulled away.

This recent federal funding uncertainty has resurfaced the idea of state or local programs that give people money without any strings attached.

Rise of guaranteed income programs

First proposed nationally during the Nixon administration in the 1970s, guaranteed income programs have grown more popular in the U.S.

The concept got a big boost when entrepreneur Andrew Yang proposed a $1,000 monthly stipend during his bid for the Democratic Party’s presidential nomination. Yang’s proposal called for giving all Americans money to help them deal with economic problems brought on by job losses tied to automation and new technologies.

In Colorado, both Boulder and Denver have piloted guaranteed income programs. In both cases the programs were studied using rigorous randomized-control trial research designs.

We are an academic research team made up of a social scientist with a background in economic analysis, a social work scholar who studies policy approaches to reducing health and wealth disparities, and an urban planning scholar with expertise in state and local policy.

We were contracted to provide an independent evaluation and cost assessment of administering a statewide cash assistance program for Coloradans. Our estimates include projections for population changes, such as the aging workforce, and three tiers of support: from low, $25 per month, to medium, $100 per month, to high, $500 per month.

Rolling out a state government program that gives everyone money would be expensive, so we also estimated what it would cost to introduce a program just for the lowest-income Coloradans.

What are guaranteed income programs?

Guaranteed income programs are policies that support a population by giving people money on a regular basis. When the payments go to everyone, regardless of income, they’re called universal basic income programs.

More common in practice are cash dividends. Dividends offer cash assistance to a qualifying group or segment of the population, such as people below a certain income or with a qualifying disability. An example of this is Michigan’s Rx Kids Program, which provides cash assistance for pregnant people, new parents and babies.

Guaranteed income programs can be administered at the neighborhood, city or state level. Programs in Cambridge, Massachusetts; Richmond, California; and Baltimore have all shown efficacy in targeting the needs of local communities.

For example, people who were enrolled in the Rise Up Cambridge program became more likely to be employed, get enough to eat and have housing – while making more money — than those who didn’t get cash assistance.

Most cash assistance programs have succeeded. Research by GiveDirectly and the Stanford Basic Income Project likewise finds that beneficiaries of cash assistance programs are more likely to get involved in their local communities.

An ‘NBC News’ segment looks at a study of a universal basic income program. The study found that most people would spend the money on essentials like food and rent.

These programs can support people who have lost their jobs or are experiencing health crises. In Colorado, a statewide guaranteed income program could help low-income Coloradans facing high housing and child care costs.

Similarly, the program could help Colorado’s growing population of older people with fixed incomes.

It could also address fears that the rise of artificial intelligence will cause job losses and result in lower wages for many workers. Columbia Business School researchers have predicted a 5% decline in how much of the country’s total economic output goes to workers’ wages due to artificial intelligence.

Program, not panacea

While guaranteed income programs can help the people who get money from them, they are complicated, expensive and hard to administer.

Administering a guaranteed income program requires massive capacity to deploy and manage. The state would have to facilitate enrollment, keep mailing addresses or bank information updated and supervise transfers for more than 5 million Coloradans every single month. Some of this data may already exist at state agencies, but no one agency has all of this information at its disposal.

For instance, only 80% of adults in Colorado, roughly 3.3 million people, filed a tax return in 2023; only 175,000 workers filed a Family and Medical Leave Insurance claim in 2024; and just about 1 million adults are enrolled in Health First Colorado, the state’s Medicaid program. Even merging data across these agencies — an effort that is underway but is just getting started — would miss some households across the state.

It would cost more than half of Colorado’s annual general fund to give $100 a month to every Coloradan as part of a statewide income program.
Jan Butchofsky/GettyImages

In a world of finite budgets, a statewide universal program would have to be smaller per person, limiting its benefits. Giving all Colorado residents $100 per month would cost more than $7 billion each year. That’s more than half of Colorado’s annual general fund. However, it would cost less than half as much — $3.3 billion — to provide $500 per month to the 554,000 Coloradans who are below the federal poverty line, which is $32,150 for a family of four.
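
The arithmetic behind those two estimates is simple enough to check by hand; here is a rough calculation in Python, assuming an approximate statewide population of 5.9 million people:

```python
# Back-of-the-envelope check of the cost figures cited above.
colorado_population = 5_900_000    # approximate number of Colorado residents (assumed)
below_poverty_line = 554_000       # Coloradans below the federal poverty line

universal_100 = colorado_population * 100 * 12   # $100/month to everyone
targeted_500 = below_poverty_line * 500 * 12     # $500/month to the poorest residents

print(f"Universal $100/month: ${universal_100 / 1e9:.1f} billion per year")  # ~$7.1 billion
print(f"Targeted $500/month:  ${targeted_500 / 1e9:.1f} billion per year")   # ~$3.3 billion
```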

Finding this money within the state budget could require cutting spending elsewhere — potentially from other state-funded programs that benefit low-income families.

Trade-offs for policymakers

If federal food assistance, including the Supplemental Nutrition Assistance Program, is disrupted again, either by more funding freezes or new changes in eligibility rules, a statewide supportive assistance program could help offset the impact.

In 2024, the average American getting SNAP benefits received $6.11 per day, or less than $200 a month. One in 10 Coloradans, 584,500 people, receive SNAP benefits.

However, a guaranteed income program might risk pushing some households’ income above the eligibility cutoff for programs like SNAP — creating unintended consequences that harm household welfare. It’s unclear whether assistance from a basic income program would count as reportable income.

Where AI-driven job loss is concerned, guaranteed income programs could smooth transitions for laid-off workers needing to upskill or move industries. However, guaranteed income programs are not likely to be sufficient in scope or generous enough to cushion workers from a potential restructuring of the labor market, which may have already begun.

Assessing public support

Given the high costs of creating a statewide guaranteed income program for Colorado, getting substantial public buy-in would be necessary.

In 2025, Colorado voters approved new funding for a free lunch program for all students regardless of family income.
Helen H. Richardson/GettyImages

Recent election results, in which voters approved a new tax to fund free school meals for all students, suggest that Coloradans can support programs that help the most vulnerable families.

A recent privately funded poll in Colorado, which was informed by our evaluation’s estimates, found that 56% of voters would support a monthly $500 payment for all new parents, people experiencing homelessness, and low-income households. The poll found that Coloradans were less likely to support a program providing a smaller stipend to all Coloradans, regardless of their income.

Taken together, these polling results suggest that many Coloradans would support some form of need-based income assistance. However, the price of operating any statewide guaranteed income program could give them sticker shock.

The Conversation

Jennifer C. Greenfield was hired by Thinking Forward, LLC and the Denver Basic Income Project as a consultant to provide cost estimates and analysis of a potential cash dividend program in Colorado, as described in this article.

Kaitlyn M. Sims receives funding from the Wisconsin Department of Children and Families, the Arnold Ventures Foundation, and the Institute for Humane Studies. She was contracted by Thinking Forward, LLC, and the Denver Basic Income Project to provide a cost-benefit assessment of a statewide cash dividend for the state of Colorado.

Stefan Chavez-Norgaard was contracted by Thinking Forward, LLC, to provide a cost-benefit analysis and broad assessment of a statewide cash dividend program for the State of Colorado. He has also connected with organizations mentioned in this article, including the Denver Basic Income Project (DBIP) and the Fund 4 Guaranteed Income, supporter of the Compton Pledge.

ref. A Colorado guaranteed income program could help families, but the costs are high – https://theconversation.com/a-colorado-guaranteed-income-program-could-help-families-but-the-costs-are-high-269082

Trump administration replaces America 250 quarters honoring abolition and women’s suffrage with Mayflower and Gettysburg designs

Source: The Conversation – USA – By Seth T. Kannarr, Ph.D. Candidate in Geography, University of Tennessee

Coins convey important messages about what it means to be an American; the White House knows this. Max Zolotukhin, iStock/Getty Images Plus

The culture wars have arrived at the U.S. Mint.

Commemorative coins aimed at celebrating America’s 250th anniversary in 2026 were unveiled by the mint on Dec. 10, 2025, and they reflect the country’s currently divided politics and views of history.

In an unexpected move, most of the original designs for the “America 250” coins that were approved by two official committees in 2024 were abandoned and replaced. Most notably, the Black Abolition, Women’s Suffrage and Civil Rights quarters were replaced with quarters that instead commemorate the Mayflower Compact, Revolutionary War and the Gettysburg Address.

As a cultural geographer and coin collector, I believe the release of these new dimes, quarters and half-dollars offers a reminder that coins, despite their small size, share important messages about what it means to be an American.

This isn’t the first time politics has invaded the design of U.S. coins. The history contained in their designs is often negotiated and politicized, and that history is embedded in coins as public memory.

From Congress to your pocket

The production of these America 250 coins, part of the celebration formally referred to as the “American Semiquincentennial,” was authorized by the Circulating Collectible Coin Redesign Act of 2020, which was signed into law by President Donald Trump in January 2021.

This reflects the long-standing formal process for designing and producing U.S. coins, both regular circulating ones and commemorative ones.

First, Congress calls for the production of new coins. Then, design ideas and draft art are solicited from medallic artists at the U.S. Mint, who create the raised, three-dimensional designs that are sculpted into models.

Two groups – the Citizens Coinage Advisory Committee, which exists to advise the U.S. Secretary of the Treasury on the designs of all U.S. coins and medals, and the federal Commission of Fine Arts, which provides advice to the federal government on matters of design and aesthetics, including memorials, buildings and coins – work together over time, including through public meetings, to review proposed designs and recommend revisions and selections of specific designs.

The recommendations of the advisory committee and the commission have in the past proved valuable to shaping the final depictions portrayed in coin engravings, but the final authority and decisions come from the Secretary of the Treasury.

In the case of the America 250 coins, the designs were discussed across multiple meetings in 2024, with the final report from the Commission of Fine Arts published on Oct. 24, 2024.

The final recommendations were for a dime that bears a “Liberty Over Tyranny” design; five quarters that would have the “Declaration of Independence,” “U.S. Constitution,” “Abolitionism,” “Suffrage” and “Civil Rights” as their respective designs; and a half-dollar that would bear a “Participatory Democracy” design.

Why the big switch?

The original dime and half-dollar images remained unchanged in the officially accepted designs unveiled on Dec. 10, 2025. However, all quarter designs were changed, eliminating the proposed images representing the Declaration of Independence, U.S. Constitution, Abolitionism, Suffrage and Civil Rights, with the exception of the reverse side of the Declaration of Independence quarter.

No official explanation for these changes was provided during the U.S. Mint’s design unveiling event. But it is not hard to see how the nation’s current political climate, in which President Donald Trump has complained that the Smithsonian focuses too much on “how bad slavery was” and not enough on the “brightness” of the country’s history, may have played a role.

This is significant for two primary reasons. First, the process for choosing the designs was supposed to reflect public input via the public meetings held by the two advisory committees. But these fundamental changes were ultimately decided by the Secretary of the Treasury out of the public eye, likely in concert with other members of the Trump administration.

Second, these changes to the America 250 quarters reinforce a more traditional and exclusionary view of the nation’s founding and continued progress. The new designs sideline Americans’ historical struggle against oppression and social injustice and are demonstrative of the Trump administration’s collective efforts to bar government statements and initiatives related to diversity, equity and inclusion.

The selective editing of American memory portrayed on the America 250 coins is not only a breach in established process, but it’s also a missed opportunity to provide new and diverse representation in an easy, yet meaningful, way.

Public memory in your pocket

Ever since the U.S. Mint opened in Philadelphia in 1792, coins and currency with depictions of American figures, symbolic representations and iconic inscriptions have circulated throughout the nation and the world.

For example, the Fifty States Quarters program, which ran from 1999 to 2008, was very popular among Americans who appreciated seeing different designs on quarters that were emblematic of their own state’s identity. The Vermont version of the quarter, for instance, included an image of Camel’s Hump Mountain and maple trees with sap buckets hung on them.

Scholars have argued that coins and currency are examples of everyday or banal nationalism, which refers to the often unnoticed expressions of national identity that persist throughout material culture and society.

Coins occupy fleeting but familiar moments throughout our lives. You can find them in routine places, with little attention given to their presence, such as the bottom of your junk drawer, in the cup holder in your car or abandoned on the sidewalk.

What coins do you have in your pocket?
Grace Cary, Getty Images

To cultural geographers like me, coins serve as vessels of passive and active public memory. They subtly signal values, and the figures and events portrayed on government-issued coins are reinforced as important to American culture and history.

This understanding further highlights the significance of the recent design changes to the America 250 coins. The removal of imagery of women, people of color and historic events important to marginalized people is not a subtle choice.

Whether someone is an active coin collector or just looking to buy a candy bar at a convenience store, all people participate in the reproduction of American public memory. And they do this regardless of which narratives of public memory are chosen to be shared by the federal government.

What comes next?

Recent controversies regarding the end of production of the U.S. penny and the proposal for a new one-dollar coin commemorating President Donald Trump illustrate the American public’s continued interest and attention to coins and currency despite an increasingly digital age. The redesign of these America 250 coins is yet another story in this ongoing saga.

Historically, coin or currency designs that are unpopular with the general public have been ripe for defacement, such as the scratching out of public figures or the complete destruction of the piece.

Although sometimes illegal, such an act sends a powerful political message of subversion against the government. This tends to be more common in other nations; in the U.S., it rarely goes beyond minor graffiti drawn onto paper currency.

If the U.S. Mint maintains the product schedule of previous years, the America 250 coins should begin to circulate in February 2026. It may take time for the coins to arrive at banks, and even longer for them to show up as change from grocery stores, convenience shops and beyond.

Whether you believe in the appropriateness of the new designs or not, the coins and their backstory can serve as a prompt for discussion with friends and family, or even educating children, about what it means to be an American. The power – and the coins – will soon be in your hands.

The Conversation

Seth T. Kannarr does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

ref. Trump administration replaces America 250 quarters honoring abolition and women’s suffrage with Mayflower and Gettysburg designs – https://theconversation.com/trump-administration-replaces-america-250-quarters-honoring-abolition-and-womens-suffrage-with-mayflower-and-gettysburg-designs-271811

Sharks and rays get a major win with new international trade limits for 70+ species

Source: The Conversation – USA (2) – By Gareth J. Fraser, Associate Professor of Evolutionary Developmental Biology, University of Florida

Watching a whale shark swim at the Georgia Aquarium. Zac Wolf/Wikimedia Commons, CC BY-SA

The world’s oceans are home to an exquisite variety of sharks and rays, from the largest fishes in the sea – the majestic whale shark and manta rays – to the luminescent but rarely seen deep-water lantern shark and guitarfishes.

The oceans were once teeming with these extraordinary and ancient species, which evolved close to half a billion years ago. However, the past half-century has posed one of the greatest tests yet to their survival. Overfishing, habitat loss and international trade have cut their numbers, putting many species on a path toward extinction within our lifetimes.

Scientists estimate that 100 million (yes, million) sharks and rays are killed each year for food, liver oil and other trade.

The volume of loss is devastatingly unsustainable. Overfishing has sent oceanic shark and ray populations plummeting by about 70% globally since the 1970s.

A manta ray’s wingspan can be 12 to 22 feet, and some giant ocean rays can grow even larger.
Jon Hanson/Flickr, CC BY-SA

That’s why countries around the world agreed in December 2025 to add more than 70 shark and ray species to an international wildlife trade treaty’s list for full or partial protection.

It’s an important move that, as a biologist who studies sharks and rays, I believe is long overdue.

Humans put shark species at risk of extinction

Sharks have had a rough ride since the 1970s, when overfishing, habitat loss and international trade in fins, oil and other body parts of these enigmatic sea dwellers began to affect their sensitive populations. The 1975 movie “Jaws” and its portrayal of a great white shark as a mindless killing machine didn’t help people’s perceptions.

One reason shark populations are so vulnerable to overfishing, and less capable of recovering, is the late timing of their sexual maturity and their low numbers of offspring. If sharks and rays don’t survive long enough, the species can’t reproduce enough new members to remain stable.

Losing these species is a global problem because they are vital for a healthy ocean, in large part because they help keep their prey in check.

The bowmouth guitarfish, shown here at the Shedd Aquarium in Chicago, is considered critically endangered.

Endangered and threatened species listings, such as the International Union for Conservation of Nature’s Red List, can help draw attention to sharks and rays that are at risk. But because their populations span international borders, with migratory routes around the globe, sharks and rays need international protection, not just local efforts.

That’s why the international trade agreements set out by the Convention on International Trade in Endangered Species, or CITES, are vital. The convention attempts to create global restrictions that prevent trade of protected species to give them a chance to survive.

New protections for sharks and rays

In early December 2025, the CITES Conference of the Parties, made up of representatives from 184 countries, voted to initiate or expand protection against trade for many species. The votes included adding more than 70 shark and ray species to the CITES lists for full or restricted protection.

The newly listed or upgraded species include some of the most charismatic shark and ray species.

The whale shark, one of only three filter-feeding sharks and the largest fish in the ocean, and the manta and devil rays have joined the list that offers the strictest restrictions on trade, called Appendix I. Whale sharks are at risk from overfishing as well as ship strikes: Because they feed at the surface, chasing zooplankton blooms, these ocean giants can be hit by vessels, especially now that they are considered a tourism must-see.

Manta rays are filter feeders. Their gills strain tiny organisms from the water as they glide.
Gordon Flood/Flickr, CC BY

Whale sharks now join this most restrictive list with more well-known, cuddlier mammals such as the giant panda and the blue whale, and they will receive the same international trade protections.

The member countries of CITES agree to the terms of the treaty, so they are legally bound to implement its directives to suspend trade. For the tightest restrictions, under Appendix I, import and export permits are required and allowed only in exceptional circumstances. Appendix II species, which aren’t yet threatened but could become threatened without protections, require export permits. However, the treaty terms are essentially a framework for each member government to then implement legislation under national laws.

Another shark joining the Appendix I list is the oceanic whitetip shark, an elegant, long-finned ocean roamer that has been fished to near extinction. Populations of this once common oceanic shark are down 80% to 95% in the Pacific since the mid-1990s, mostly due to the increase in commercial fishing.

An oceanic whitetip shark (Carcharhinus longimanus) swims with pilot fish. Whitetip sharks are threatened in part by demand for their fins and being caught by commercial fisheries.
NOAA Fisheries

Previously the only sharks or rays listed on Appendix I were sawfish, a group of rays with a long, sawlike projection surrounded by daggerlike teeth. They were already listed as critically endangered by the IUCN’s Red List, which assesses the status of threatened and endangered species, but it was up to governments to propose protections through CITES.

Other sharks gaining partial protections for the first time include deep-sea gulper sharks, which have been prized for their liver oil, used in cosmetics. Gulper shark populations have been decimated by unsustainable fishing practices. They will now be protected under Appendix II.

Gulper sharks are slim, deep-water dwellers, typically around 3 to 5 feet long.
D Ross Robertson/Smithsonian via Wikimedia Commons

Appendix II listings, while not as strong as Appendix I, can help populations recover. Great white shark populations, for example, have recovered since the 1990s around the U.S. after being added to the Appendix II list in 2005, though other populations in the northwest Atlantic and South Pacific are still considered locally endangered.

Tope and smooth-hound sharks were also added to the Appendix II list in 2025 for protection from the trade of their meat and fins.

Several species of guitarfishes and wedgefishes, odd-shaped rays that look like they have a mix of shark and ray features and have been harmed by local and commercial fishing, finning and trade, were assigned a CITES “zero-quota” designation to temporarily curtail all trade in their species until their populations recover.

An Atlantic guitarfish (Rhinobatus lentiginosus) swims in the Gulf of Mexico.
SEFSC Pascagoula Laboratory; Collection of Brandi Noble/Flickr, CC BY

These global protections raise awareness of species, prevent trade and overexploitation and can help prevent species from going extinct.

Drawing attention to rarely seen species

Globally, there are about 550 species of shark today and around 600 species of rays (or batoids), the flat-bodied shark relatives.

Many of these species suffer from their anonymity: Most people are unfamiliar with them, and efforts to protect these more obscure, less cuddly ocean inhabitants struggle to draw attention.

So, how do we convince people to care enough to help protect animals they do not know exist? And can we implement global protections when most shark-human interactions are geographically limited and often support livelihoods of local communities?

Increasing people’s awareness of ocean species at risk, including sharing knowledge about why their numbers are falling and the vital roles they play in their ecosystem, can help.

The new protections for sharks and rays under CITES also offer hope that more global regulations protecting these and other shark and ray species will follow.

The Conversation

Gareth J. Fraser is an Associate Professor at the University of Florida, and receives funding from the National Science Foundation (NSF).

ref. Sharks and rays get a major win with new international trade limits for 70+ species – https://theconversation.com/sharks-and-rays-get-a-major-win-with-new-international-trade-limits-for-70-species-271386

Data centers need electricity fast, but utilities need years to build power plants – who should pay?

Source: The Conversation – USA (2) – By Theodore J. Kury, Director of Energy Studies, University of Florida

Data centers need lots of power – but how much, exactly? alacatr/iStock/Getty Images Plus

The amount of electricity data centers use in the U.S. in the coming years is expected to be significant. But regular reports of proposals for new ones and cancellations of planned ones mean that it’s difficult to know exactly how many data centers will actually be built and how much electricity might be required to run them.

As a researcher of energy policy who has studied the cost challenges associated with new utility infrastructure, I know that uncertainty comes with a cost. In the electricity sector, it is the challenge of state utility regulators to decide who pays what shares of the costs associated with generating and serving these types of operations, sometimes broadly called “large load centers.”

States are exploring different approaches, each with strengths, weaknesses and potential drawbacks.

A new type of customer?

For years, large electricity customers such as textile mills and refineries have used enough electricity to power a small city.

Moreover, their construction timelines were more aligned with the development time of new electricity infrastructure. If a company wanted to build a new textile mill and the utility needed to build a new gas-fired power plant to serve it, the construction on both could start around the same time. Both could be ready in two and a half to three years, and the textile mill could start paying for the costs necessary to serve it.

Modern data centers use a similar amount of electricity but can be built in nine to 12 months. To meet that projected demand, construction of a new gas-fired power plant, or a solar farm with battery storage, must begin a year – maybe two – before the data center breaks ground.

During the time spent building the electrical supply, computing technology advances, including both the capabilities and the efficiency of the kinds of calculations artificial intelligence systems require. Both factors affect how much electricity a data center will use once it is built.

Technological, logistical and planning changes mean there is a lot of uncertainty about how much electricity a data center will ultimately use. So it’s very hard for a utility company to know how much generating capacity to start building.

Keeping older coal plants running may be an expensive way to generate power.
Ulysse Bellier/AFP via Getty Images

Handling the risks of development

This uncertainty costs money: A power plant could be built in advance, only to find out that some or all of its capacity isn’t needed. Or no power plant is built, and a data center pops up, competing for a limited supply of electricity.

Either way, someone needs to pay – for the excess capacity or for the increased price of what power is available. There are three possible groups that might pay: the utilities that provide electricity, the data center customers, and the rest of the customers on the system.

However, utility companies have largely ensured their risk is minimal. Under most state utility-regulation processes, state officials review spending proposals from utility companies to determine what expenses can be passed on to customers. That includes operating expenses such as salaries and fuel costs, as well as capital investments, such as new power plants and other equipment.

Regulators typically examine whether proposed expenses are useful for providing service to customers and reasonable for the utility to expect to incur. Utilities have been very careful to provide their regulators with evidence about the costs and effects of proposed data centers to justify passing the costs of proposed investments in new power plants along to whomever the customers happen to be.

Regulators, then, are left to equitably allocate the costs to the prospective data center customers and the rest of the ratepayers, including homes and businesses. In different states, this is playing out differently.

Kentucky’s approach to usefulness

Kentucky is attempting to address the demand uncertainty by conditionally approving two new natural gas-fired generators in the state. However, the utility companies – Louisville Gas & Electric and Kentucky Utilities – must demonstrate that those plants will actually be needed and used. But it’s not clear how they could do that, especially considering the time frames involved.

For instance, suppose the utility has a letter of agreement or even a contract with a new data center or other large customer. That might be sufficient proof for the regulator to approve charging customers for the costs of building a new power plant.

But it’s not clear what would happen if the data center ends up not being built, or needing much less power than expected. If the utility can’t get the money from the data center company – because utilities bill customers based on actual usage – that leaves regular consumers on the hook.

A data center in Columbus, Ohio, is just one of many being built or proposed around the country.
Eli Hiller/For The Washington Post via Getty Images

Ohio’s ‘demand ratchet’ and credit guarantee

In Ohio, the major power company AEP has a specific rate plan for data centers and other large electricity customers. One element, called a “demand ratchet,” is designed to mitigate month-to-month uncertainty in electricity consumption by data centers. The data center’s monthly bill is based on the current month’s demand or 85% of the highest monthly demand from the previous 11 months – whichever is higher.

The benefit is that it protects against a data center using huge amounts of electricity one month and very little the next, which would otherwise yield a much lower bill. The ratchet helps ensure that the data center is paying a significant share of the cost of providing enough electricity, even if it doesn’t use as much as was expected.
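
As a rough illustration of how such a ratchet works (a sketch based on the 85% figure described above, not on AEP's actual tariff language), the billed demand each month can be computed like this:

```python
# Illustrative sketch of an 85% demand ratchet, based on the description above,
# not on any utility's actual tariff. Billed demand each month is the greater of
# the current month's peak demand or 85% of the highest peak in the prior 11 months.
def billed_demand(current_mw: float, prior_11_months_mw: list[float]) -> float:
    ratchet_floor = 0.85 * max(prior_11_months_mw) if prior_11_months_mw else 0.0
    return max(current_mw, ratchet_floor)

# Example: a data center that peaked at 100 MW earlier in the year but uses
# only 20 MW this month is still billed as if it demanded 85 MW.
history = [100, 90, 95, 60, 40, 30, 25, 20, 20, 20, 20]  # hypothetical monthly peaks, in MW
print(billed_demand(20, history))   # 85.0 -> the ratchet floor applies
print(billed_demand(110, history))  # 110  -> current demand exceeds the floor
```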

This ratchet effectively locks in the data center’s payments for 12 months, but regulators might expect a longer commitment from the center. For instance, Florida’s utilities regulator has approved an agreement that would require a data center company to pay for 70% of the agreed-upon demand in their entire electricity contract, even if the company didn’t use the power.

Another aspect of Ohio’s approach addresses the risk of changing business plans or technology. AEP requires a credit guarantee, like a deposit, letter of credit or parent company guarantee of payment, equal to 50% of the customer’s expected minimum bill under the contract. While this theoretically reduces the risk borne by other customers, it also raises concerns.

For example, a utility may not end up signing contracts directly with a large, well-known, wealthy technology company but with a subsidiary corporation with a more generic name – imagine something like “Westside Data Center LLC” – created solely to build and operate one data center. If the data center’s plans or technology changes, that subsidiary could declare bankruptcy, leaving the other customers with the remaining costs.

Harnessing strength in flexibility

A key advantage of these new types of customers is that they are extremely nimble in the way they use electricity.

If data centers can make money based on their flexibility, as they have in Texas, then a portion of those profits can be returned to the other customers that shared the investment risk. A similar mechanism is being implemented in Missouri: If the utility makes extra money from large customers, then 65% of that revenue increase is returned to the other customers.

Change is coming to the U.S. electricity system, but nobody is sure how much. The methods by which states are trying to allocate the cost of that uncertainty vary, but the critical element is understanding their respective strengths and weaknesses to craft a system that is fair for everyone.

The Conversation

Theodore Kury is the Director of Energy Studies at the University of Florida’s Public Utility Research Center, which is sponsored in part by the Florida electric and gas utilities, the Florida Public Service Commission and the Office of Public Counsel, the Consumer Advocate for the State. However, the Center maintains sole editorial control of this and any other work.

ref. Data centers need electricity fast, but utilities need years to build power plants – who should pay? – https://theconversation.com/data-centers-need-electricity-fast-but-utilities-need-years-to-build-power-plants-who-should-pay-271048

How I rehumanize the college classroom for the AI-augmented age

Source: The Conversation – USA (2) – By Sean Cho Ayres, Assistant Professor of English – AI Writing, Kennesaw State University

Generative AI looms widely in higher education. Can focusing on social interactions prepare students well for an AI-infused workplace? Fuse via Getty Images

It’s week one of the semester, the first day of class: 20 students, mostly freshmen, sit silently waiting for our English 101 Writing Composition class to begin. Most have one AirPod in, listening to whatever their Spotify AI DJ thinks they want to listen to; some scroll past AI-selected ads for drop-shipped fast fashion. And then someone who has forgotten to silence their phone opens TikTok, and the 6-7 second sound blares. They hurriedly close the app, no apology, not even a half-hearted laugh from their classmates.

Welcome to the contemporary college classroom.

I am a college professor working at the intersection of humanities and artificial intelligence, and yes, I believe the latter not only threatens to devalue college, but it also risks stripping humanity from our lives altogether.

It doesn’t have to be this way. As AI automates away parts of work and life, it challenges the next generation of the workforce to re-instill the importance of interpersonal social skills, and I see the college classroom as the ideal place for this rehumanization to take place.

Here’s my framework for building a classroom centered around student socialization. The goal: Equip students with the vital human skills needed in the AI-augmented workforce.

Target: Bring humanity to work

Young adults sit in college classes fully aware that an AI-infused workplace is just on the other side of graduation. But they – and everyone else – have little idea how best to prepare for it.

How to make this work for today’s college students? Known for the infamous Gen Z stare, for faces glued to screens, and for fidgeting, doomscrolling thumbs, Gen Z has been pegged as the generation that lacks the social skills needed to succeed in an AI-augmented workforce.

To me, this represents a clear tension between the young adults they are and the adults they need to be.

It’s easy for my rhetoric to give off “kids these days” vibes. But I’m a young millennial. Which is to say, I too don’t know what to do with my hands at dinner parties and have to make a conscious effort to maintain eye contact.

Simply put: I teach what I wish I would have been taught.

Shifting the mentality of the classroom

In the college classroom, it’s all too easy to talk at the students for 90 minutes – to just be a professor with a slide deck who tosses in a few canned jokes that you know work because you’ve already said them a dozen times. Time passes, and you hear the next class waiting outside the door.

“All right, y’all,” you say. “Let’s get outta here.”

The students dash off to their dorm rooms or dining hall, and wait to do the homework until midnight. You wait a few weeks too long to grade it – also at midnight, right before midterm grades are due – like two digital ships passing each other in the moonlight.

Instead, I offer a different mindset: The classroom is not some intermediary between two computers – the assignment creator and the assignment doer – a setup that only serves to build an “us versus them” mentality between student and professor.

Rather, it’s us together in the battle against the midterm or final exam.

“OK, that sounds great, random guy on the internet,” I hear you say from the other side of the screen. “But how?”

Small social interactions

We academics like to use fancy phrases like “student-centered classroom” or “student-driven approach.” What this means for me is simple: I constantly interact with the students and make social interactions integral to the classroom experience.

I used to hear professors brag about knowing each of their students’ names, so I made it a priority to do the same. But now I don’t think that’s enough. Instead, I’m asking the frat bros-future-businessmen and the honors-society-students-soon-to-be-doctors to get to know each other as peers and future colleagues.

As I shuffle into class and try to remember if I capitalized my first pet’s name as I log into the computer, I simply ask students to tell each other: What was the most challenging question on the homework? What did you do this weekend? And more importantly, what did you wish you did?

At the end of class, I give five minutes for students to plan out when they’re going to complete the homework, and then I have them talk to the person next to them about it.

These conversations often lead to friendships formed over common struggles: Alex would love to do his English paper tonight but has to study for his bio test, and Professor Smith’s exams are the worst. As luck has it, James is also in the lecture. “Man, you’re in the class too? Where do you sit? Professor Smith talks way too fast!”

Social interactions in class can be a vital way to teach crucial social skills.
Visual Vic via Getty Images

Centering the importance of public speaking

Sure, in my writing-intensive classes we turn in term papers, they get grades, and yes, some students use AI. That’s all fine and well, but that’s not the important part. Instead, I’m interested in students knowing the material well enough to articulate it to the group – well enough to tell us why the subject matters to them, to us and to the world at large.

So we spend a week where students give a short 5-10 minute presentation on their work. “Tell us why fast fashion is destroying the planet. Tell me why we need to care more about the future of pork and factory farming practices.”

And for those brief moments of positive peer pressure as the students stand at the front of the class, it doesn’t matter that ChatGPT helped with the commas, did the googling or even wrote the entire conclusion because “I was just getting too tired.” What matters is the students’ ability to look a group of 20 peers in the eye and bring the private work of thinking, writing and sometimes even chatbot-prompting into the public sphere.

The point isn’t whether students used AI to compose the words; it’s whether the ideas feel like they originate from the person behind the words. Whether they’ve wrestled with them long enough to know what they’re trying to say. If ChatGPT helped them get there, fine. What matters is what they did after. Did they question it? Did they revise it? Did they decide it wasn’t quite right and try again?

That’s the work I care about. To me, it’s the difference between turning something in and actually turning something over — in your mind, in your hands, to the people around you. That’s what makes it real. What makes it theirs. What makes it college.

Back in the classroom …

It’s week 12. I just sent my students off into a small-group discussion on “the value of adapting AI-augmented practices into your daily life.” Five minutes go by. “All right, y’all, let’s bring it back in.” But no one stops talking.

And in that small moment as I pull my phone out to play the Snapchat notification sound, Rizzlord soundtrack or whatever the sound meme of the day is to get their attention, I know I’ve done my small part as an educator: teaching students how to be human again.

The Conversation

Sean Cho Ayres does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

ref. How I rehumanize the college classroom for the AI-augmented age – https://theconversation.com/how-i-rehumanize-the-college-classroom-for-the-ai-augmented-age-269168

Donor-advised funds have more money than ever – and direct more of it to politically active charities

Source: The Conversation – USA (2) – By Brian Mittendorf, Professor of Accounting, The Ohio State University

Using investment accounts for charitable gifts could be influencing giving in unexpected ways. sesame/DigitalVision Vectors via Getty Images

Charitable giving in the United States has changed significantly in recent years.

Two of the biggest changes are the swift growth of donor-advised funds and the increasingly blurred lines between charity and politics.

Donor-advised funds, or DAFs, are charitable investment accounts. After donors put money or other financial assets into these accounts, the assets are technically no longer theirs. But they do get a say in how those funds are invested, as well as when and which charities should get some of the money.

Americans gave nearly US$90 billion to DAFs in 2024 – up from the $20 billion DAFs took in a decade earlier.

One distinguishing feature of DAF donors is that when they dispatch money from their charitable accounts, they fund politically engaged charities at higher rates than people who give directly to charity.

That’s what we, two scholars who research the flow of money between donors and nonprofits, found when we conducted a study examining the links between donor-advised funds and donations to charities that are politically active. Our results will be published in a forthcoming issue of Nonprofit Policy Forum, a peer-reviewed academic journal.

Resembling family foundations

As charitable investment accounts, donor-advised funds straddle a middle ground between family foundations and organizations doing direct charitable work.

Like foundations, DAFs give donors a sense of long-term control over funds they’ve designated for charitable spending in the future. But because DAFs are accounts held within certified public charities, often those affiliated with financial institutions like Fidelity and Vanguard, they offer added tax benefits and simplicity.

DAFs let donors take charitable tax deductions immediately, and then decide later how much of that money to give to which charity – and when – by telling the account managers what to do.

Timing gifts this way can increase the tax advantages tied to charitable giving through tax deductions. And DAFs help donors do this without the expenses, staffing and complexity of running their own foundations.

These advantages – coupled with persuasive marketing campaigns – have helped spur a DAF boom. Donor-advised funds held $326 billion in 2024.

A proponent of donor-advised funds explains how they work and why many donors like to use them.

More politically engaged charitable activity

We consider charities “politically engaged” if they either do lobbying or have related organizations that participate in political campaigning. These groups span the political spectrum: For example, they include both the National Rifle Association and the Environmental Defense Fund.

We gathered data from the nearly 250,000 charities in the U.S. that filed a 990 form with the Internal Revenue Service from 2020 to 2022. The country’s largest charities must file these informational forms annually and make them available to the public.

When we crunched the numbers, we discovered that nearly 6% of payments from DAF accounts go to politically engaged charities. In comparison, other funding sources paid out only 3.6% to politically engaged charities.

That means the funding rate from DAFs is 1.7 times the benchmark level. When it comes to fringe hate groups and antigovernment charities, overall funding levels are low, but the DAF difference is even more pronounced – DAF donors fund these groups at a rate 3.5 times that of other donors.
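
To see where that 1.7 multiple comes from, here is the back-of-the-envelope arithmetic using the rounded figures reported above; the underlying study may use more precise numbers, so treat this only as a rough check:

\[
\frac{6\%}{3.6\%} \approx 1.7
\]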

Giving donors more privacy

One other advantage DAFs offer donors is that they provide more anonymity than if donors give to a cause directly.

Under current disclosure rules, when donors give more than $5,000 to any charitable nonprofit – whether it’s their local food bank or animal shelter or art museum – both the charity and the IRS have to know who they are. When donors give that much to private foundations, it becomes part of the public record as well.

But when donors give through their DAFs – no matter how much more than $5,000 they give – even the charity that ultimately gets the money may not know the donor’s identity.

This anonymity may be one reason donors more often use DAFs to give to organizations that engage in politics, either directly or indirectly.

To be sure, charities are permitted to engage in certain kinds of political activity to varying degrees. In fact, U.S. charities have long been important public policy advocates. And it is also understandable that donors might want to remain anonymous. Yet the use of DAFs to fund fringe groups suggests this lack of transparency is not always a good thing.

The rules around donor disclosure were originally set up to prevent private interests from abusing the system.

This is the reason that foundations – like those set up by tech billionaire Elon Musk or Google co-founder Larry Page – must publicly disclose both their major donors and their grant recipients.

But when these foundations make grants to donor-advised funds, the digital trail becomes a dead end. The public has no way to know which charities the foundations are ultimately funding with their grants after the money enters a DAF’s coffers.

Consistent with this arrangement, we found that the DAFs that get more grants from foundations tend to fund politically active organizations at higher rates.

Changing the charity landscape

As DAFs continue to expand, further research can help cast light on what effect they will ultimately have. Though much research and many proposed new rules have focused on whether Americans need to move the money in their DAFs out to charities more quickly, we’re focused on where that money goes.

In examining tax filings, we have also learned that some charitable sectors get more money from DAFs than others.

For example, social service nonprofits, which include homeless shelters and food banks, get 25% of all giving but only 20% of DAF giving. A 5-percentage-point gap may seem small, but with tens of billions of dollars flowing through DAFs each year, it can represent a seismic shift in where charitable dollars go.

And we’re now examining whether the size of a charity’s DAF program can influence that organization’s behavior. The data collected from 990 forms suggests that even community foundations may become less focused on their local communities when they court DAF donors.

The Conversation

Helen Flannery is employed by the Institute for Policy Studies, a progressive think tank that has done research related to charity reforms.

Brian Mittendorf does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

ref. Donor-advised funds have more money than ever – and direct more of it to politically active charities – https://theconversation.com/donor-advised-funds-have-more-money-than-ever-and-direct-more-of-it-to-politically-active-charities-270758

Whether Netflix or Paramount buys Warner Bros., entertainment oligopolies are back – bigger and more anticompetitive than ever

Source: The Conversation – USA (2) – By Matthew Jordan, Professor of Media Studies, Penn State

Warner Bros. was one of five studios that joined forces with Wall Street investors to gobble up independent theaters and movie producers in the 1920s. Nextrecord Archives/Getty Images

News of Netflix’s bid to buy Warner Bros. last week sent shock waves through the media ecosystem.

The pending US$83 billion deal is being described as an upending of the existing entertainment order, a sign that it’s now dominated by the tech platforms rather than the traditional Hollywood power brokers.

As David Zaslav, CEO of Warner Bros. Discovery, put it, “The deal with Netflix acknowledges a generational shift: The rules of Hollywood are no longer the same.”

Maybe so. But what are those rules? And are they being rewritten, or will moviegoers and TV audiences simply find themselves back in the early 20th century, when a few powerful players directed the fate of the entertainment industry?

The rise of the Hollywood oligopolies

As Hollywood rose to prominence in the 1920s, theater chain owner Adolph Zukor spearheaded a new business model.

Cartoon of man straddling three different horses and cracking them with a whip.
Lew Merrell’s 1920 cartoon for Exhibitors Herald, a film industry trade publication, depicts Adolph Zukor performing the feat of vertical integration.
Wikimedia Commons

He used Wall Street financing to acquire and merge his film production company, Famous Players-Lasky, the film distributor Paramount and the Balaban and Katz chain of theaters under the Paramount name. Together, they created a vertically integrated studio that would emulate the assembly line production of the auto industry: Films would be produced, distributed and shown under the same corporate umbrella.

Meanwhile, Harry, Albert, Sam and Jack Warner – the Warner brothers – had been pioneer theater owners during the nickelodeon era, the period from roughly 1890 to 1915, when movie exhibition shifted from traveling shows to permanent, storefront theaters called nickelodeons.

They used the financial backing of investment bank Goldman Sachs to follow Zukor’s Hollywood model. They merged their theaters with several other companies: the Vitagraph production and distribution company, the Skouras Brothers theater chain and, eventually, First National.

But the biggest of the Hollywood conglomerates was Metro-Goldwyn-Mayer, created when the Loews theater chain merged Metro Pictures, Goldwyn Pictures and Mayer Pictures.

At its high point, MGM had the biggest stars of the day under noncompete contracts and accounted for roughly three-quarters of the entire industry’s gross revenues.

By the mid-1930s, a handful of vertically integrated studios dominated Hollywood – MGM, Paramount, Warner Brothers, RKO and 20th Century Fox – functioning like a state-sanctioned oligopoly. They controlled who worked, what films were made and what made it into the theaters they owned. And though the studios’ holdings came and went, the rules of the industry remained stable until after World War II.

Old Hollywood loses its cartel power

In 1938, the Department of Justice and the Federal Trade Commission sued the “Big Five” studios, arguing that their vertically integrated model was anti-competitive.

After the Supreme Court decided in favor of the U.S. government in 1948 – in what became known as the Paramount Decision – the studios were forced to sell off their theater chains, which checked their ability to squeeze theaters and squeeze out independent producers.

With the studios’ cartel power weakened, independent filmmakers like Elia Kazan and John Cassavetes flourished in the 1950s, making pictures like “On the Waterfront” that the studios had rejected. Foreign films found their way to American screens, no longer constrained by block booking, a practice that had forced exhibitors to pay for a lot of mediocre films if they wanted the good ones, too.

By the 1960s, a new generation of filmmakers like Mike Nichols and Stanley Kubrick scored big with audiences hungry for something different than the escapist spectacles Hollywood was green-lighting. They took risks by hiring respected writers and unknown actors to tell stories that were truer to life. In doing so, they flipped Hollywood’s generic formulas upside down.

A decade ago, I wrote about how Netflix’s streaming model pointed to a renaissance of innovative storytelling, similar to the period after the Paramount Decision.

By streaming its indie film “Beasts of No Nation” directly to subscribers at home, Netflix posed a direct threat to Hollywood’s blockbuster model, in which studios invested heavily in a small number of big-budget films designed to earn enormous box office returns. At the time, Netflix’s 65 million global subscribers gave it the capital to produce exclusive content for its expanding markets.

Hollywood quickly closed the streaming gap, developing its own platforms and restricting access to its vast catalogs to subscribers.

Warner Bros. bought and sold

In 2018, AT&T acquired Time Warner, the biggest media conglomerate of the time, having bought the satellite TV provider DirecTV a few years earlier. It hoped to combine its 125 million-plus telecommunications customers with Time Warner’s content and create a streaming giant to compete with Netflix.

Then came the COVID-19 pandemic, and the theatrical model for film distribution collapsed.

The pressure on AT&T’s stock led the company to sell off WarnerMedia, including HBO, to Discovery in 2022 in a $43 billion deal. Armed with the HBO and Warner Bros. libraries – along with the advertising potential of CNN, TNT and Turner Sports – CEO David Zaslav was bullish about the company’s potential for growth.

Warner Bros. Discovery became the third-largest streaming platform in terms of subscribers, behind Netflix and Disney+, whose parent company had gobbled up 20th Century Fox.

But the results have been bad for audiences.

In 2023, Zaslav rolled out a bundled streaming platform called Max that combined the libraries of HBO Max and Discovery+, a move that ended up confusing consumers and the market. The company later reverted to the HBO Max name because consumers recognized the brand.

Zaslav then decided it was more cost-effective to cancel innovative projects or write off completed films as losses. Zaslav often claims his deals are “good for consumers” in that they get more content in one place. But conglomerates that defend their anti-competitive practices as signs of an efficient market serving “consumer welfare” frequently say that, even when they are making the product worse and limiting choices.

His deals have been especially bad for the television side, yielding gutted newsrooms and canceled scripted shows.

Effectively, in only three years, the Warner Bros. Discovery merger has validated nearly all the concerns that critics of “market first” policymaking have warned about for years. Once it had a dominant market share, the company started providing less and charging more.

Older man smiles and waves while wearing sunglasses and a white baseball cap reading 'Max.'
In 2023, Warner Bros. Discovery CEO David Zaslav attempted to merge HBO’s prestige programming with Discovery’s reality TV catalog under a broader super-service called Max.
Kevin Dietsch/Getty Images

Meet the new boss – same as the old boss

If it does go through, the Netflix-Warner Bros. merger will likely please Wall Street, but it will further decrease the power of creators and consumers.

Like other companies that have moved from being growth stocks to mature stocks, Netflix is under pressure to be profitable. Indeed, it has been squeezing its subscribers with higher fees and more restrictive login protocols. It’s a sign of what tech blogger Cory Doctorow describes as the logic of “enshittification,” whereby platforms that have locked in audiences and producers start to squeeze both. Buying the competition – HBO Max – will mean Netflix can charge even more.

After the Netflix deal was announced, Paramount joined forces with President Donald Trump’s son-in-law Jared Kushner, Saudi Arabia’s sovereign wealth fund and others to announce a hostile counteroffer.

Now, all bets are off. Whichever platform acquires Warner Bros. will have enormous power over the kind of stories that get sold and told.

In either case, Warner Bros. would be bought by a direct competitor. The Department of Justice, under the first Trump administration, already pushed to sunset the Paramount Decision, claiming that the distribution model had changed to such an extent that it was unlikely that Hollywood could ever reinstate its cartel. It’s hard to imagine that Trump 2.0 will forbid more media concentration, especially if the new parent company is friendly to the administration.

No matter which bidder becomes the belle of Trump’s ballroom, this merger illustrates how show business works: When dominant platforms also own the studios and their assets, they control the fate of the movie business – of actors, writers, producers and theaters.

Importantly, the concentration is taking place as artificial intelligence threatens to displace many aspects of film production. These corporate behemoths will determine if the film libraries spanning a century of Hollywood production will be used to train the machines that could replace artists and creatives. And with each prospective buyer taking on over $50 billion in bank debt to pay for the deal, the new parent of Warner Bros. will be looking everywhere for profits and opportunities to cut costs.

If history is any guide, there will be struggles ahead for consumers and competing creatives. In a media system that has veered back to following Hollywood’s yellow brick rules of the road, the new oligopolies are an awful lot like the old ones.

The Conversation

Matthew Jordan does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

ref. Whether Netflix or Paramount buys Warner Bros., entertainment oligopolies are back – bigger and more anticompetitive than ever – https://theconversation.com/whether-netflix-or-paramount-buys-warner-bros-entertainment-oligopolies-are-back-bigger-and-more-anticompetitive-than-ever-271479

Sleep problems and depression can be a vicious cycle, especially during pregnancy − here’s why it’s important to get help

Source: The Conversation – USA (3) – By Jenalee Doom, Associate Professor of Psychology, University of Denver

Restless or too little sleep can make us feel unfocused and indecisive the next day. Valerii Apetroaiei/iStock via Getty Images Plus

Imagine you got a rough night of sleep. Perhaps you went to bed too late, needed to wake up early or still felt tired when you woke up from what should have been a full night’s sleep.

For the rest of the day, you feel groggy and unfocused. Things that are usually fun or exciting don’t give you the same level of pleasure. You don’t have energy to exercise, so you avoid it. You don’t feel motivated to see friends, so you cancel plans with them. You focus on your rough day as you try to fall asleep that night and start to have anxiety about the next day. Instead of getting the restful night of sleep you need, you have another night of poor sleep. You become caught in a vicious cycle of poor sleep and depressed mood.

Sleep and mental health problems often go hand in hand. Sleep problems are a core symptom of depression. In addition, there is strong evidence that sleep problems contribute to many mental health disorders, including schizophrenia and post-traumatic stress disorder, or PTSD.

Yet our mental health also affects how well we sleep. Issues such as distressing thoughts and trouble relaxing can make it difficult for people to fall asleep or stay asleep, exacerbating sleep problems.

These issues are particularly pronounced during pregnancy, when the circular effects of inadequate sleep and mental health challenges can have harmful effects for mothers and their offspring.

We are a developmental psychologist and a doctoral student in psychology who study sleep and mental health from pregnancy through adulthood. As researchers in this field, we see the impacts of sleep and mental health problems firsthand.

Sleep and mental health problems are so entangled that it is unsurprising that they can each make the other worse. But it does make treating them more challenging.

Biology of sleep and mental health

Researchers and medical professionals know that sleep is essential for the body and brain to function properly.

Sleep is important for establishing circadian rhythms, which optimize alertness during the day and rest at night. When sunlight fades in the evening, the brain produces more of the hormone melatonin, and your core body temperature drops to promote sleep. When the brain detects sunlight, it reduces melatonin production, and body temperature increases to promote wakefulness.

Although light and dark are the most important signals to the brain about when you should be awake and when you should be sleeping, other things such as stress, disruptions in daily routines and social interactions can also throw off your circadian rhythm.

Circadian rhythms affect other important biological processes, including the body’s production of the stress hormone cortisol. Cortisol follows a daily rhythm where it is highest soon after waking in the morning and lowest in the middle of the night. Disruptions in normal sleep can lead to difficulties in daily regulation of cortisol levels, which can have negative effects on mental health and the ability to effectively manage stress.

Sleep is central to the proper functioning of the immune system, which in turn has implications for mental and physical health. Sleep disturbances have been linked to poorer immune responses against viruses and other challenges to the immune system, making it harder to stay healthy and to recover after getting sick.

Sleep disturbances also lead to greater inflammation, which is when the immune system’s natural responses become overactive. Inflammation underlies mental and physical health problems, including depression, heart disease and cancer.

Without adequate sleep, cognitive functions suffer and emotional resilience weakens.

How poor sleep leads to behavior changes

Chronic disruptions to a person’s natural circadian rhythm – such as those experienced by people who work night shifts or who switch between day and night shifts – lead to greater risk for both depression and anxiety.

Shift work is an extreme example of disrupting the natural pattern of sleeping at night. However, less severe types of sleep problems, such as not getting enough sleep or waking up feeling tired, are also bad for mental health.

Sleep disruptions make it more difficult to regulate emotions. Getting too little or poor-quality sleep makes handling everyday stressors more difficult. This is because sufficient sleep is necessary for effective problem-solving, memory and focus. The combination of impaired emotion regulation and impaired stress management is a recipe for greater mental health difficulties.

One key reason why poor sleep and mental health struggles can become so problematic and difficult to treat is that without adequate sleep, it’s challenging to muster energy for healthy activities such as exercise and maintaining social relationships.

What’s more, when decision-making is impaired by poor sleep and negative emotions, people are more likely to reach for alcohol, drugs and unhealthy foods to cope with stress. These unhealthy behaviors can, in turn, reinforce the cycle by interfering with sleep.

In a sleep study on healthy adults, researchers found that lack of sleep causes overactivity in the amygdala, a crucial area of the brain where emotional processing occurs.

Sleep and mental health problems in pregnancy

These cycles between poor sleep and mental health challenges can be especially problematic during pregnancy.

Common pregnancy symptoms include nausea, heartburn, back and joint pain, cramps, a frequent urge to pee and contractions, all of which can make it more difficult to get restful sleep.

Sadly, around 76% of pregnant women report having sleep problems at some point in their pregnancy, compared with only 33% in the general population. Relatedly, about 1 in 5 pregnant women in the U.S. struggle with mental health problems such as anxiety and depression.

Our team’s new research, published in December 2025, further establishes these links between sleep and mental health. We found that during pregnancy, mental health problems contribute to sleep problems over time and that sleep problems in turn can exacerbate mental health problems.

This cycle can also have negative effects on the fetus and on the child after birth.

Prenatal sleep problems such as short sleep, sleep apnea and restless sleep can lead to preterm births and low birth weight in newborns.

A large study in Sweden in 2021 found that pregnant women who frequently worked the night shift or quickly shifted between night and day work in early pregnancy had a three- to four-times greater risk of preterm birth. Preterm birth and low birth weight are associated with greater cardiovascular risk in both mothers and their offspring.

Prenatal maternal sleep problems can also lead to problems later in the child’s development. A review we also published in 2025 found that children of mothers who had sleep problems in pregnancy tend to have more sleep problems themselves. Our review also reported that children of mothers with prenatal sleep problems are more likely to develop obesity and have more behavioral problems in childhood.

Pregnant woman with large belly sleeping on her side with a pillow covering her face.
Poor sleep during pregnancy has serious implications for both the parent and the offspring.
Tassii/iStock via Getty Images Plus

Talking to your doctor about these concerns

In our opinion, it should be standard to screen for sleep problems at medical visits, given the potential implications of inadequate sleep for both mothers and their babies.

If you’re close to someone who is pregnant, consider asking how their sleep is and how they’re feeling. If they note ongoing sleep issues or emotional or behavior changes, you can ask if they have talked to their doctor.

They may feel overwhelmed and need support in talking to their doctor or help finding resources. The Sleep Foundation’s website has a list of sleeping tips for pregnant women as well as guidelines for when to speak with a doctor.

If you are the person experiencing these issues, you can report sleep problems to your doctor and ask for guidance for improving sleep.

If you’re experiencing difficulties with depression or anxiety, tell your doctor and ask for resources. There are mental health resources specific to pregnancy that can help. You can also find mental health professionals through Psychology Today’s find-a-therapist tool.

Healthy sleep is a necessity for improving your mental health during pregnancy and at all times of life.

The Conversation

Jenalee Doom receives funding from the National Institutes of Health.

Melissa Nevarez-Brewster receives funding from the National Institutes of Health.

ref. Sleep problems and depression can be a vicious cycle, especially during pregnancy − here’s why it’s important to get help – https://theconversation.com/sleep-problems-and-depression-can-be-a-vicious-cycle-especially-during-pregnancy-heres-why-its-important-to-get-help-264737

How a niche Catholic approach to infertility treatment became a new talking point for MAHA conservatives

Source: The Conversation – USA (3) – By Emma Kennedy, Assistant Professor of Christian Ethics, Villanova University

‘Restorative reproductive medicine’ has become a buzzword in some conservative circles, among people morally opposed to in vitro fertilization. Jose Luis Pelaez Inc/DigitalVision via Getty Images

Along the 2024 presidential campaign trail, Donald Trump pledged to make in vitro fertilization, or IVF, free – part of his party’s wider push for a new American “baby boom.”

But in October 2025, when the administration revealed its IVF proposal, many health care experts pointed out that it falls short of mandating that insurance companies cover the procedure.

Since Trump returned to the White House, it has become clear just how fraught IVF is for his base. Some conservative Christians oppose IVF because it often involves destroying extra embryos not implanted in the woman’s uterus.

According to Politico, anti-abortion groups lobbied against a requirement for employers to cover IVF. Instead, some advocated for “restorative reproductive medicine” – a term that has been around for decades but has received much more attention, especially from conservatives, in the past few months.

Proponents of restorative reproductive medicine tend to present it as an alternative to IVF: a different way of treating infertility, focused on treating underlying causes. But the approach is controversial, and some practitioners closely link their treatments to Catholic teachings.

As a scholar of religion, I study U.S. Catholics’ varied perspectives on infertility, seeking to understand how religious beliefs and practices influence physicians’ and patients’ choices. Their perspectives help provide a more nuanced understanding of Christianity’s role in the U.S. reproductive and political landscape.

Defining restorative reproductive medicine

Clinics that advertise themselves as offering restorative reproductive medicine try to diagnose underlying issues that could make conception difficult, like endometriosis. Typically, a patient and provider will closely monitor the patient’s menstrual cycle to identify potential abnormalities. Interventions include hormone therapies, medications, supplements, surgeries and lifestyle changes.

An open notebook shows rows of pink and white test strips, one for each day, with March dates written beside them.
Some approaches to treating infertility focus on analyzing the patient’s menstrual cycle.
Iana Pronicheva/iStock via Getty Images Plus

Much of the approach resembles the initial testing used to evaluate patients in mainstream reproductive endocrinology and infertility clinics. However, restorative reproductive medicine clinics do not typically offer IVF or other assisted reproductive technologies.

Depending on who you ask, proponents are not necessarily opposed to IVF; they see their treatments as another option to explore. Some clinicians, however, closely link their treatment offerings to their religious commitments and opposition to abortion.

Restorative reproductive medicine has prompted criticism from professional medical organizations. The American Society for Reproductive Medicine issued a statement in May 2025 calling it a “rebranding” of standard infertility treatment, with “ideologically driven restrictions that could limit patient care.” The American College of Obstetricians and Gynecologists issued a brief warning that it is a “nonmedical approach” that threatens to impede access to IVF.

These critics are concerned that the focus on lifestyle changes and surgery may not address patients’ difficulties conceiving, while putting them through other unsuccessful treatments.

Church teachings

Today, restorative reproductive medicine is often described as gaining steam with conservative Christians and the “Make America Healthy Again,” or MAHA, movement. Its roots, though, are decades old, and largely Catholic.

Part of the Catholic Church’s objection to IVF stems from a concern that unused embryos are often discarded and destroyed. The church’s position is that all embryos ought to be treated with the same respect afforded a person – one of the key reasons its teachings oppose abortion.

Disapproval of IVF also stems from the church’s official teachings on marriage. According to this teaching, marriage has two chief ends, which it calls “procreation and union”: Typically, procreation is understood to mean having children, while union involves physical, emotional and spiritual intimacy. In this understanding, sexual intercourse should preserve what the church calls an “inseparable connection” between these two meanings.

The Catholic Church opposes artificial contraception because its goal is to block procreation. Instead, Catholics are encouraged to use “Natural Family Planning” – tracking a woman’s cycle so that couples can choose to abstain from sex during fertile periods. Similarly, it opposes artificial insemination and IVF because, by moving fertilization out of the body and into the lab, the process separates procreation from the act of sexual intercourse.

Survey data suggests most U.S. Catholics do not agree with these official stances, nor do they follow them.

Catholic doctors who do agree with official church teachings, however, have played a key role in developing infertility treatments that align with them. One of the most influential is Dr. Thomas W. Hilgers, who co-developed a “Natural Family Planning” method called the Creighton Model. In the early 1990s, he also developed NaProTechnology, an approach that seeks to identify fertility issues using cycle tracking, and then treat them with various medical and surgical interventions.

The NaProTechnology approach could be said to fall under the umbrella of restorative reproductive medicine, but it has mostly been used by Catholic reproductive health clinics and hospitals. Catholic physicians’ networks promote it, as do parishes and dioceses.

Navigating infertility

For Catholics who share the church’s official perspective on IVF, NaProTechnology and the clinics offering it are often a welcome alternative. Several of the Catholic women I interviewed as part of my academic research had also been to mainstream fertility clinics, but they felt that those providers did not offer much apart from IVF.

By contrast, the clinics offering NaProTechnology were often cheaper, in part because they do not offer IVF. They were also easier to navigate, since clinicians shared these patients’ religious views. Many felt that the providers were able to spend more time with them, helped them learn about their bodies, and were committed to understanding underlying issues beyond infertility.

However, others found clinics offering NaProTechnology to be lacking, often because clinicians weren’t up front about its limitations, especially when it comes to male infertility. Some patients felt that clinicians weren’t willing to admit drawbacks, for fear it would encourage couples to try IVF.

A rumpled medical gown with a light-blue print sits on top of an examining table.
Infertility treatments are a confusing landscape for many women.
Catherine McQueen/Moment via Getty Images

Most Catholics dealing with infertility, however, find themselves in mainstream clinical settings that offer IVF. Women I interviewed who opted for IVF were frank in their critiques of church teachings and their skepticism of Catholic clinics. Many took issue with the underlying assumption that the people who ought to be procreating are heterosexual, married couples and that conception is usually possible without the help of IVF.

However, many of these women were also dissatisfied with the approach that mainstream clinics take. Some felt that those clinics were focused on profit – a concern shared by some scholars scrutinizing the fertility industry. Some women also felt pressured to genetically test their embryos for chromosomal abnormalities and to discard unused embryos, even after explaining to staff that destroying them would be out of step with their moral commitments.

Understanding patient experiences in either kind of clinic helps underscore the difficulties many people face navigating infertility – and the stakes of policy reform.

The Trump administration’s plan largely maintains the status quo for IVF access while making more room for alternative treatments. But it intensifies questions about how the government responds to religious beliefs about reproductive health care, especially disagreements about the moral status of embryos. For now, patients and providers will continue to navigate a fractured landscape.

The Conversation

Emma Kennedy is affiliated with the Center for Genetics and Society.

ref. How a niche Catholic approach to infertility treatment became a new talking point for MAHA conservatives – https://theconversation.com/how-a-niche-catholic-approach-to-infertility-treatment-became-a-new-talking-point-for-maha-conservatives-265461