The Trump administration is decreasing the attention federal regulators pay to pipeline leaks. But leaks from natural gas pipelines don’t just waste energy and warm the planet – they can also make the air more dangerous to breathe. That air pollution threat extends beyond the communities where the leaks happen, reaching into neighboring states, as our analysis of gas leaks and air pollution levels across the U.S. has found.
For instance, in September 2018 the Merrimack Valley pipeline explosion in Massachusetts, which released roughly 2,800 metric tons of methane, damaged or destroyed about 40 homes and killed one person. We found that event caused fine-particle air pollution concentrations in downwind areas of New Hampshire and Vermont to spike within four weeks, pushing those areas’ 2018 annual average up by 0.3 micrograms per cubic meter. That’s an increase of about 3% of the U.S. EPA’s annual health standard for PM2.5. Elevated air pollution then showed up in New York and Connecticut through the rest of 2018 and into 2019.
In our study, we examined pipeline leak data from the U.S. Pipeline and Hazardous Materials Safety Administration from 2009 to 2019, along with state-level data on fine particulate matter in the air from Columbia University’s Center for International Earth Science Information Network. We also incorporated, for each state, data on environmental regulations, per-capita energy consumption, urbanization rate and economic productivity per capita.
In simple terms, we found that in years when a state – or its neighboring states – experienced more methane leak incidents, that state’s annual average fine-particle air pollution was measurably higher than in years with fewer leaks.
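For readers curious about the mechanics, the snippet below sketches the kind of fixed-effects panel regression that underlies a finding like this. The data file, column names and exact specification are placeholders for illustration; they are not the actual data or model from our study.

```python
# Illustrative sketch only: a two-way fixed-effects panel regression relating
# state-year methane leak counts (own and neighboring states) to annual PM2.5.
# The file and column names are hypothetical, not the study's actual data.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("state_year_panel.csv")  # hypothetical panel: one row per state-year

model = smf.ols(
    "pm25_annual ~ leaks_in_state + leaks_in_neighbors"
    " + energy_per_capita + urbanization_rate + gdp_per_capita"
    " + C(state) + C(year)",   # state and year fixed effects
    data=df,
).fit(cov_type="cluster", cov_kwds={"groups": df["state"]})  # cluster errors by state

print(model.params[["leaks_in_state", "leaks_in_neighbors"]])
```

In a specification like this, a positive coefficient on the leak variables corresponds to the pattern we describe: more leak incidents in or around a state line up with higher annual fine-particle pollution in that state.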
A 2018 natural gas leak and explosion in Massachusetts destroyed and damaged homes, killed one person and increased air pollution over a wide area. John Tlumacki/The Boston Globe via Getty Images
Methane’s role in fine-particle formation
Natural gas is primarily made of methane, a powerful greenhouse gas. But methane also helps set off chemical reactions in the air that lead to the formation of tiny particles known as PM2.5 because they are smaller than 2.5 micrometers (one ten-thousandth of an inch). They can travel deep into the lungs and cause health problems, such as increasing a person’s risk of heart disease and asthma.
So, when natural gas leaks, energy is wasted, the planet warms and air quality drops. These leaks can be massive, like the 2015 Aliso Canyon disaster in California, which sent around 100,000 metric tons of methane into the atmosphere.
But smaller leaks are also common, and they add up, too: Because the federal database systematically undercounts minor releases, we estimate that undocumented small leaks in the U.S. may total on the order of 15,000 metric tons of methane per year – enough to raise background PM2.5 by roughly 0.1 micrograms per cubic meter in downwind areas. Even this modest increase can contribute to health risks: There is no safe threshold for PM2.5 exposure, with each rise of 1 microgram per cubic meter linked to heightened mortality from cardiovascular and respiratory diseases.
The most direct way to address this problem is to reduce the number and size of methane leaks from pipelines. That could mean building pipelines with designs, materials and processes that are less likely to leak. Regulations could create incentives to do so or require companies to invest in technology to detect methane leaks quickly, as well as encourage rapid responses when a leak is identified, even if it appears relatively small at first.
Reducing pipeline leaks would not just conserve the energy that is contained in the methane and reduce the global warming that results from increasing amounts of methane in the atmosphere. Doing so would also improve air quality in communities that are home to pipelines and in surrounding areas and states.
The authors do not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and have disclosed no relevant affiliations beyond their academic appointment.
Source: The Conversation – USA (2) – By Camille Banger, Assistant Professor in Business Information Technology, University of Wisconsin-Stout
Students pick up on AI-infused apps quickly, but generative AI appears to require more reflection on how to use technology. Hill Street Studios via Getty Images
The tech world says generative artificial intelligence is essential for the future of work and learning. But as an educator, I still wonder: Is it really worth bringing it into the classroom? Will these tools truly help students learn, or create new challenges we haven’t yet faced?
Like many other people in higher education, I was skeptical but knew I couldn’t ignore it. So, instead of waiting for all the answers, I decided to dive in and discover what preparing students for an AI-powered world really means beyond the hype. Last semester, I developed a business technology class where the latest generative AI tools were woven into the curriculum.
What I found is that AI productivity products have a learning curve, much like other applications that students, and ultimately white-collar workers, use in knowledge work. But I needed to adjust how I taught the class to emphasize critical thinking, reflection on how these tools are being used and checks against the errors they produce.
The project
It’s no secret that generative AI is changing how people work, learn and teach. According to the 2025 McKinsey Global Survey on AI, 78% of respondents said their organizations use AI in at least one business function, and many are actively reskilling or upskilling their workforces to meet the demands of this shift.
As program director of the Business Information Technology bachelor’s degree program at the University of Wisconsin-Stout, Wisconsin’s polytechnic university, I spend a lot of time thinking about how to prepare students for the workplace. I’m also an AI enthusiast, but a skeptical one. I believe in the power of these tools, but I also know they raise questions about ethics, responsibility and readiness.
So, I asked myself: How can I make sure our students are ready to use AI and understand it?
In spring 2025, University of Wisconsin-Stout launched a pilot for a small group of faculty and staff to explore Microsoft 365 Copilot for business. Since it works alongside tools such as Word, Excel, Outlook, PowerPoint, OneDrive and Teams, which are products our students already use, I saw an opportunity to bring these latest AI features to them as well.
To do that, I built an exploratory project into our senior capstone course. Students were asked to use Copilot for Business throughout the semester, keep a journal reflecting on their experience and develop practical use cases for how AI could support them both as students and future professionals. I didn’t assign specific tasks. Instead, I encouraged them to explore freely.
My goal wasn’t to turn them into AI experts overnight. I wanted them to build comfort, fluency and critical awareness about how and when to use AI tools in real-world contexts.
What my students and I learned
What stood out to me the most was how quickly students moved from curiosity to confidence.
Many of them had already experimented with tools such as ChatGPT and Google Gemini, but Copilot for Business was a little different. It worked with their own documents, emails, meeting notes and class materials, which made the experience more personal and immediately relevant.
In their journals, students described how they used Copilot to summarize Teams video meetings, draft PowerPoint slides and write more polished emails. One student said it saved them time by generating summaries they could review after a meeting instead of taking notes during the call or rewatching a recording. Another used it to check their assignment against the rubric – a scoring tool that outlines the criteria and performance levels for assessing student work – to help them feel more confident before submitting their work.
College students will likely be asked to use AI features in business productivity applications once they enter the workforce. What’s the best way to teach them how to effectively use them? Denise Jans on Unsplash
Several students admitted they struggled at first to write effective prompts – the typed requests that guide the AI to generate content – and had to experiment to get the results they wanted. A few reflected on instances where Copilot, like other generative AI tools, produced inaccurate or made-up information, or hallucinations, and said they learned to double-check its responses. This helped them understand the importance of verifying AI-generated content, especially in academic and professional settings.
Some students also said they had to remind themselves to use Copilot instead of falling back on other tools they were more familiar with. In some cases, they simply forgot Copilot was available. That feedback showed me how important it is to give students time and space to build new habits around emerging technologies.
What’s next
While Copilot for Business worked well for this project, its higher cost compared with previous desktop productivity apps may limit its use in future classes and raises ethical questions about access.
That said, I plan to continue expanding the use of generative AI tools across my courses. Instead of treating AI as a one-off topic, I want it to become part of the flow of everyday academic work. My goal is to help students build AI literacy and use these tools responsibly and thoughtfully, as a support for their learning, not a replacement for it.
Historically, software programs enabled people to produce content, such as text documents, slides or the like, whereas generative AI tools produce the “work” based on user prompts. This shift requires a higher level of awareness about what students are learning and how they’re engaging with the materials and the AI tool.
This pilot project reminded me that integrating AI into the classroom isn’t just about giving students access to new tools. It’s about creating space to explore, experiment, reflect and think critically about how these tools fit into their personal and professional lives and, most importantly, how they work.
As an educator, I’m also thinking about the deeper questions this technology raises. How do we ensure that students continue developing original thoughts and critical thinking when AI can easily generate ideas or content? How can we preserve meaningful learning while still taking advantage of the efficiency these tools offer? And what kinds of assignments can help students use AI effectively while still demonstrating their own thinking?
These questions aren’t easy, but they are important. Higher education has an important role to play in helping students use AI and understand its impact and their responsibility in shaping how it’s used.
Striking the right balance between fostering original thought and critical thinking with AI can be tricky. One way I’ve approached this is encouraging students to first create their content on their own, then use AI for review. This way, they maintain ownership of their work and see AI as a helpful tool rather than a shortcut. It’s all about knowing when to leverage AI to refine or enhance their ideas.
One piece of advice I received that really stuck with me was this: Start small, be transparent and talk openly with your students. That’s what I did, and it’s what I’ll continue doing as I enter this next chapter of teaching and learning in the age of AI.
Camille Banger does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.
Source: The Conversation – USA (2) – By Jennifer Duyne Barenstein, Senior Lecturer of Social Anthropology, Swiss Federal Institute of Technology Zurich
Located in the Peñarol neighborhood of Montevideo, COVIMT 1 was the city’s first mutual aid housing cooperative. It was founded by textile workers, who completed construction of the complex in 1972. Bé Estudio, CC BY-SA
More than 1.8 billion people lack access to adequate and affordable housing. Yet too few countries have taken meaningful steps to ensure dignified housing for their most vulnerable citizens.
We research how cooperative housing can serve as one solution to the affordable housing crisis. There are a variety of cooperative housing models. But they generally involve residents collectively owning and managing their apartment complexes, sharing responsibilities, costs and decision-making through a democratic process.
Some countries, such as El Salvador and Colombia, have struggled to integrate housing cooperatives into their preexisting housing policies. In fact, although Latin America has a long-standing tradition of community-driven and mutual aid housing, housing cooperatives haven’t taken root in many places, largely due to weak political and institutional backing.
Uruguay is an exception.
With a population of just 3.4 million, the small Latin American nation has a robust network of housing cooperatives, which give citizens across a range of income levels access to permanent, affordable housing.
An experiment becomes law
Housing cooperatives in Uruguay emerged in the 1960s during a time of deep economic turmoil.
The first few pilot projects delivered outstanding results. Financed through a mix of government funds, loans from the Inter-American Development Bank and member contributions, they were more cost-effective, faster to build and higher in quality than conventional housing.
These early successes played a key role in the passage of Uruguay’s National Housing Law in 1968. This law formally recognized housing cooperatives and introduced a legal framework that supported different models. The most common models to emerge roughly translate to “savings cooperatives” and “mutual aid cooperatives.”
In the savings model, members pool their savings to contribute around 15% of the capital investment. This gives them access to a government-subsidized mortgage to finance the construction. The cooperative then determines how repayment responsibilities are distributed among its members. Typically, members purchase “social shares” in the cooperative, equivalent to the cost of the assigned housing unit. If a member decides to leave the cooperative, their social shares are reimbursed. These shares are also inheritable, allowing them to be passed on to heirs.
In contrast, the mutual aid model enables households without savings to participate by contributing 21 hours per week toward construction efforts. Tasks are assigned to individuals according to their abilities. They can range from manual labor to administrative tasks, such as the ordering of construction materials.
Despite their differences, both models share a fundamental principle: The land and housing units are held collectively and are permanently removed from the private market.
Typically, once cooperatives are established, each household must contribute a monthly fee that covers the repayment of the state’s loan and maintenance costs. In exchange, members have an unlimited and inheritable contract of “use and enjoyment” of a quality apartment. If a member decides to leave, they are partially reimbursed for the contributions they’ve made over time, typically with a 10% deduction that the cooperative keeps.
This ensures that cooperative housing provides long-term security and remains affordable, especially for those at the lowest rungs of the income ladder.
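As a rough illustration of the membership economics described above, the sketch below works through a hypothetical savings-model household’s upfront contribution and exit reimbursement. The unit cost, monthly fee and deduction rate are illustrative placeholders, not figures from any actual cooperative.

```python
# Minimal sketch of the savings-model economics described above.
# All numbers are hypothetical placeholders for illustration.
UNIT_COST = 80_000          # assumed cost of one housing unit (USD)
MEMBER_SHARE = 0.15         # members pool roughly 15% of the capital investment
EXIT_DEDUCTION = 0.10       # cooperative typically keeps about 10% when a member leaves

def initial_contribution(unit_cost: float) -> float:
    """Savings a household brings in before the subsidized mortgage covers the rest."""
    return unit_cost * MEMBER_SHARE

def exit_reimbursement(total_contributed: float) -> float:
    """Partial reimbursement of a departing member's accumulated contributions."""
    return total_contributed * (1 - EXIT_DEDUCTION)

upfront = initial_contribution(UNIT_COST)             # 12,000 upfront
monthly_fee = 250                                      # assumed loan repayment + maintenance
after_10_years = upfront + monthly_fee * 12 * 10       # 42,000 contributed in total
print(upfront, exit_reimbursement(after_10_years))     # 12000.0  37800.0
```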
The growth of Uruguay’s cooperative housing sector has been possible thanks to state support, federations of cooperatives and nonprofit groups.
The state recognized that the success of housing cooperatives depended on sustained public support. The National Housing Law defined the rights and responsibilities of cooperatives. It also outlined the state’s obligations: overseeing operations, setting criteria for financial assistance and providing access to land.
One key federation is FUCVAM, which represents the country’s mutual aid housing cooperatives. Beyond organizing and advocating for the right to housing – and human rights more broadly – FUCVAM offers its member cooperatives a wide range of support services, including training to strengthen cooperative management, legal counseling and conflict mediation.
Finally, the Technical Assistance Institutes, which were also recognized by the National Housing Law, are a vital pillar of this model. These are independent, nonprofit organizations that advise cooperatives.
Their role is crucial: The construction of large-scale housing projects is complicated. The vast majority of citizens have no prior experience in construction or project management. The success of Uruguay’s cooperative model would be unthinkable without their support.
From the outskirts to the city center
Uruguay’s housing cooperatives have not only expanded, but have also evolved in response to changing needs and challenges.
In their early years, most cooperatives built low-density housing on the outskirts of cities. This approach was largely influenced by the ideals of the Garden City movement, a planning philosophy of the late 19th century that prioritized low-density housing and a balance between development and green spaces. In Uruguay, there was also a cultural preference for single-family homes. And land was more expensive in city centers.
These early cooperatives, however, contributed to urban sprawl, which has a number of drawbacks. Infrastructure has to be built out. It’s harder to reach jobs and schools. There’s more traffic. And single-family homes aren’t an efficient use of land.
Meanwhile, in the 1970s Montevideo’s historic city center started experiencing abandonment and decay. During this period, the country’s shifting socioeconomic landscape created a set of new challenges. More people relied on irregular incomes from informal work, while more single women became heads of households.
In response, housing cooperatives have shown a remarkable ability to adapt.
For women, by women
As urban sprawl pushed development outward, Montevideo’s historic center, Ciudad Vieja, was hemorrhaging residents. Its historic buildings were falling apart.
Seeking to revitalize the area without displacing its remaining low-income residents, the city saw housing cooperatives as a solution.
This spurred the creation of 13 mutual aid cooperatives in Ciudad Vieja, which now account for approximately 6% of all housing units in the area.
One of the pioneers of this effort was Mujeres Jefas de Familia, which translates to Women Heads of Household. Known by the acronym MUJEFA, it was founded in 1995 by low-income, single mothers. MUJEFA introduced a new approach to cooperative housing: homes designed, built and governed with the unique needs of women in mind.
Architect Charna Furman spearheaded the initiative. She wanted to overcome the structural inequalities that prevent women from finding secure housing: financial dependence on men, being primary caregivers, and the absence of housing policies that account for single women’s limited access to economic resources.
Remaining in Ciudad Vieja was important to members of MUJEFA. Its central location allowed them to be close to their jobs, their kids’ schools, health clinics and a close-knit community of friends and family.
However, the project faced major hurdles. The crumbling structure the group acquired in 1991 – an abandoned, heritage-listed building – needed to be transformed into 12 safe, functional apartments.
The cooperative model had to adapt. Municipal authorities temporarily relaxed certain regulations to allow older buildings to be rehabbed as cooperatives. There was also the challenge of organizing vulnerable people – often long-time residents at risk of eviction, who were employed as domestic workers or street vendors – into groups that could actively participate in the renovation process. And they had to be taught how to retrofit an older building.
Today, 12 women with their children live in the MUJEFA cooperative. It’s a compelling example of how cooperative housing can go beyond simply putting a roof over families’ heads. Instead, it can be a vehicle for social transformation. Women traditionally excluded from urban planning were able to design and construct their own homes, creating a secure future for themselves and their children.
Building up, not out
COVIVEMA 5, completed in 2015, was the first high-rise, mutual aid cooperative in a central Montevideo neighborhood. Home to around 300 residents, it’s made up of 55 units distributed across two buildings.
Members participated in the building process with guidance from the Centro Cooperativista Uruguayo, one of the oldest and most respected Technical Assistance Institutes. Architects had to adapt their designs to make it easier for regular people with little experience in construction to complete a high-rise building. Cooperative members received specialized training in vertical construction and safety protocols. While members contributed to the construction, skilled labor was brought in when necessary.
Members of the cooperative also designed and built Plaza Luisa Cuesta, a public square that created open space in an otherwise dense neighborhood for residents to gather and socialize.
Housing cooperatives are neither public nor private. They might be thought of as an efficient and effective “third way” to provide housing, one that gives residents a stake in their homes and provides long-term security. But their success depends upon institutional, technical and financial support.
Jennifer Duyne Barenstein receives funding from the Swiss National Science Foundation. She is affiliated with the Centre for Research on Architecture, Society and the Built Environment, Department of Architecture, ETH Zurich.
Daniela Sanjinés receives funding from the Swiss National Science Foundation. She is affiliated with the Centre for Research on Architecture, Society and the Built Environment, Department of Architecture, ETH Zurich.
Source: The Conversation – USA – By Paul M. Collins Jr., Professor of Legal Studies and Political Science, UMass Amherst
Emil Bove, Donald Trump’s nominee to serve as a federal appeals judge for the 3rd Circuit, is sworn in during a confirmation hearing in Washington, D.C., on June 25, 2025. Bill Clark/CQ-Roll Call, Inc, via Getty Images
On June 24, 2025, Erez Reuveni, a former Department of Justice attorney who worked with Bove, released an extensive, 27-page whistleblower report. Reuveni claimed that Bove, as the Trump administration’s acting deputy attorney general, said “that it might become necessary to tell a court ‘fuck you’” and ignore court orders related to the administration’s immigration policies. Bove’s acting role ended on March 6 when he resumed his current position of principal associate deputy attorney general.
When asked about this statement at his June 25 Senate confirmation hearing, Bove said, “I don’t recall.”
And on July 15, 80 former federal and state judges signed a letter opposing Bove’s nomination. The letter argued that “Mr. Bove’s egregious record of mistreating law enforcement officers, abusing power, and disregarding the law itself disqualifies him for this position.”
A day later, more than 900 former Department of Justice attorneys submitted their own letter opposing Bove’s confirmation. The attorneys argued that “Few actions could undermine the rule of law more than a senior executive branch official flouting another branch’s authority. But that is exactly what Mr. Bove allegedly did through his involvement in DOJ’s defiance of court orders.”
On July 17, Democrats walked out of the Senate Judiciary Committee vote, in protest of the refusal by Chairman Chuck Grassley, a Republican from Iowa, to allow further investigation and debate on the nomination. Republicans on the committee then unanimously voted to move the nomination forward for a full Senate vote.
As a scholar of the courts, I know that most federal court appointments are not as controversial as Bove’s nomination. But highly contentious nominations do arise from time to time.
Here’s how three controversial nominations turned out – and how Bove’s nomination is different in a crucial way.
Robert Bork testifies before the Senate Judiciary Committee for his confirmation as associate justice of the Supreme Court in September 1987. Mark Reinstein/Corbis via Getty Images
Robert Bork
Bork is the only federal court nominee whose name became a verb.
“Borking” is “to attack or defeat (a nominee or candidate for public office) unfairly through an organized campaign of harsh public criticism or vilification,” according to Merriam-Webster.
In opposing the Bork nomination, Sen. Ted Kennedy of Massachusetts took the Senate floor and gave a fiery speech: “Robert Bork’s America is a land in which women would be forced into back-alley abortions, blacks would sit at segregated lunch counters, rogue police could break down citizens’ doors in midnight raids, schoolchildren could not be taught about evolution, writers and artists could be censored at the whim of government, and the doors of the federal courts would be shut on the fingers of millions of citizens for whom the judiciary is often the only protector of the individual rights that are the heart of our democracy.”
Ultimately, Bork’s nomination failed by a 58-42 vote in the Senate, with 52 Democrats and six Republicans rejecting the nomination.
Ronnie White

In 1997, President Bill Clinton nominated Ronnie White, the first African American judge on the Missouri Supreme Court, to a federal district court judgeship in Missouri. Republican Sen. John Ashcroft, from White’s home state of Missouri, led the fight against the nomination. Ashcroft alleged that White’s confirmation would “push the law in a pro-criminal direction.” Ashcroft based this claim on White’s comparatively liberal record in death penalty cases as a judge on the Missouri Supreme Court.
However, there was limited evidence to support this assertion. This led some to believe that Ashcroft’s attack on the nomination was motivated by stereotypes that African Americans, like White, are soft on crime.
Even Clinton implied that race may be a factor in the attacks on White: “By voting down the first African-American judge to serve on the Missouri Supreme Court, the Republicans have deprived both the judiciary and the people of Missouri of an excellent, fair, and impartial Federal judge.”
White’s nomination was defeated in the Senate by a 54-45 party-line vote. In 2014, White was renominated to the same judgeship by President Barack Obama and confirmed by a largely party-line 53-44 vote, garnering the support of a single Republican, Susan Collins of Maine.
Ronnie White, a former justice for the Missouri Supreme Court, testifies during an attorney general confirmation hearing in Washington in January 2001. Alex Wong/Newsmakers
Miguel Estrada

In 2001, President George W. Bush nominated Miguel Estrada to the U.S. Court of Appeals for the District of Columbia Circuit. Estrada, who had earned a unanimous “well-qualified” rating from the American Bar Association, faced deep opposition from Senate Democrats, who believed he was a conservative ideologue. They also worried that, if confirmed, he would later be appointed to the Supreme Court.
Miguel Estrada, President George W. Bush’s nominee to the U.S. Court of Appeals for the District of Columbia, is sworn in during his hearing before the Senate Judiciary Committee on Sept. 26, 2002. Scott J. Ferrell/Congressional Quarterly/Getty Images
However, unlike Bork – who had an extensive paper trail as an academic and judge – Estrada’s written record was very thin.
Democrats sought to use his confirmation hearing to probe his beliefs. But they didn’t get very far, as Estrada dodged many of the senators’ questions, including ones about Supreme Court cases he disagreed with and judges he admired.
Democrats were particularly troubled by allegations that Estrada, when he was screening candidates for Justice Anthony Kennedy, disqualified applicants for Supreme Court clerkships based on their ideology.
According to one attorney: “Miguel told me his job was to prevent liberal clerks from being hired. He told me he was screening out liberals because a liberal clerk had influenced Justice Kennedy to side with the majority and write a pro-gay-rights decision in a case known as Romer v. Evans, which struck down a Colorado statute that discriminated against gays and lesbians.”
When asked about this at his confirmation hearing, Estrada initially denied it but later backpedaled. Estrada said, “There is a set of circumstances in which I would consider ideology if I think that the person has some extreme view that he would not be willing to set aside in service to Justice Kennedy.”
Unlike the Bork nomination, Democrats didn’t have the numbers to vote Estrada’s nomination down. Instead, they successfully filibustered the nomination, knowing that Republicans couldn’t muster the required 60 votes to end the filibuster. This marked the first time in Senate history that a court of appeals nomination was filibustered. Estrada would never serve as a judge.
Bove stands out
As the examples of Bork, Estrada and White make clear, contentious nominations to the federal courts often involve ideological concerns.
This is also true for Bove, who is opposed in part because of the perception that he is a conservative ideologue.

But the objections to Bove go beyond ideology. As the whistleblower report and the letters from former judges and Justice Department attorneys make clear, much of the opposition centers on allegations that he was prepared to defy court orders – a challenge to the authority of the very branch of government he has been nominated to join.

This makes Bove stand out among contentious federal court nominations.
Paul M. Collins Jr. does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.
Source: The Conversation – USA – By Stephanie A. (Sam) Martin, Frank and Bethine Church Endowed Chair of Public Affairs, Boise State University
Congress’ cuts to public broadcasting will diminish the range and volume of the free press and the independent reporting it provides. MicroStockHub-iStock/Getty Images Plus
Champions of the almost entirely party-line vote in the U.S. Senate to erase US$1.1 billion in already approved funds for the Corporation for Public Broadcasting called their action a refusal to subsidize liberal media.
“Public broadcasting has long been overtaken by partisan activists,” said U.S. Sen. Ted Cruz of Texas, insisting there is no need for government to fund what he regards as biased media. “If you want to watch the left-wing propaganda, turn on MSNBC,” Cruz said.
Accusing the media of liberal bias has been a consistent conservative complaint since the civil rights era, when white Southerners insisted news outlets were slanting their stories against segregation. During his presidential campaign in 1964, U.S. Sen. Barry Goldwater of Arizona complained that the media was against him, an accusation that has been repeated by every Republican presidential candidate since.
But those charges of bias rarely survive empirical scrutiny.
Press independence in the United States – enshrined in the press freedom clause of the First Amendment – gives journalists the ability to hold government accountable, expose abuses of power and thereby support democracy.
GOP Sen. Ted Cruz speaks to reporters as Senate Republicans vote on President Donald Trump’s request to cancel about $9 billion in foreign aid and public broadcasting spending on July 16, 2025. AP Photo/J. Scott Applewhite
Trusting independence
Ad Fontes Media, a self-described “public benefit company” whose mission is to rate media for credibility and bias, has placed the reporting of “PBS NewsHour” under 10 points left of the ideological center and labels it as both “reliable” and based in “analysis/fact.” “Fox and Friends,” the popular morning show on Fox News, is by contrast nearly 20 points to the right. The scale starts at zero and runs 42 points to the left to measure progressive bias and 42 points to the right to measure conservative bias. Ratings are provided by three-person panels comprising left-, right- and center-leaning reviewers.
A similar 2016 study published in Public Opinion Quarterly said that media are more similar than dissimilar and, excepting political scandals, “major news organizations present topics in a largely nonpartisan manner, casting neither Democrats nor Republicans in a particularly favorable or unfavorable light.”
Surveys show public media’s audiences do not see it as biased. A national poll of likely voters released July 14, 2025, found that 53% of respondents trust public media to report news “fully, accurately and fairly,” while only 35% extend that trust to “the media in general.” A majority also opposed eliminating federal support.
Contrast these numbers with attitudes about public broadcasters such as MTVA in Hungary or TVP in Poland, where the state controls most content. Protests in Budapest in October 2024 drew thousands demanding an end to “propaganda.” Oxford’s Reuters Institute for the Study of Journalism reports that TVP is the least trusted news outlet in Poland.
While critics sometimes conflate American public broadcasting with state-run outlets, the structures are very different.
Safeguards for editorial freedom
In state-run media systems, a government agency hires editors, dictates coverage and provides full funding from the treasury. Public officials determine – or make up – what is newsworthy. Individual media operations survive only so long as the party in power is happy.
Public broadcasting in the U.S. works in almost exactly the opposite way: The Corporation for Public Broadcasting is a private nonprofit with a statutory “firewall” that forbids political interference.
Stations survive by combining this modest federal grant money with listener donations, underwriting and foundation support. That creates a diversified revenue mix that further safeguards their editorial freedom.
As a public-private partnership, individual communities mostly own the public broadcasting system and its affiliate stations. Congress allocates funds, while community nonprofits, university boards, state authorities or other local license holders actually own and run the stations. Individual monthly donors are often called “members” and sometimes have voting rights in station-governance matters. Membership contributions make up the largest share of revenue for most stations, providing another safeguard for editorial independence.
A host and guest in July 2024 sit inside a recording studio at KMXT, the public radio station on Kodiak Island in Alaska. Nathaniel Herz/Northern Journal
A 2021 report from the European Broadcasting Union links public broadcasting with higher voter turnout, better factual knowledge and lower susceptibility to extremist rhetoric.
Experts warn that even small cuts will exacerbate an already pernicious problem with political disinformation in the U.S., as citizens lose access to free information that fosters media literacy and encourages trust across demographics.
In many ways, public media remains the last broadly shared civic commons. It is both commercial-free and independently edited.
Another study, by the University of Pennsylvania’s Annenberg School in 2022, affirmed that “countries with independent and well-funded public broadcasting systems also consistently have stronger democracies.”
Public media’s attention to nuance provides a critical counterweight to the fragmented, often hyperpartisan news bubbles that pervade cable news and social media. This more balanced treatment helps to ameliorate political polarization and misinformation.
In all, public media’s unique structure and mission make democracy healthier in the U.S. and across the world. Public media prioritizes education and civic enlightenment. It gives citizens important tools for navigating complex issues to make informed decisions – whether those decisions are about whom to vote for or about public policy itself. Maintaining and strengthening public broadcasting preserves media diversity and advances important principles of self-government.
Congress’ cuts to public broadcasting will diminish the range and volume of the free press and the independent reporting it provides. Ronald Reagan once described a free press as vital for the United States to succeed in its “noble experiment in self-government.” From that perspective, more independent reporting – not less – will prove the best remedy for any worry about partisan spin.
Stephanie A. (Sam) Martin does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.
Why is heart cancer so rare? – Jackson, age 12, Davis, California
You probably know someone who is affected by cancer. This disease results when cells divide uncontrollably and can make a person sick, sometimes very seriously.
I’m a biologist who specializes in the blood vessels of the cardiovascular system. A big part of my work focuses on how cells interact with their environment to regulate the function of tissues and organs. Disease can develop when things go wrong.
Turns out, heart cells have unique features that make them super resistant to cancer.
How cancer starts
Cells produce more cells to grow, replace older or worn-out cells or to repair damaged tissues. This process is called cell division. Each type of cell in the body divides at different rates based on multiple factors, including what their function is and a person’s age.
For example, the cells of a growing human embryo divide extremely fast, undergoing four divisions in three days. The cells that make up the skin, nails and hair regularly replenish across your lifespan. Bone cells divide at a rate that will give you an entirely new skeleton approximately every 10 years.
Whether and how often a cell divides is tightly regulated by a series of molecular checkpoints. During cell division, genes within DNA are duplicated and evenly distributed into two daughter cells. Damage to these genes caused by exposure to harmful chemicals, ultraviolet light or radiation can result in mutations that cause disease. Mutations can just happen randomly, too. When there are mutations on the genes regulating cell division, cancer can develop.
Cells move through a series of checkpoints before division. OpenStax, CC BY-SA
Heart muscle cells are different from most other cells in the body: After early childhood, they divide only rarely. This low rate of cell division in the adult heart likely serves as its primary defense against cancer. The less often a cell divides, the fewer opportunities there are for mistakes during DNA replication.
The heart’s location in the body gives it more protection from certain cancer-causing factors. OpenStax, CC BY-SA
The heart is also less directly exposed to cancer-causing factors, such as UV light on the skin or inhaled substances in the lung, due to its protected location in the chest.
Unfortunately, the heart’s low rate of cell division has some downsides, such as a reduced ability to repair and replace cells damaged by disease, injury or aging.
Why heart cancer still happens
Even with the heart’s resistance to cancer, tumors may still form.
When cancer is found in the heart, it’s often the result of cancer cells migrating from another part of the body to the heart. This process is called metastasis. Certain types of skin cancers or cancers in the chest are more likely to spread to the heart, though this is still rare.
When they do happen, heart tumors can be quite serious and more aggressive than other cancers. A study analyzing more than 100,000 heart cancer cases in the United States found that patients who underwent surgery and chemotherapy to treat their heart cancer survived longer than those who did not.
Successful cancer care spans multiple areas of medicine. These include palliative care, which focuses on relieving pain and addressing symptoms, and integrative medicine, which considers the mind-body-spirit connection.
Heart cancer holds clues to heart regeneration
Understanding how heart cells divide and what causes that process to change offers clues about disease and shapes ideas for new treatments.
For example, research into how heart cells divide helps scientists better understand why the heart doesn’t heal well after a heart attack. Researchers found that although failing hearts have more dividing cells than healthy hearts, they need help to recover fully.
New technologies, such as the ability to reprogram blood cells into heart cells, have allowed researchers to develop new heart disease models to study and one day achieve heart regeneration. This opens doors for new treatments for heart diseases, including cancer.
Understanding why cancer doesn’t happen is just as important for developing new and better treatments as knowing why it does. The answers to both questions lie truly at the heart.
Hello, curious kids! Do you have a question you’d like an expert to answer? Ask an adult to send your question to CuriousKidsUS@theconversation.com. Please tell us your name, age and the city where you live.
And since curiosity has no age limit – adults, let us know what you’re wondering, too. We won’t be able to answer every question, but we will do our best.
Julie Phillippi receives funding from the National Heart, Lung, and Blood Institute.
“India is on the Moon,” S. Somanath, chairman of the Indian Space Research Organization, announced in August 2023. The announcement meant India had joined the short list of countries to have visited the Moon, and the applause and shouts of joy that followed signified that this achievement wasn’t just a scientific one, but a cultural one.
India’s successful lunar landing prompted celebrations across the country, like this one in Mumbai. AP Photo/Rajanish Kakade
With more countries joining the evolving space economy, many of our colleagues in space strategy, policy, ethics and law have celebrated the democratization of space: the hope that space is now more accessible for diverse participants.
Major players like the U.S., the European Union and China may once have dominated space and seen it as a place to try out new commercial and military ventures. Emerging new players in space, like other countries, commercial interests and nongovernmental organizations, may have other goals and rationales. Unexpected new initiatives from these newcomers could shift perceptions of space from something to dominate and possess to something more inclusive, equitable and democratic.
We address these emerging and historical tensions in a paper published in May 2025 in the journal Nature, in which we describe the difficulties and importance of including nontraditional actors and Indigenous peoples in the space industry.
Continuing inequalities among space players
Not all countries’ space agencies are equal. Newer agencies often don’t have the same resources behind them that large, established players do.
The U.S. and Chinese programs receive much more funding than those of any other country. Because they most frequently send up satellites and propose new ideas, they are in a position to establish conventions for satellite systems, landing sites and resource extraction that everyone else may have to follow.
Sometimes, countries have operated on the assumption that owning a satellite would give them the appearance of soft or hard geopolitical power as a space nation – and, ultimately, greater relevance.
Small satellites, called CubeSats, are becoming relatively affordable and easy to develop, allowing more players, from countries and companies to universities and student groups, to have a satellite in space. NASA/Butch Wilmore, CC BY-NC
In reality, student groups of today can develop small satellites, called CubeSats, autonomously, and recent scholarship has concluded that even successful space missions may negatively affect the international relationships between some countries and their partners. The respect a country expects to receive may not materialize, and the costs to keep up can outstrip gains in potential prestige.
Environmental protection and Indigenous perspectives
Usually, building the infrastructure necessary to test and launch rockets requires a remote area with established roads. In many cases, companies and space agencies have placed these facilities on lands where Indigenous peoples have strong claims, which can lead to land disputes, like in western Australia.
Many of these sites have already been altered by past mining and resource extraction, and many have been ground zero for tensions with Indigenous peoples over land use, making disputes rife in these contested spaces.
Because of these tensions around land use, it is important to include Indigenous claims and perspectives. Doing so can help make sure that the goal of protecting the environments of outer space and Earth are not cast aside while building space infrastructure here on Earth.
Some efforts are driving this more inclusive approach to engagement in space, including initiatives like “Dark and Quiet Skies,” a movement that works to ensure that people can stargaze and engage with the night sky without interference from satellite light pollution and radio noise. This movement and other inclusive approaches operate on the principle of reciprocity: that more players getting involved with space can benefit all.
Researchers have recognized similar dynamics within the larger space industry. Some scholars have come to the conclusion that even though the space industry is “pay to play,” commitments to reciprocity can help ensure that players in space exploration who may not have the financial or infrastructural means to support individual efforts can still access broader structures of support.
The downside of more players entering space is that this expansion can make protecting the environment – both on Earth and beyond – even harder.
The more players there are, at both private and international levels, the more difficult sustainable space exploration could become. Even with good will and the best of intentions, it would be difficult to enforce uniform standards for the exploration and use of space resources that would protect the lunar surface, Mars and beyond.
It may also grow harder to police the launch of satellites and dedicated constellations. Limiting the number of satellites could prevent space junk, protect the satellites already in orbit and allow everyone to have a clear view of the night sky. However, this would have to compete with efforts to expand internet access to all.
The amount of space junk in orbit has increased dramatically since the 1960s.
What is space exploration for?
Before tackling these issues, we find it useful to think about the larger goal of space exploration, and what the different approaches are. One approach would be the fast and inclusive democratization of space – making it easier for more players to join in. Another would be a more conservative and slower “big player” approach, which would restrict who can go to space.
The conservative approach is liable to leave developing nations and Indigenous peoples firmly on the outside of a key process shaping humanity’s shared future.
But a faster and more inclusive approach to space would not be easy to manage. With more serious players involved, it would be harder to come to an agreement about regulations, as well as about the larger goals for human expansion into space.
Narratives around emerging technologies, such as those required for space exploration, can change over time, as people begin to see them in action.
Technology that we take for granted today was once viewed as futuristic or fantastical, and sometimes with suspicion. For example, at the end of the 1940s, George Orwell imagined a world in which totalitarian systems used tele-screens and videoconferencing to control the masses.
Earlier in the same decade, Thomas J. Watson, then president of IBM, notoriously predicted that there would be a global market for about five computers. We as humans often fear or mistrust future technologies.
However, not all technological shifts are detrimental, and some technological changes can have clear benefits. In the future, robots may perform tasks too dangerous, too difficult or too dull and repetitive for humans. Biotechnology may make life healthier. Artificial intelligence can sift through vast amounts of data and turn it into reliable guesswork. Researchers can also see genuine downsides to each of these technologies.
Space exploration is harder to squeeze into one streamlined narrative about the anticipated benefits. The process is just too big and too transformative.
To return to the question of whether we should go to space, our team argues that it is not a question of whether or not we should go, but rather of why we go, who benefits from space exploration and how we can democratize access for broader segments of society. Including a diversity of opinions and viewpoints can help find productive ways forward.
Ultimately, it is not necessary for everyone to land on one single narrative about the value of space exploration. Even our team of four researchers doesn’t share a single set of beliefs about its value. But bringing more nations, tribes and companies into discussions around its potential value can help create collaborative and worthwhile goals at an international scale.
Tony Milligan receives funding from the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation programme (Grant agreement No. 856543).
Adam Fish, Deondre Smiles, and Timiebi Aganaba do not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and have disclosed no relevant affiliations beyond their academic appointment.
A submersible, which travels to the seafloor to collect rock and microbe samples, is lifted by the arm of a research vessel. James F. Holden
People have long wondered what life was first like on Earth, and if there is life in our solar system beyond our planet. Scientists have reason to believe that some of the moons in our solar system – like Jupiter’s Europa and Saturn’s Enceladus – may contain deep, salty liquid oceans under an icy shell. Seafloor volcanoes could heat these moons’ oceans and provide the basic chemicals needed for life.
Similar deep-sea volcanoes found on Earth support microbial life that lives inside solid rock without sunlight and oxygen. Some of these microbes, called thermophiles, live at temperatures hot enough to boil water on the surface. They grow from the chemicals coming out of active volcanoes.
Because these microorganisms existed before there was photosynthesis or oxygen on Earth, scientists think these deep-sea volcanoes and microbes could resemble the earliest habitats and life on Earth, and beyond.
However, for planetary scientists to interpret the data they collect, they need to first understand how similar habitats function and host life on Earth.
I grew up in Spokane, Washington, and had over an inch of volcanic ash land on my home when Mount St. Helens erupted in 1980. That event led to my fascination with volcanoes.
Several years later, while studying oceanography in college, I collected samples from Mount St. Helens’ hot springs and studied a thermophile from the site. I later collected samples at hydrothermal vents along an undersea volcanic mountain range hundreds of miles off the coast of Washington and Oregon. I have continued to study these hydrothermal vents and their microbes for nearly four decades.
The samples my colleagues and I collect on research expeditions to these vents include rocks and heated hydrothermal fluids that rise from cracks in the seafloor.
The submarines use mechanical arms to collect the rocks and special sampling pumps and bags to collect the hydrothermal fluids. The submarines usually remain on the seafloor for about a day before returning samples to the surface. They make multiple trips to the seafloor on each expedition.
Inside the solid rock of the seafloor, hydrothermal fluids as hot as 662 degrees Fahrenheit (350 degrees Celsius) mix with cold seawater in the cracks and pores of the rock. The mixture of hydrothermal fluid and seawater creates the ideal temperatures and chemical conditions that thermophiles need to live and grow.
When the submarines return to the ship, scientists – including my research team – begin analyzing the chemistry, minerals and organic material like DNA in the collected water and rock samples.
These samples contain live microbes that we can cultivate, so we grow the microbes we are interested in studying while on the ship. The samples provide a snapshot of how microbes live and grow in their natural environment.
Thermophiles in the lab
Back in my laboratory in Amherst, my research team isolates new microbes from the hydrothermal vent samples and grows them under conditions that mimic those they experience in nature. We feed them volcanic chemicals like hydrogen, carbon dioxide, sulfur and iron and measure their ability to produce compounds like methane, hydrogen sulfide and the magnetic mineral magnetite.
The thermophilic microbe Pyrodictium delaneyi isolated by the Holden lab from a hydrothermal vent in the Pacific Ocean. It grows at 194 degrees Fahrenheit (90 Celsius) on hydrogen, sulfur and iron. Lin et al., 2016/The Microbiology Society
Oxygen is typically deadly for these organisms, so we grow them in synthetic hydrothermal fluid and in sealed tubes or in large bioreactors free of oxygen. This way, we can control the temperature and chemical conditions they need for growth.
From these experiments, we look for distinguishing chemical signals that these organisms produce, which spacecraft or instruments landing on extraterrestrial surfaces could potentially detect.
We also create computer models that best describe how we think these microbes grow and compete with other organisms in hydrothermal vents. We can apply these models to conditions we think existed on early Earth or on ocean worlds to see how these microbes might fare under those conditions.
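As one illustration of what such a growth model can look like, the sketch below simulates a hydrogen-limited thermophile using standard Monod kinetics. The parameter values are hypothetical placeholders, not measurements from my lab, and real vent models also track temperature, fluid mixing and competition between organisms.

```python
# Illustrative sketch: Monod-style growth of a hydrogen-limited thermophile.
# Parameter values are hypothetical placeholders, not measured lab values.
import numpy as np

mu_max = 0.5       # maximum growth rate (per hour)
K_s = 5.0          # half-saturation constant for hydrogen (micromolar)
yield_coef = 0.01  # biomass produced per unit of hydrogen consumed

def simulate(hours=48.0, dt=0.1, biomass=0.01, hydrogen=100.0):
    """Euler integration of biomass growth and hydrogen drawdown."""
    trajectory = []
    for t in np.arange(0.0, hours, dt):
        mu = mu_max * hydrogen / (K_s + hydrogen)      # Monod growth rate
        growth = mu * biomass * dt
        biomass += growth
        hydrogen = max(hydrogen - growth / yield_coef, 0.0)  # substrate consumed
        trajectory.append((t, biomass, hydrogen))
    return trajectory

final_t, final_x, final_s = simulate()[-1]
print(f"after {final_t:.0f} h: biomass {final_x:.3f}, hydrogen left {final_s:.1f}")
```

In this toy run, growth slows and then stops once the hydrogen runs out, which is the basic behavior these models capture: the supply of volcanic chemicals sets the limit on how much microbial life a vent can support.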
We then analyze the proteins from the thermophiles we collect to understand how these organisms function and adapt to changing environmental conditions. All this information guides our understanding of how life can exist in extreme environments on and beyond Earth.
Uses for thermophiles in biotechnology
In addition to providing helpful information to planetary scientists, research on thermophiles provides other benefits as well. Many of the proteins in thermophiles are new to science and useful for biotechnology.
The best example of this is an enzyme called DNA polymerase, which is used to artificially replicate DNA in the lab by the polymerase chain reaction. The DNA polymerase first used for polymerase chain reaction was purified from the thermophilic bacterium Thermus aquaticus in 1976. This enzyme needs to be heat resistant for the replication technique to work. Everything from genome sequencing to clinical diagnoses, crime solving, genealogy tests and genetic engineering uses DNA polymerase.
DNA polymerase is an enzyme that plays an essential role in DNA replication. A heat-resistant form from thermophiles is useful in bioengineering. Christinelmiller/Wikimedia Commons, CC BY-SA
My lab and others are exploring how thermophiles can be used to degrade waste and produce commercially useful products. Some of these organisms grow on waste milk from dairy farms and brewery wastewater – materials that cause fish kills and dead zones in ponds and bays. The microbes then produce biohydrogen from the waste – a compound that can be used as an energy source.
Hydrothermal vents are among the most fascinating and unusual environments on Earth. Within them, windows to the first life on Earth and beyond may lie at the bottom of our oceans.
But behind the spirit’s flashy marketing and growing popularity lies a rarely asked question: Where did the knowledge to distill agave come from in the first place?
In recent years, scholars studying how Indigenous communities responded to colonialism and global trade networks have begun to look more closely at the Pacific world. One key focus is the Manila-Acapulco galleon trade route, which linked Asia and the Americas for 250 years, from 1565 to 1815.
After Spain colonized the Philippines in 1565, Spanish galleons – towering, multidecked sailing ships – carried Chinese silk and Mexican silver across the ocean. But far more than goods traveled aboard those ships. They moved people, ideas and technologies.
For centuries, the rise of tequila has been credited to the Spanish. After the conquest of Mexico in the 16th century, colonizers introduced alembic stills, which are based on Moorish and Arabic technology. Unlike simple boiling, distillation requires managing heat and capturing purified vapor. These stills represented a major technological leap, allowing people to transform fermented drinks into distilled spirits.
Agave, long used to make the fermented drink pulque, soon became the base for something new: tequila and mezcal.
Colonial records, including the “Relaciones Geográficas,” a massive data-gathering project initiated by the Spanish Crown in the late 16th century, describe local Mesoamerican communities learning distillation from Spanish settlers. This version is well documented. But it assumes that technology moved in only one direction, from Europe to the Americas.
A second idea suggests that Mesoamerican communities already had some understanding of vapor condensation. Archaeologists have found ceramic vessels in western Mexico that may have been used to capture steam. While distillation requires additional steps, this prior knowledge may have primed Indigenous groups to more readily adopt new techniques.
A third perspective, which other researchers and I are exploring, traces a potential Filipino influence. The galleon trade brought thousands of Filipino sailors and laborers to Mexico, particularly along the Pacific coast. In places such as Guerrero, Colima and Jalisco, Filipino migrants introduced methods for fermenting and distilling coconut sap into lambanog, the coconut-based spirit.
The stills they used, sometimes called Mongolian stills, were built with clay and bamboo and included a condensation bowl. Historian Pablo Guzman-Rivas has noted that these stills more closely resemble the earliest Mexican agave distillation setups than European alembics. He has also documented oral traditions in some coastal Mexican communities that link local distillation practices to their Filipino ancestors.
The still on the left, in Jalisco, Mexico, has similarities to the lambanog still on the right, from Infanta, Quezon, Philippines. Photo on left courtesy of Patricia Colunga-GarcíaMarín and Daniel Zizumbo-Villarreal; photo on right courtesy of Sherry Ann Angeles and Rading Coronacion, CC BY-SA
Beyond the bottle
Filipino influence extends beyond the distilling pot.
In Colima and other Pacific port towns, traces of the Manila galleon trade ripple through daily life – in kitchens, cantinas and even in architecture. The word “palapa,” used in Mexico and Central America today to describe rustic thatched roofs, is exactly the same as the term for coconut fronds that’s primarily used in the Bicol Region of the Philippines.
Filipino migrants in Mexico also shared knowledge of boatbuilding, fermentation and food preservation. Coconut vinegar, fish sauce and palm sugar-based condiments became part of Mexican cuisine. One of the most enduring legacies is tuba, the fermented coconut sap still popular in coastal areas of the Mexican state of Guerrero, where Filipino sailors once settled. Known locally by the same name, tuba is sold in markets and along roadsides, often enjoyed as a refreshing drink or as a cooking ingredient.
A replica of a galleon, the Spanish trading ship that traversed the world’s oceans from the 16th century to the 18th century. Dennis Jarvis/flickr, CC BY-SA
Exchange moved both ways. Filipino vessels carried corn, peanuts, sweet potatoes and cacao back across the Pacific, reshaping food in the Philippines. These exchanges took place under the shadow of colonialism and forced labor, but their legacies endure in language, in taste and even in the roofs over people’s heads.
Technical knowledge rarely travels through official channels alone. It moves with cooks in ship galleys, with carpenters below deck, with laborers who desert ships to settle in unfamiliar ports. Sometimes that knowledge was a way to build a roof or preserve a flavor. Other times, it was a method for turning a fermented plant into a spirit that could keep for long voyages. And by the early 1600s, new types of distilled agave spirits were being made in Mexico.
Tequila is unmistakably a product of Mexico. But it is also a product of movement. Whether Filipino migrants directly introduced distillation methods or whether they emerged from a mix of Indigenous experimentation and European tools, every time you sip tequila, you’re tasting an echo of those long ocean crossings from many centuries ago.
Stephen Acabado receives funding from the Henry Luce Foundation and the National Science Foundation.
Now, a new study that we conducted with a team of colleagues suggests that dogs might have a deeper and more biologically complex effect on humans than scientists previously believed. And this complexity may have profound implications for human health.
How stress works
The human response to stress is a finely tuned, coordinated set of physiological pathways. Previous studies of the effects of dogs on human stress focused on just one pathway at a time. For our study, we zoomed out a bit and measured multiple biological indicators of the body’s state, or biomarkers, from both of the body’s major stress pathways. This allowed us to get a more complete picture of how a dog’s presence affects stress in the human body.
When a person experiences a stressful event, the sympathetic-adrenal-medullary, or SAM, axis acts quickly, triggering a “fight or flight” response that includes a surge of adrenaline, leading to a burst of energy that helps us meet threats. This response can be measured through an enzyme called alpha-amylase.
At the same time, but a little more slowly, the hypothalamic-pituitary-adrenal, or HPA, axis activates the adrenal glands to produce the hormone cortisol. This can help a person meet threats that might last for hours or even days. If everything goes well, when the danger ends, both axes settle down, and the body goes back to its calm state.
While stress can be an uncomfortable feeling, it has been important to human survival. Our hunter-gatherer ancestors had to respond effectively to acute stress events like an animal attack. In such instances, over-responding could be as ineffective as under-responding. Staying in an optimal stress response zone maximized humans’ chances of survival.
After cortisol is released by the adrenal glands, it eventually makes its way into your saliva, which makes salivary cortisol an easily accessible biomarker for tracking stress responses. Because of this, most research on dogs and stress has focused on salivary cortisol alone.
While these studies have shown that having a dog nearby can lower cortisol levels during a stressful event, suggesting the person is calmer, we suspected that was just part of the story.
What our study measured
For our study, we recruited about 40 dog owners to participate in a 15-minute laboratory stress test considered the gold standard in the field. It involves public speaking and mental arithmetic performed aloud in front of a panel of expressionless people posing as behavioral specialists.
The participants were randomly assigned to bring their dogs to the lab with them or to leave their dogs at home. We measured cortisol in blood samples taken before, immediately after and about 45 minutes following the test as a biomarker of HPA axis activity. And unlike previous studies, we also measured the enzyme alpha-amylase in the same blood samples as a biomarker of the SAM axis.
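As a rough illustration of how data like these can be summarized, the sketch below, written in Python with invented numbers, computes each participant’s reactivity for one biomarker as the peak value minus the pre-test baseline and compares the two groups. It is not the study’s actual analysis or data.

```python
# Illustrative only: compare biomarker reactivity (peak minus baseline) between
# participants tested with their dog and those tested without. All values are invented.
import numpy as np
from scipy import stats

# Hypothetical cortisol levels at three time points: before, right after, +45 minutes
with_dog = np.array([[10.0, 14.0, 11.0],
                     [9.0, 13.5, 10.0],
                     [11.0, 15.0, 11.5]])
without_dog = np.array([[10.5, 19.0, 15.0],
                        [9.5, 18.0, 14.0],
                        [10.0, 20.0, 16.0]])

def reactivity(samples):
    """Peak response minus the pre-test baseline, per participant."""
    return samples.max(axis=1) - samples[:, 0]

r_with, r_without = reactivity(with_dog), reactivity(without_dog)
t_stat, p_value = stats.ttest_ind(r_with, r_without)   # simple two-group comparison
print(f"Mean reactivity with dog: {r_with.mean():.1f}, "
      f"without dog: {r_without.mean():.1f} (t = {t_stat:.2f}, p = {p_value:.3f})")
```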
As expected based on previous studies, the people who had their dog with them showed lower cortisol spikes. But we also found that people with their dog experienced a clear spike in alpha-amylase, while those without their dog showed almost no response.
No response may sound like a good thing, but in fact, a flat alpha-amylase response can be a sign of a dysregulated stress response, often seen in people dealing with heightened stress reactions, chronic stress or even PTSD. Chronic or overwhelming stress can change how the nervous system responds to stressors, blunting this part of the response.
In contrast, the participants with their dogs had a more balanced response: Their cortisol didn’t spike too high, but their alpha-amylase still activated. This shows that they were alert and engaged throughout the test, then able to return to normal within 45 minutes. That’s the sweet spot for handling stress effectively. Our research suggests that our canine companions keep us in a healthy zone of stress response.
Having a dog benefits humans’ physical and psychological health.
Dogs and human health
This more nuanced understanding of the biological effects of dogs on the human stress response opens up exciting possibilities. Based on the results of our study, our team has begun a new study using thousands of biomarkers to delve deeper into the biology of how psychiatric service dogs reduce PTSD in military veterans.
But one thing is already clear: Dogs aren’t just good company. They might just be one of the most accessible and effective tools for staying healthy in a stressful world.
Kevin Morris receives funding for this research from the Morris Animal Foundation, the Human-Animal Bond Research Institute, and the University of Denver.
Jaci Gandenberger receives funding from the University of Denver to support this research.