Blocking exports and raising tariffs is a bad defense against industrial cyber espionage, study shows

Source: – By William Akoto, Assistant Professor of Global Security, American University

Cutting off China’s access to advanced U.S. chips is likely to motivate Chinese cyber espionage. kritsapong jieantaratip/iStock via Getty Images

The United States is trying to decouple its economy from rivals like China. Efforts toward this include policymakers raising tariffs on Chinese goods, blocking exports of advanced technology and offering subsidies to boost American manufacturing. The goal is to reduce reliance on China for critical products in hopes that this will also protect U.S. intellectual property from theft.

The idea that decoupling will help stem state-sponsored cyber-economic espionage has become a key justification for these measures. For instance, then-U.S. Trade Representative Katherine Tai framed the continuation of China-specific tariffs as serving the “statutory goal to stop [China’s] harmful … cyber intrusions and cyber theft.” Early tariff rounds during the first Trump administration were likewise framed as forcing Beijing to confront “deeply entrenched” theft of U.S. intellectual property.

This push to “onshore” key industries is driven by very real concerns. By some estimates, theft of U.S. trade secrets – often through hacking – costs the American economy hundreds of billions of dollars per year. In that light, decoupling is a defensive economic shield – a way to keep vital technology out of an adversary’s reach.

But will decoupling and cutting trade ties truly make America’s innovations safer from prying eyes? I’m a political scientist who studies state-sponsored cyber espionage, and my research suggests that the answer is a definitive no. Indeed, it might actually have the opposite effect.

To understand why, it helps to look at what really drives state-sponsored hacking.

Rivalry, not reliance

Intuitively, you might think a country is most tempted to steal secrets from a nation it depends on. For example, if Country A must import jet engines or microchips from Country B, Country A might try to hack Country B’s companies to copy that technology and become self-sufficient. This is the industrial dependence theory of cyber theft.

There is some truth to this motive. If your economy needs what another country produces, stealing that know-how can boost your own industries and reduce reliance. However, in a recent study, I show that a more powerful predictor of cyber espionage is industrial similarity. Countries with overlapping advanced industries such as aerospace, electronics or pharmaceuticals are the ones most likely to target each other with cyberattacks.

Why would having similar industries spur more spying? The reason is competition. If two nations both specialize in cutting-edge sectors, each has a lot to gain by stealing the other’s innovations.

If you’re a tech powerhouse, you have valuable secrets worth stealing, and you have the capability and motivation to steal others’ secrets. In essence, simply trading with a rival isn’t the core issue. Rather, it’s the underlying technological rivalry that fuels espionage.

For example, a cyberattack in 2012 targeted SolarWorld, a U.S. solar panel manufacturer, and the perpetrators stole the company’s trade secrets. Chinese solar companies then developed competing products based on the stolen designs, costing SolarWorld millions in lost revenue. This is a classic example of industrial similarity at work. China was building its own solar industry, so it hacked a U.S. rival to leapfrog in technology.

China has made major investments in its cyber-espionage capabilities.

Boosting trade barriers can fan the flames

Crucially, cutting trade ties doesn’t remove this rivalry. If anything, decoupling might intensify it. When the U.S. and China exchange tariff blows or cut off tech transfers, it doesn’t make China give up – it likely pushes Chinese intelligence agencies to work even harder to steal what they can’t buy.

This dynamic isn’t unique to China. Any country that suddenly loses access to an important technology may turn to espionage as Plan B.

History provides examples. When South Africa was isolated by sanctions in the 1980s, it covertly obtained nuclear weapons technology. Similarly, when Israel faced arms embargoes in the 1960s, it engaged in clandestine efforts to get military technology. Isolation can breed desperation, and hacking is a low-cost, high-reward tool for the desperate.

If decoupling won’t end cyber espionage, what will?

There’s no easy fix for state-sponsored hacking as long as countries remain locked in high-tech competition. However, there are steps that can mitigate the damage and perhaps dial down the frequency of these attacks.

One is investing in cyber defense. Just as a homeowner adds locks and alarms after a burglary, companies and governments should continually strengthen their cyber defenses. The key is to assume that espionage attempts will happen. Advanced network monitoring, employee training against phishing, and robust encryption can make it much harder for hackers to succeed, even if they keep trying.

Another is building resilience and redundancy. If you know that some secrets might get stolen, plan for it. Businesses can shorten product development cycles and innovate faster so that even if a rival copies today’s tech, you’re already moving on to the next generation. Staying ahead of thieves is a form of defense, too.

Ultimately, rather than viewing tariffs and export bans as silver bullets against espionage, U.S. leaders and industry might be safer focusing on resilience and stress-testing their cybersecurity defenses. Make it harder for adversaries to steal secrets, and less rewarding even if they do.

The Conversation

William Akoto does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

ref. Blocking exports and raising tariffs is a bad defense against industrial cyber espionage, study shows – https://theconversation.com/blocking-exports-and-raising-tariffs-is-a-bad-defense-against-industrial-cyber-espionage-study-shows-258243

Checking in on New England’s fishing industry 25 years after ‘The Perfect Storm’ hit movie theaters

Source: – By Stephanie Otts, Director of National Sea Grant Law Center, University of Mississippi

Filming ‘The Perfect Storm’ in Gloucester Harbor, Mass.
The Salem News Historic Photograph Collection, Salem State University Archives and Special Collections, CC BY

Twenty-five years ago, “The Perfect Storm” roared into movie theaters. The disaster flick, starring George Clooney and Mark Wahlberg, was a riveting, fictionalized account of commercial swordfishing in New England and a crew who went down in a violent storm.

The anniversary of the film’s release, on June 30, 2000, provides an opportunity to reflect on the real-life changes to New England’s commercial fishing industry.

Fishing was once more open to all

In the true story behind the movie, six men lost their lives in late October 1991 when the commercial swordfishing vessel Andrea Gail disappeared in a fierce storm in the North Atlantic as it was headed home to Gloucester, Massachusetts.

At the time, and until very recently, almost all commercial fisheries were open access, meaning there were no restrictions on who could fish.

There were permit requirements and regulations about where, when and how you could fish, but anyone with the means to purchase a boat and associated permits, gear, bait and fuel could enter the fishery. Eight regional councils established under a 1976 federal law to manage fisheries around the U.S. determined how many fish could be harvested prior to the start of each fishing season.

People and barrels of fish fill a wharf area in a historical black-and-white image.
Fishing has been an integral part of coastal New England culture since its towns were established. In this 1899 photo, a New England community weighs and packs mackerel.
Charles Stevenson/Freshwater and Marine Image Bank

Fishing started when the season opened and continued until the catch limit was reached. In some fisheries, this resulted in a “race to the fish” or a “derby,” where vessels competed aggressively to harvest the available catch in short amounts of time. The limit could be reached in a single day, as happened in the Pacific halibut fishery in the late 1980s.

By the 1990s, however, open access systems were coming under increased criticism from economists as concerns about overfishing rose.

The fish catch peaked in New England in 1987 and would remain far above what the fish population could sustain for two more decades. Years of overfishing led to the collapse of fish stocks, including North Atlantic cod in 1992 and Pacific sardine in 2015.

As populations declined, managers responded by cutting catch limits to allow more fish to survive and reproduce. Fishing seasons were shortened, as it took less time for the fleets to harvest the allowed catch. It became increasingly hard for fishermen to catch enough fish to earn a living.

Saving fisheries changed the industry

In the early 2000s, as these economic and environmental challenges grew, fisheries managers started limiting access. Instead of allowing anyone to fish, only vessels or individuals meeting certain eligibility requirements would have the right to fish.

The most common method of limiting access in the U.S. is through limited entry permits, initially awarded to individuals or vessels based on previous participation or success in the fishery. Another approach is to assign individual harvest quotas or “catch shares” to permit holders, limiting how much each boat can bring in.

In 2007, Congress amended the 1976 Magnuson-Stevens Fishery Conservation and Management Act to promote the use of limited access programs in U.S. fisheries.

Three fishing vessels, side by side, in New Bedford Harbor
Ships in the fleet out of New Bedford, Mass.
Henry Zbyszynski/Flickr, CC BY

Today, limited access is common, and there are positive signs that the management change is helping achieve the law’s environmental goal of preventing overfishing. Since 2000, the populations of 50 major fish stocks have been rebuilt, meaning they have recovered to a level that can once again support fishing.

I’ve been following the changes as a lawyer focused on ocean and coastal issues, and I see much work still to be done.

Forty fish stocks are currently being managed under rebuilding plans that limit catch to allow the stock to grow, including Atlantic cod, which has struggled to recover due to a complex combination of factors, including climatic changes.

The lingering effect on communities today

While many fish stocks have recovered, the effort came at an economic cost to many individual fishermen. The limited-access Northeast groundfish fishery, which includes Atlantic cod, haddock and flounder, shed nearly 800 crew positions between 2007 and 2015.

The loss of jobs and revenue from fishing impacts individual family income and relationships, strains other businesses in fishing communities, and affects those communities’ overall identity and resilience, as illustrated by a recent economic snapshot of the Alaska seafood industry.

When original limited-access permit holders leave the business – for economic, personal or other reasons – their permits are either terminated or sold to other eligible permit holders, leading to fewer active vessels in the fleet. As a result, the number of vessels fishing for groundfish has declined from 719 in 2007 to 194 in 2023, meaning fewer jobs.

A fisherman wearing thick gloves lifts a tray of fish, with boats in the background.
A fisherman unloads a portion of his day’s catch of 300 pounds of groundfish, including flounder, in January 2006 in Gloucester, Mass.
AP Photo/Lisa Poole

Because of their scarcity, limited-access permits can cost upward of US$500,000, which is often beyond the financial means of a small business or a young person seeking to enter the industry. The high prices may also lead retiring fishermen to sell their permits rather than passing them along with their vessels to the next generation.

These economic forces have significantly altered the fishing industry, leading to more corporate and investor ownership, rather than the family-owned operations that were more common in the Andrea Gail’s time.

Similar to the experience of small family farms, fishing captains and crews are being pushed into corporate arrangements that reduce their autonomy and revenues.

Consolidation can threaten the future of entire fleets, as New Bedford, Massachusetts, saw when Blue Harvest Fisheries, backed by a private equity firm, bought up vessels and other assets and then declared bankruptcy a few years later, leaving a smaller fleet and some local businesses and fishermen unpaid for their work. A company with local connections bought eight vessels from Blue Harvest along with 48 state and federal permits the company held.

New challenges and unchanging risks

While there are signs of recovery for New England’s fisheries, challenges continue.

Warming water temperatures have shifted the distribution of some species, affecting where and when fish are harvested. For example, lobsters have moved north toward Canada. When vessels need to travel farther to find fish, that increases fuel and supply costs and time away from home.

Fisheries managers will need to continue to adapt to keep New England’s fisheries healthy and productive.

One thing that, unfortunately, hasn’t changed is the dangerous nature of the occupation. Between 2000 and 2019, 414 fishermen died in 245 disasters.

The Conversation

Stephanie Otts receives funding from the NOAA National Sea Grant College Program through the U.S. Department of Commerce. Previous support for fisheries management legal research was provided by The Nature Conservancy.

ref. Checking in on New England’s fishing industry 25 Years after ‘The Perfect Storm’ hit movie theaters – https://theconversation.com/checking-in-on-new-englands-fishing-industry-25-years-after-the-perfect-storm-hit-movie-theaters-255076

Why power skills – formerly known as ‘soft skills’ – are the key to business success

Source: – By Sandra Sjoberg, Vice President and Dean, Academic Programs, Western Governors University School of Business

What does it take to lead through complexity, make tough decisions and still put people first? For me, the answer became clear during a defining moment early in my career – one that changed my path entirely.

Today I am a business-school educator, but I began my career in the corporate world. I faced a challenge so intense that it motivated me to go back to school and earn a Ph.D. so I could help others lead with greater purpose and humanity.

Back then, I was working for a multinational home goods company, and I was asked to play a role in closing a U.S. factory in the Midwest and moving its operations abroad. It was, by every business metric, the right economic decision. Without it, the company couldn’t stay competitive. Still, the move was fraught with emotional and ethical complexities.

Witnessing the toll on employees who lost their jobs, and the broader effects on their community, changed how I thought about business decision-making. I saw that technical skills alone aren’t enough. Effective leadership also requires emotional intelligence, ethical reasoning and human-centered thinking.

That experience was a turning point, leading me to higher education. I wanted to fulfill a greater purpose by equipping future business leaders with critical human-centric skills. And to do that, I needed to learn more about these skills – why they matter, how they shape outcomes, and how we can teach them more effectively.

Often called “soft skills” or “people skills,” these are also, more appropriately, referred to as “power skills” or “durable skills.” And they aren’t just nice to have. As my own experience shows and as research confirms, they are central to success in today’s business world.

Power skills: Underappreciated, yet in demand

Research on power skills dates back to at least 1918, when the Carnegie Foundation published “A Study of Engineering Education.” That report concluded that 85% of engineering professionals’ success came from having well-developed people skills, and only 15% was attributed to “hard skills.” These early findings helped shape our understanding of the value of nontechnical skills and traits.

Today, employers arguably value these skills more than ever: Nearly 7 in 10 U.S. employers plan to prioritize hiring candidates with “soft” or “power” skills, according to LinkedIn’s most recent Global Talent Trends report. But while demand for these skills is growing across industries, there’s not enough supply.

Yet 65% of employers cite soft skills as the top gap among new graduates, according to Coursera’s 2025 Micro-Credentials Impact Report. New hires are struggling in the areas of communication, active listening, resilience and adaptability, the survey found.

Power skills are transferable across roles, projects and industries, which makes them especially valuable to hiring managers. And research continues to show that these skills drive innovation, strengthen team dynamics and help organizations navigate uncertainty — key reasons why employers prioritize them.

Three power skills to prioritize

So what does it look like to lead with power skills? Here are three key areas that have shaped my own journey — and that I now help others develop:

Adaptability: Adaptability goes beyond simply accepting change. It’s the ability to think, feel and act effectively when the situation changes – which, in today’s business environment, is all the time.

Consider a company expanding into a new international market. To succeed, it must invest in cultural research, adapt its operations to regional norms and align with local regulations – demonstrating adaptability at both strategic and operational levels.

That’s why adaptability is one of the most in-demand skills among employers, according to a recent LinkedIn study. Adaptable workforces are better equipped to respond to shifting demands. And with the rise of artificial intelligence and rapid tech disruption, organizations need agile, resilient employees more than ever.

Empathy: As I learned firsthand during my time in the corporate world, empathy – or the ability to understand and respond to the feelings, perspectives and needs of others – is essential.

Empathy not only fosters trust and respect, but it also helps leaders make decisions that balance organizational goals with human needs. More broadly, empathetic leaders create inclusive environments and build stronger relationships.

At Western Governors University, we have an entire course titled “Empathy and Inclusive Collaboration,” which teaches skills in active listening, creating culturally safe environments and cultivating an inclusive mindset.

Inclusivity: Effective communication and teamwork consistently rank high as essential workforce skills. This is because organizations that excel in communication and collaboration are more likely to innovate, adapt to change and make informed decisions.

While managing a global transition, I saw how hard and necessary it was to listen across cultural lines, to foster collaboration across borders and departments. When teams collaborate well, they bring diverse perspectives that can foster creativity and efficiency. The ability to communicate openly and work together is crucial for navigating complex problems and driving organizational success.

The business landscape is evolving rapidly, and technical expertise alone is no longer enough to drive success. Power skills like adaptability, empathy and inclusivity are crucial, as both research and my own experiences have taught me. By prioritizing power skills, educators and businesses can better prepare leaders to navigate complexity, lead with purpose and thrive in a constantly changing world.

The Conversation

Sandra Sjoberg is affiliated with Western Governors University.
Sandra Sjoberg is a member of the American Marketing Association, an industry association.
Sandra Sjoberg is a former employee of Amerock, a division of Newell Rubbermaid that, while not mentioned directly in the article, is the basis for the corporate experience shared in the article.

ref. Why power skills – formerly known as ‘soft skills’ – are the key to business success – https://theconversation.com/why-power-skills-formerly-known-as-soft-skills-are-the-key-to-business-success-257310

Michelin Guide scrutiny could boost Philly tourism, but will it stifle chefs’ freedom to experiment and innovate?

Source: – By Jonathan Deutsch, Professor of Food and Hospitality Management, Drexel University

Chef Phila Lorn prepares a bowl of noodle soup at Mawn restaurant in Philadelphia. AP Photo/Matt Rourke

The Philadelphia restaurant scene is abuzz with the news that the famed Michelin Guide is coming to town.

As a research chef and educator at Drexel University in Philadelphia, I am following the Michelin developments closely.

Having eaten in Michelin restaurants in other cities, I am confident that Philly has at least a few star-worthy restaurants. Our innovative dining scene was named one of the top 10 in the U.S. by Food & Wine in 2025.

Researchers have convincingly shown that Michelin ratings can boost tourism, so Philly gaining some starred restaurants could bring more revenue for the city.

But as the lead author of the textbook “Culinary Improvisation,” which teaches creativity, I also worry the Michelin scrutiny could make chefs more focused on delivering a consistent experience than on continuing along the innovative trajectory that attracts Michelin in the first place.

Ingredients for culinary innovation

In “Culinary Improvisation” we discuss three elements needed to foster innovation in the kitchen.

The first is mastery of culinary technique, both classical and modern. Simply stated, this refers to good cooking.

The second is access to a diverse range of ingredients and flavors. The more colors the artist has on their palette, the more directions the creation can take.

And the third, which is key to my concerns, is a collaborative and supportive environment where chefs can take risks and make mistakes. Research shows a close link between risk-taking workplaces and innovation.

According to the Michelin Guide, stars are awarded to outstanding restaurants based on: “quality of ingredients, mastery of cooking techniques and flavors, the personality of the chef as expressed in the cuisine, value for money, and consistency of the dining experience both across the menu and over time.”

The criteria do not mention innovation.

It’s possible the high-stakes lure of a Michelin star, which rewards consistent excellence, could lead Philly’s most vibrant and creative chefs and restaurateurs to pull back on the risks that led to the city’s culinary excellence in the first place.

A line of chefs wearing black aprons at work in an open kitchen.
Local food writers believe Vernick Fish is a top contender for a Michelin star.
Photo courtesy of Vernick Fish

The obvious contenders

Philadelphia’s preeminent restaurant critic Craig LaBan and journalist and former restaurateur Kiki Aranita discussed local contenders for Michelin stars in a recent article in the Philadelphia Inquirer.

The 19 restaurants LaBan and Aranita discuss as possible star contenders average just over a one-mile walk from the Pennsylvania Convention Center.

Together they have received 78 James Beard nominations or awards, which are considered the “Oscars” of the food industry. That’s an average of over four per restaurant.

And when I tried to book a table for two on a Wednesday and a Saturday before 9 p.m., about half of these restaurants were already fully booked for dinner two weeks out, in July, which is the slow season for dining in Philadelphia.

If LaBan’s and Aranita’s predictions are right, Michelin will be an added recognition for restaurants that are already successful and centrally located.

Exterior shot of a restaurant with outdoor seating on the ground floor of a rowhome
Black Dragon Takeout fuses Black American cuisine with the aesthetics of classic Chinese American takeout.
Jeff Fusco/The Conversation, CC BY-SA

Off the beaten path

When the Michelin Guide started in France at the turn of the 20th century, it encouraged diners to take the road less traveled to their next gastronomic experience.

It has since evolved into recommendations for a road well traveled: safe, lauded and already hard-to-get-into restaurants. In Philly these could be restaurants such as Vetri Cucina, Zahav, Vernick Fish, Provenance, Royal Sushi and Izakaya, Ogawa and Friday Saturday Sunday, to name a few on LaBan and Aranita’s list.

And yet Philadelphia has over 6,000 restaurants spread across 135 square miles of the city. Philadelphia is known as a city of neighborhoods, and these neighborhoods are rich with food diversity and innovation.

Consider Jacob Trinh’s Vietnamese-tinged seafood tasting menu at Little Fish in Queen Village; Kurt Evans’ gumbo lo mein at Black Dragon Takeout in West Philly; the beef cheek confit with avocado mousse at Temir Satybaldiev’s Ginger in the Northeast; and the West African XO sauce at Honeysuckle, owned by Omar Tate and Cybille St. Aude-Tate, on North Broad Street.

I hope the Michelin inspectors will venture far beyond the obvious candidates to experience more of what Philadelphia has to offer.

Small stacks of red hardback books that say 'Michelin France 2025'
The Michelin Guide announced it will include Philadelphia and Boston in its next Northeast Cities edition.
Matthieu Delaty/Hans Lucas/AFP via Getty Images

Raising the bar

In the frenzy surrounding the Michelin scrutiny, chef friends have invited me to dine at their restaurants and share my feedback as they refine their menus in anticipation of visits from anonymous Michelin inspectors.

Restaurateurs have been asking my colleagues and me for talent suggestions to replace well-liked and capable cooks, servers and managers whom owners perceive to be just not Michelin-star level.

And managers are texting us names of suspected reviewers, triggered by some tell-tale signs – a solo diner with a weeknight tasting menu reservation, no dietary restrictions or special requests, and a conspicuously light internet presence.

In all, I am excited about Philadelphians being excited about Michelin. Any opportunity to spotlight the city’s restaurant community and tighten its food and service quality raises the bar among local chefs and restaurateurs and makes the experience better for diners. And the prospect of business travelers and culinary tourists enjoying lunches and early-week dinners can help restaurants, their workers and the city earn more revenue.

But in the din of the press events and hype, let’s not forget that Philadelphians don’t need an outside arbiter to tell us what we already know: Philly is a great place to eat and drink.

Read more of our stories about Philadelphia.

The Conversation

Jonathan Deutsch does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

ref. Michelin Guide scrutiny could boost Philly tourism, but will it stifle chefs’ freedom to experiment and innovate? – https://theconversation.com/michelin-guide-scrutiny-could-boost-philly-tourism-but-will-it-stifle-chefs-freedom-to-experiment-and-innovate-256752

What if universal rental assistance were implemented to deal with the housing crisis?

Source: – By Alex Schwartz, Professor of Urban Policy, The New School

Thousands of American families that can’t find affordable apartments are stuck living in extended-stay motels. Michael S. Williamson/The Washington Post via Getty Images

If there’s one thing that U.S. politicians and activists from across the political spectrum can agree on, it’s that rents are far too high.

Many experts believe that this crisis is fueled by a shortage of housing, caused principally by restrictive regulations.

Rents and home prices would fall, the argument goes, if rules such as minimum lot- and house-size requirements and prohibitions against apartment complexes were relaxed. This, in turn, would make it easier to build more housing.

As experts on housing policy, we’re concerned about housing affordability. But our research shows little connection between a shortfall of housing and rental affordability problems. Even a massive infusion of new housing would not shrink housing costs enough to solve the crisis, as rents would likely remain out of reach for many households.

However, there are already subsidies in place that ensure that some renters in the U.S. pay no more than 30% of their income on housing costs. The most effective solution, in our view, is to make these subsidies much more widely available.

A financial sinkhole

Just how expensive are rents in the U.S.?

According to the U.S. Department of Housing and Urban Development, a household that spends more than 30% of its income on housing is deemed to be cost-burdened. If it spends more than 50%, it’s considered severely burdened. In 2023, 54% of all renters spent more than 30% of their pretax income on housing. That’s up from 43% of renters in 1999. And 28% of all renters spent more than half their income on housing in 2023.

Renters with low incomes are especially unlikely to afford their housing: 81% of renters making less than $30,000 spent more than 30% of their income on housing, and 60% spent more than 50%.

Estimates of the nation’s housing shortage vary widely, reaching up to 20 million units, depending on analytic approach and the time period covered. Yet our research, which compares growth in the housing stock from 2000 to the present, finds no evidence of an overall shortage of housing units. Rather, we see a gap between the number of low-income households and the number of affordable housing units available to them; more affluent renters face no such shortage. This is true in the nation as a whole and in nearly all large and small metropolitan areas.

Would lower rents help? Certainly. But they wouldn’t fix everything.

We ran a simulation to test an admittedly unlikely scenario: What if rents dropped 25% across the board? We found it would reduce the number of cost-burdened renters – but not by as much as you might think.

Even with the reduction, nearly one-third of all renters would still spend more than 30% of their income on housing. Moreover, reducing rents would help affluent renters much more than those with lower incomes – the households that face the most severe affordability challenges.

The proportion of cost-burdened renters earning more than $75,000 would fall from 16% to 4%, while the share of similarly burdened renters earning less than $15,000 would drop from 89% to just 80%. Even with a rent rollback of 25%, the majority of renters earning less than $30,000 would remain cost-burdened.
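
To make the arithmetic concrete, here is a minimal sketch in Python of how a rent-cut simulation like this works. The household records below are invented for illustration; the actual analysis relies on survey microdata, not a toy sample like this one.

def share_cost_burdened(households, rent_cut=0.0, threshold=0.30):
    # Share of renter households spending more than `threshold` of
    # pretax income on housing, after an across-the-board rent cut.
    burdened = sum(
        1 for annual_income, monthly_rent in households
        if monthly_rent * (1 - rent_cut) * 12 / annual_income > threshold
    )
    return burdened / len(households)

# (annual pretax income, monthly rent) pairs - illustrative values only
sample = [(18_000, 900), (28_000, 1_100), (55_000, 1_400), (95_000, 1_800)]

print(f"Burdened at current rents: {share_cost_burdened(sample):.0%}")        # 75%
print(f"Burdened after a 25% cut:  {share_cost_burdened(sample, 0.25):.0%}")  # 50%

Even in this four-household toy example, the rent cut frees only the higher-income households from cost burden; the lowest-income households remain burdened, the same pattern the full simulation shows at national scale.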

Vouchers offer more breathing room

Meanwhile, there’s a proven way of making housing more affordable: rental subsidies.

In 2024, the U.S. provided what are known as “deep” housing subsidies to about 5 million households, meaning that rent payments are capped at 30% of their income.

These subsidies take three forms: Housing Choice Vouchers that enable people to rent homes in the private market; public housing; and project-based rental assistance, in which the federal government subsidizes the rents for all or some of the units in properties under private and nonprofit ownership.

The number of households participating in these three programs has increased by less than 2% since 2014, and participants make up only about 25% of all eligible households. Households earning less than 50% of their area’s median family income are eligible for rental assistance. But unlike Social Security, Medicare or food stamps, rental assistance is not an entitlement available to all who qualify. The number of recipients is limited by the amount of funding appropriated each year by Congress, and this funding has never been sufficient to meet the need.

By expanding rental assistance to all eligible low-income households, the government could make huge headway in solving the rental affordability crisis. The most obvious option would be to expand the existing Housing Choice Voucher program, also known as Section 8.

The program helps pay the rent up to a specified “payment standard” determined by each local public housing authority, which can set this standard at between 80% and 120% of the HUD-designated fair market rent. To be eligible for the program, units must also satisfy HUD’s physical quality standards.

Unfortunately, about 43% of voucher recipients are unable to use their vouchers: They cannot find an apartment that rents for less than the payment standard, meets the physical quality standards and has a landlord willing to accept vouchers.

Renters are more likely to find housing using vouchers in cities and states where it’s illegal for landlords to discriminate against voucher holders. Programs that provide housing counseling and landlord outreach and support have also improved outcomes for voucher recipients.

However, it might be more effective to forgo the voucher program altogether and simply give eligible households cash to cover their housing costs. The Philadelphia Housing Authority is currently testing out this approach.

The idea is that landlords would be less likely to reject applicants receiving government support if the bureaucratic hurdles were eliminated. The downside of this approach is that it would not prevent landlords from renting out deficient units that the voucher program would normally reject.

Homeowners get subsidies – why not renters?

Expanding rental assistance to all eligible low-income households would be costly.

The Urban Institute, a nonpartisan think tank, estimates it would cost about $118 billion a year.

However, Congress has spent similar sums on housing subsidies before; those subsidies took the form of tax breaks for homeowners, not low-income renters. Congress forgoes billions of dollars annually in tax revenue it would otherwise collect were it not for tax deductions, credits, exclusions and exemptions. These are known as tax expenditures. A tax not collected is equivalent to a subsidy payment.

Silhouette of older man standing at sliding glass door.
Only about 25% of eligible households receive rental assistance from the federal government.
Luis Sinco/Los Angeles Times via Getty Images

For example, from 1998 through 2017 – prior to the tax changes enacted by the first Trump administration in 2017 – the federal government sacrificed an average of $187 billion per year, adjusted for inflation, in revenue due to mortgage interest deductions, deductions for state and local taxes, and the exemption of proceeds from the sale of one’s home from capital gains taxes. In fiscal year 2025, these tax expenditures totaled $95.4 billion.

Moreover, tax expenditures on behalf of homeowners flow mostly to higher-income households. In 2024, for example, over 70% of all mortgage-interest tax deductions went to homeowners earning at least $200,000.

Broadening the availability of rental subsidies would have other benefits. It would save federal, state and local governments billions of dollars in homeless services. Moreover, automatic provision of rental subsidies would reduce the need for additional subsidies to finance new affordable housing. Universal rental assistance, by guaranteeing sufficient rental income, would allow builders to more easily obtain loans to cover development costs.

Of course, sharply raising federal expenditures for low-income rental assistance flies in the face of the Trump administration’s priorities. Its budget proposal for the next fiscal year calls for a 44% cut – more than $27 billion – to rental assistance and public housing.

On the other hand, if the government supported rental assistance in amounts commensurate with the tax benefits given to homeowners, it would go a long way toward resolving the rental housing affordability crisis.

This article is part of a series centered on envisioning ways to deal with the housing crisis.

The Conversation

Alex Schwartz has received funding from the John D. and Catherine T. MacArthur Foundation. Since 2019 he has served on New York City’s Rent Guidelines Board. He has a relative who works for The Conversation.

Kirk McClure received funding from the U.S. Department of Housing and Urban Development and receives funding from the National Science Foundation.

ref. What if universal rental assistance were implemented to deal with the housing crisis? – https://theconversation.com/what-if-universal-rental-assistance-were-implemented-to-deal-with-the-housing-crisis-257213

Trump administration aims to slash funds that preserve the nation’s rich architectural and cultural history

Source: – By Michael R. Allen, Visiting Assistant Professor of History, West Virginia University

The iconic ‘Walking Man’ Hawkes sign in Westbrook, Maine, was added to the National Register of Historic Places in 2019. Ben McCanna/Portland Press Herald via Getty Images

President Donald Trump’s proposed fiscal year 2026 discretionary budget is called a “skinny budget” because it’s short on line-by-line details.

But historic preservation efforts in the U.S. did get a mention – and they might as well be skinned to the bone.

Trump has proposed to slash funding for the federal Historic Preservation Fund to only $11 million, which is $158 million less than the fund’s previous reauthorization in 2024. The presidential discretionary budget, however, always heads to Congress for appropriation. And Congress always makes changes.

That said, the Trump administration hasn’t even released the $188 million that Congress appropriated for the fund for the 2025 fiscal year, essentially impounding the funding stream that Congress created in 1976 for historic preservation activities across the nation.

I’m a scholar of historic preservation who’s worked to secure historic designations for buildings and entire neighborhoods. I’ve worked on projects that range from making distressed neighborhoods in St. Louis eligible for historic tax credits to surveying Cold War-era hangars and buildings on seven U.S. Air Force bases.

I’ve seen the ways in which the Historic Preservation Fund helps local communities maintain and rehabilitate their rich architectural history, sparing it from deterioration, the wrecking ball or the pressures of the private market.

A rare, deficit-neutral funding model

Most Americans probably don’t realize that the task of historic preservation largely falls to individual states and Native American tribes.

The National Historic Preservation Act that President Lyndon B. Johnson signed into law in 1966 requires states and tribes to handle everything from identifying potential historic sites to reviewing the impact of interstate highway projects on archaeological sites and historic buildings. States and tribes are also responsible for reviewing nominations of sites in the National Register of Historic Places, the nation’s official list of properties deemed worthy of preservation.

However, many states and tribes didn’t have the capacity to adequately tackle the mandates of the 1966 act. So the Historic Preservation Fund was formed a decade later to alleviate these costs by funneling federal resources into these efforts.

The fund is actually the product of a conservative, limited-government approach.

Created during Gerald Ford’s administration, it has a revenue-neutral model, meaning that no tax dollars pay for the program. Instead, it’s funded by private lease royalties from the Outer Continental Shelf oil and gas reserves.

Most of these reserves are located in federal waters in the Gulf of Mexico and off the coast of Alaska. Private companies that receive a permit to extract from them must agree to a lease with the federal government. Royalties from their oil and gas sales accrue in federally controlled accounts under the terms of these leases. The Office of Natural Resources Revenue then directs 1.5% of the total royalties to the Historic Preservation Fund.
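
As a back-of-the-envelope illustration of that funding mechanism, the arithmetic looks like this. The royalty figure is hypothetical, chosen only to show the calculation; actual receipts vary from year to year, and Congress caps what the fund may actually receive.

# Hypothetical year of Outer Continental Shelf oil and gas royalties.
annual_royalties = 12_500_000_000  # dollars; illustrative, not an official figure
hpf_share = 0.015                  # 1.5% directed to the Historic Preservation Fund

print(f"HPF allocation: ${annual_royalties * hpf_share:,.0f}")  # $187,500,000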

Congress must continually reauthorize the amount of funding reserved for the Historic Preservation Fund, or it goes unfunded.

A plaque honoring Fenway Park is displayed on an easel on a baseball field.
Boston’s Fenway Park was added to the National Register of Historic Places in 2012, making it eligible for preservation grants and federal tax incentives.
Winslow Townson/Getty Images

Despite bipartisan support, the fund has been threatened in the past. President Ronald Reagan attempted to do exactly what Trump is doing now by making no request for funding at all in his 1983 budget. Yet the fund has nonetheless been reauthorized six times since its inception, with terms ranging from five to 10 years.

The program is a crucial source of funding, particularly in small towns and rural America, where privately raised cultural heritage funds are harder to come by. It provides grants for the preservation of buildings and geographical areas that hold historical, cultural or spiritual significance in underrepresented communities. And it’s even involved in projects tied to the nation’s 250th birthday in 2026, such as the rehabilitation of the home in New Jersey where George Washington was stationed during the winter of 1778-79 and the restoration of Rhode Island’s Old State House.

Filling financial gaps

I’ve witnessed the fund’s impact firsthand in small communities across the nation.

Edwardsville, Illinois, a suburb of St. Louis, is home to the Leclaire Historic District. In the 1970s, it was added to the National Register of Historic Places. The national designation recognized the historic significance of the district, protecting it against any adverse impacts from federal infrastructure funding. It also made tax credits available to the town. Edwardsville then designated Leclaire a local historic district so that it could legally protect the indelible architectural features of its homes, from original decorative details to the layouts of front porches.

Despite the designation, however, there was no clear inventory of the hundreds of houses in the district. A few paid staffers and a volunteer citizen commission not only had to review proposed renovations and demolitions, but they also had to figure out which buildings even contributed to Leclaire’s significance and which ones did not – and thus did not need to be tied up in red tape.

Black and white photo of family standing in front of their home.
The Allen House is one of approximately 415 single-family homes in the Leclaire neighborhood in Edwardsville, Ill.
Friends of Leclaire

Edwardsville was able to secure a grant through the Illinois State Historic Preservation Office thanks to a funding match enabled by money disbursed to Illinois via the Historic Preservation Fund.

In 2013, my team created an updated inventory of the historic district, making it easier for the local commission to determine which houses should be reviewed carefully and which ones don’t need to be reviewed at all.

Oil money better than no money

The historic preservation field, not surprisingly, has come out strongly against Trump’s proposal to defund the Historic Preservation Fund.

Nonetheless, there have been debates within the field over the fund’s dependence on the fossil fuel industry, which was the trade-off that preservationists made decades ago when they crafted the funding model.

In the 1970s, amid the national energy crisis, conservation of existing buildings was seen as a worthy ecological goal, since demolition and new construction required fossil fuels. To preservationists, diverting federal carbon royalties seemed like a power play.

But with the effects of climate change becoming impossible to ignore, some preservationists are starting to more openly critique both the ethics and the wisdom of tapping into a pool of money created through the profits of the oil and gas industry. I’ve recently wondered myself if continued depletion of fossil fuels means that preservationists won’t be able to count on the Historic Preservation Fund as a long-term source of funding.

That said, you’d be hard-pressed to find a preservationist who thinks that destroying the Historic Preservation Fund would be a good first step in shaping a more visionary policy.

For now, Trump’s administration has only sown chaos in the field of historic preservation. Already, Ohio has laid off one-third of the staffers in its State Historic Preservation Office due to the impoundment of federal funds. More state preservation offices may follow suit. The National Council of State Historic Preservation Officers predicts that states soon could be unable to perform their federally mandated duties.

Unfortunately, many people advocating for places important to their towns and neighborhoods may end up learning the hard way just what the Historic Preservation Fund does.

The Conversation

Michael R. Allen is a member of the Advisor Leadership Team of the National Trust for Historic Preservation.

ref. Trump administration aims to slash funds that preserve the nation’s rich architectural and cultural history – https://theconversation.com/trump-administration-aims-to-slash-funds-that-preserve-the-nations-rich-architectural-and-cultural-history-258889

3D-printed model of a 500-year-old prosthetic hand hints at life of a Renaissance amputee

Source: – By Heidi Hausse, Associate Professor of History, Auburn University

Technology is more than just mechanisms and design — it’s ultimately about people.
Adriene Simon/College of Liberal Arts, Auburn University, CC BY-SA

To think about an artificial limb is to think about a person. It’s an object of touch and motion made to be used, one that attaches to the body and interacts with its user’s world.

Historical artifacts of prosthetic limbs are far removed from this lived context. Their users are gone. They are damaged – deteriorated by time and exposure to the elements. They are motionless, kept on display or in museum storage.

Yet, such artifacts are rare direct sources into the lives of historical amputees. We focus on the tools amputees used in 16th- and 17th-century Europe. There are few records written from amputees’ perspectives at that time, and those that exist say little about what everyday life with a prosthesis was like.

Engineering offers historians new tools to examine physical evidence. This is particularly important for the study of early modern mechanical hands, a new kind of prosthetic technology that appeared at the turn of the 16th century. Most of the artifacts are of unknown provenance. Many work only partially and some not at all. Their practical functions remain a mystery.

But computer-aided design software can help scholars reconstruct the artifacts’ internal mechanisms. This, in turn, helps us understand how the objects once moved.

Even more exciting, 3D printing lets scholars create physical models. Rather than imagining how a Renaissance prosthesis worked, scholars can physically test one. It’s a form of investigation that opens new possibilities for exploring the development of prosthetic technology and user experience through the centuries. It creates a trail of breadcrumbs that can bring us closer to the everyday experiences of premodern amputees.

But what does this work, which brings together two very different fields, look like in action?

What follows is a glimpse into our experience of collaboration on a team of historians and engineers, told through the story of one week. Working together, we shared a model of a 16th-century prosthesis with the public and learned a lesson about humans and technology in the process.

A historian encounters a broken model

THE HISTORIAN: On a cloudy day in late March, I walked into the University of Alabama Birmingham’s Center for Teaching and Learning holding a weatherproof case and brimming with excitement. Nestled within the case’s foam inserts was a functioning 3D-printed model of a 500-year-old prosthetic hand.

Fifteen minutes later, it broke.

Mechanical hand with plastic orange fingers extending from a plastic gray palm and wrist
This 3D-printed model of a 16th-century hand prosthesis has working mechanisms.
Heidi Hausse, CC BY-SA

For two years, my team of historians and engineers at Auburn University had worked tirelessly to turn an idea – recreating the mechanisms of a 16th-century artifact from Germany – into reality. The original iron prosthesis, the Kassel Hand, is one of approximately 35 from Renaissance Europe known today.

As an early modern historian who studies these artifacts, I work with a mechanical engineer, Chad Rose, to find new ways to explore them. The Kassel Hand is our case study. Our goal is to learn more about the life of the unknown person who used this artifact 500 years ago.

Using 3D-printed models, we’ve run experiments to test what kinds of activities its user could have performed with it. We built the models in inexpensive polylactic acid – plastic – so that this fragile artifact would be accessible to anyone with a consumer-grade 3D printer. But before sharing our files with the public, we needed to see how the model fared when others handled it.

An invitation to guest lecture on our experiments in Birmingham was our opportunity to do just that.

We brought two models. The main release lever broke first in one and then the other. This lever has an interior triangular plate connected to a thin rod that juts out of the wrist like a trigger. After pressing the fingers into a locked position, pulling the trigger is the only way to free them. If it breaks, the fingers become stuck.

Close-up of the interior mechanism of a 3D-printed prosthetic, the broken lever raised straight up
The thin rod of the main release lever snapped in this model.
Heidi Hausse, CC BY-SA

I was baffled. During testing, the model had lifted a 20-pound simulation of a chest lid by its fingertips. Yet, the first time we shared it with a general audience, a mechanism that had never broken in testing simply snapped.

Was it a printing error? Material defect? Design flaw?

We consulted our Hand Whisperer: our lead student engineer whose feel for how the model works appears at times preternatural.

An engineer becomes a hand whisperer

THE ENGINEER: I was sitting at my desk in Auburn’s mechanical engineering 3D print lab when I heard the news.

As a mechanical engineering graduate student concentrating on additive manufacturing, commonly known as 3D printing, I explore how to use this technology to reconstruct historical mechanisms. Over the two years I’ve worked on this project, I’ve come to know the Kassel Hand model well. As we fine-tuned designs, I’ve created and edited its computer-aided design files – the digital 3D constructions of the model – and printed and assembled its parts countless times.

Computer illustration of open hand model
This view of the computer-aided design file of a strengthened version of the model, which includes ribs and fillets to reinforce the plastic material, highlights the main release lever in orange.
Peden Jones, CC BY-SA

Examining parts mid-assembly is a crucial checkpoint for our prototypes. This quality control catches, corrects and prevents defects such as misprinted or damaged parts, and it’s essential for creating consistent, repeatable experiments. A new model version or component change never leaves the lab without passing rigorous inspection. This process means there are ways this model has behaved over time that the rest of the team has never seen. But I have.

So when I heard the release lever had broken in Birmingham, it was just another Thursday. While it had never snapped when we tested the model on people, I’d seen it break plenty of times while performing checks on components.

Disassembled hand model
Our model reconstructs the Kassel Hand’s original metal mechanisms in plastic.
Heidi Hausse, CC BY-SA

After all, the model is made from relatively weak polylactic acid. Perhaps the most difficult part of our work is making a plastic model as durable as possible while keeping it visually consistent with the 500-year-old original. The iron rod of the artifact’s lever can handle far more force than our plastic version: Iron’s yield strength is at least five times that of the plastic.
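
To get a feel for that difference, here is a rough strength comparison. The material properties are typical handbook values, not measurements of the artifact or of our printed parts, and the rod cross-section is a stand-in figure.

# Force at which a rod of each material begins to yield (stress x area).
# MPa times mm^2 gives newtons.
yield_pla_mpa = 50    # typical 3D-printed polylactic acid
yield_iron_mpa = 250  # typical wrought iron
rod_area_mm2 = 4.0    # hypothetical cross-section of the release-lever rod

print(f"PLA rod yields near:  {yield_pla_mpa * rod_area_mm2:.0f} N")   # 200 N
print(f"Iron rod yields near: {yield_iron_mpa * rod_area_mm2:.0f} N")  # 1000 N

On those assumed numbers, a pull of about 200 newtons – roughly a 45-pound tug – could damage the plastic rod, while the iron original would shrug it off.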

I suspected the lever had snapped because people pulled the trigger too far back and too quickly. The challenge, then, was to prevent this. But redesigning the lever to be thicker or a different shape would make it less like the historical artifact.

This raised the question: Why could I use the model without breaking the lever, but no one else could?

The team makes a plan

THE TEAM: A flurry of discussion led to growing consensus – the crux of the issue was not the model, it was the user.

The original Kassel Hand’s wearer would have learned to use their prosthesis through practice. Likewise, our team had learned to use the model over time. Through the process of design and development, prototyping and printing, we were inadvertently practicing how to operate it.

We needed to teach others to do the same. And this called for a two-pronged approach.

Perspective on using the Kassel Hand, as a modern prosthetist.

The engineers reexamined the opening through which the release trigger poked out of the model. They proposed shortening it to limit how far back users could pull it. When we checked how this change would affect the model’s accuracy, we found that a smaller opening was actually closer to the artifact’s dimensions. While the larger opening had been necessary for an earlier version of the release lever that needed to travel farther, now it only caused problems. The engineers got to work.

The historians, meanwhile, created plans to document and share the various techniques for operating the model that the team hadn’t realized it had honed. To teach someone at home how to operate their own copy, we filmed a short video explaining how to lock and release the fingers and troubleshoot when a finger sticks.

Testing the plan

Exactly one week after what we called “the Birmingham Break,” we shared the model with a general audience again. This time we visited a colleague’s history class at Auburn.

We brought four copies. Each had an insert to shorten the opening around the trigger. First, we played our new instructional video on a projector. Then we turned the models over to the students to try.

Four mechanical hand models on display, each slightly different in design
The team brought these four models with inserts to shorten the opening below the release trigger to test with a general audience of undergraduate and graduate students.
Heidi Hausse, CC BY-SA

The result? Not a single broken lever. We publicly launched the project on schedule.

The process of introducing the Kassel Hand model to the public highlights that just as the 16th-century amputee who wore the artifact had to learn to use it, one must learn to use the 3D-printed model, too.

It is a potent reminder that technology is not just a matter of mechanisms and design. It is fundamentally about people – and how people use it.

The Conversation

Heidi Hausse received funding from the Herzog August Bibliothek; the Consortium for History of Science, Technology and Medicine; the American Council of Learned Societies; the Huntington Library; the Society of Fellows in the Humanities at Columbia University; and the Renaissance Society of America.

Peden Jones received funding from the Renaissance Society of America.

ref. 3D-printed model of a 500-year-old prosthetic hand hints at life of a Renaissance amputee – https://theconversation.com/3d-printed-model-of-a-500-year-old-prosthetic-hand-hints-at-life-of-a-renaissance-amputee-256670

To spur the construction of affordable, resilient homes, the future is concrete

Source: – By Pablo Moyano Fernández, Assistant Professor of Architecture, Washington University in St. Louis

A modular, precast system of concrete ‘rings’ can be connected in different ways to build a range of models of energy-efficient homes. Pablo Moyano Fernández, CC BY-SA

Wood is, by far, the most common material used in the U.S. for single-family home construction.

But wood construction isn’t engineered for long-term durability, and it often underperforms, particularly in the face of increasingly common extreme weather events.

In response to these challenges, I believe mass-produced concrete homes can offer affordable, resilient housing in the U.S. By leveraging the latest innovations of the precast concrete industry, this type of homebuilding can meet the needs of a changing world.

Wood’s rise to power

Over 90% of the new homes built in the U.S. rely on wood framing.

Wood has deep historical roots as a building material in the U.S., dating back to the earliest European settlers who constructed shelters using the abundant native timber. One of the most recognizable typologies was the log cabin, built from large tree trunks notched at the corners for structural stability.

A mother holds her child in the front doorway of their log cabin home.
Log cabins were popular in the U.S. during the 18th and 19th centuries.
Heritage Art/Heritage Images via Getty Images

In the 1830s, wood construction underwent a significant shift with the introduction of balloon framing. This system used standardized, sawed lumber and mass-produced nails, allowing much smaller wood components to replace the earlier heavy timber frames. It could be assembled by unskilled labor using simple tools, making it both accessible and economical.

In the early 20th century, balloon framing evolved into platform framing, which became the dominant method. By using shorter lumber lengths, platform framing allowed each floor to be built as a separate working platform, simplifying construction and improving its efficiency.

The proliferation and evolution of wood construction helped shape the architectural and cultural identity of the nation. For centuries, wood-framed houses have defined the American idea of home – so much so that, even today, when Americans imagine a house, they typically envision one built of wood.

A row of half-constructed homes surrounded by piles of dirt.
A suburban housing development from the 1950s being built with platform framing.
H. Armstrong Roberts/ClassicStock via Getty Images

Today, light-frame wood construction dominates the U.S. residential market.

Wood is relatively affordable and readily available, offering a cost-effective solution for homebuilding. Contractors are familiar with wood construction techniques. In addition, building codes and regulations have long been tailored to wood-frame systems, further reinforcing their prevalence in the housing industry.

Despite its advantages, light-frame wood construction presents several important limitations. Wood is vulnerable to fire. And in hurricane- and tornado-prone regions, wood-framed homes can be damaged or destroyed.

Wood is also highly susceptible to water-related issues, such as swelling, warping and structural deterioration caused by leaks or flooding. Vulnerability to termites, mold, rot and mildew further compromises the longevity and safety of wood-framed structures, especially in humid or poorly ventilated environments.

The case for concrete

Meanwhile, concrete has revolutionized architecture and engineering over the past century. In my academic work, I’ve studied the material’s many advantages and have written and taught about them.

The material offers unmatched strength and durability while also allowing design flexibility and versatility. It’s low-cost and low-maintenance, and it has high thermal mass, meaning it can absorb and store heat during the day and slowly release it during the cooler nights. This can lower heating and cooling costs.
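To put rough numbers on that thermal mass effect, here is a minimal back-of-the-envelope sketch in Python. It compares how much heat a concrete wall and a wood-framed wall of the same thickness can soak up over a 10-degree Celsius day-night swing; the material properties are typical textbook values assumed for illustration, not figures from this article.

    # Sensible heat stored per square meter of wall: Q = m * c * dT.
    # Densities and specific heats are assumed, typical textbook values.

    def heat_stored_per_m2(thickness_m, density_kg_m3, specific_heat_j_per_kgk, delta_t_k):
        mass_per_m2 = thickness_m * density_kg_m3  # kg of wall per square meter of area
        return mass_per_m2 * specific_heat_j_per_kgk * delta_t_k  # joules

    # A 10 C day-night swing across a 0.15-meter-thick wall section.
    concrete = heat_stored_per_m2(0.15, 2400, 880, 10)  # about 3.2 MJ per m^2
    wood = heat_stored_per_m2(0.15, 500, 1700, 10)      # about 1.3 MJ per m^2
    print(f"concrete: {concrete / 1e6:.1f} MJ/m^2, wood: {wood / 1e6:.1f} MJ/m^2")

Under these assumed properties, the concrete wall stores roughly two and a half times as much heat per square meter, which is the buffering that smooths out daily indoor temperature swings.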

Properly designed concrete enclosures offer exceptional performance against a wide range of hazards. Concrete can withstand fire, flooding, mold, insect infestation, earthquakes, hail, hurricanes and tornadoes.

It’s commonly used for home construction in many parts of the world, including Europe, Japan, Mexico, Brazil and Argentina, as well as India and parts of Southeast Asia.

However, despite their multiple benefits, concrete single-family homes are rare in the U.S.

That’s because most concrete structures are built using a process called cast-in-place. In this technique, the concrete is formed and poured directly at the construction site. The method relies on built-in-place molds. After the concrete is cast and cured over several days, the formwork is removed.

This process is labor-intensive and time-consuming, and it often produces considerable waste. This is particularly an issue in the U.S., where labor is more expensive than in other parts of the world. Together, these material and labor costs can account for 35% to 60% of the total construction cost.

Portland cement, the binding agent in concrete, requires significant energy to produce, resulting in considerable carbon dioxide emissions. However, this environmental cost is often offset by concrete’s durability and long service life.

Concrete’s design flexibility and structural integrity make it particularly effective for large-scale structures. So in the U.S., you’ll see it used for large commercial buildings, skyscrapers and most highways, bridges, dams and other critical infrastructure projects.

But when it comes to single-family homes, cast-in-place concrete poses challenges to contractors. There are the higher initial construction costs, along with a lack of subcontractor expertise. For these reasons, most builders and contractors stick with what they know: the wood frame.

A new model for home construction

Precast concrete, however, offers a promising alternative.

Unlike cast-in-place concrete, precast systems allow for off-site manufacturing under controlled conditions. This improves the quality of the structure, while also reducing waste and labor.

The CRETE House, a prototype I worked on in 2017 alongside a team at Washington University in St. Louis, showed the advantages of precast home construction.

To build the precast concrete home, we used ultra-high-performance concrete, one of the latest advances in the concrete industry. Compared with conventional concrete, it’s about six times stronger, virtually impermeable and more resistant to freeze-thaw cycles. Ultra-high-performance concrete can last several hundred years.

The strength of the CRETE House was tested by shooting a piece of wood at 120 mph (193 kph) to simulate flying debris from an F5 tornado. The debris was unable to breach the wall, which was only 2 inches (5.1 centimeters) thick.

The wall of the CRETE House was able to withstand a piece of wood fired at 120 mph (193 kph).
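For a sense of scale, the energy involved in that test can be estimated with the standard kinetic energy formula, KE = mv²/2. The 120 mph speed is from the test described above; the projectile mass is an assumption, since the article doesn’t give one (a 15-pound, or 6.8 kg, piece of lumber is a common standard in windborne-debris testing).

    # Rough kinetic-energy estimate for the debris impact test.
    # The 6.8 kg projectile mass is an assumption; 120 mph is from the article.

    def kinetic_energy_joules(mass_kg, speed_m_per_s):
        return 0.5 * mass_kg * speed_m_per_s ** 2

    speed = 120 * 0.44704  # 120 mph converted to meters per second (~53.6)
    print(f"{kinetic_energy_joules(6.8, speed):,.0f} J")  # roughly 9,800 joules

That is on the order of 10 kilojoules stopped by a panel just 2 inches thick.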

Building on the success of the CRETE House, I designed the Compact House as a solution for affordable, resilient housing. The house consists of a modular, precast concrete system of “rings” that can be connected to form the entire structure – floors, walls and roofs – creating airtight, energy-efficient homes. A series of different rings can be chosen from a catalog to deliver different models that range in size from 270 to 990 square feet (25 to 92 square meters).
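The catalog idea can be illustrated with a short sketch. The ring names and per-ring floor areas below are hypothetical, chosen only so that a four-ring model lands at the 270-square-foot minimum mentioned above; the actual Compact House catalog may differ.

    # Illustrative sketch of assembling a model from a catalog of rings.
    # Ring names and areas are hypothetical, not from the actual catalog.

    RING_CATALOG_SQFT = {"end_ring": 45, "standard_ring": 90, "wet_core_ring": 90}

    def model_area_sqft(rings):
        # Total floor area contributed by the chosen sequence of rings.
        return sum(RING_CATALOG_SQFT[name] for name in rings)

    small_model = ["end_ring", "standard_ring", "standard_ring", "end_ring"]
    print(model_area_sqft(small_model), "sq ft")  # 270 under these assumptions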

The precast rings can be transported on flatbed trailers and assembled into a unit in a single day, drastically reducing on-site labor, time and cost.

Since the rings are cast in durable, reusable molds, the houses can easily be mass-produced. When precast concrete homes are mass-produced, the cost can be competitive with traditional wood-framed homes. Furthermore, the homes are designed to last far beyond 100 years – much longer than typical wood structures – while significantly lowering utility bills, maintenance expenses and insurance premiums.

The project is also envisioned as an open-source design. This means that the molds – which are expensive – are available for any precast producer to use and modify.

A computer graphic showing a prototype of a small, concrete home.
The Compact House is made using ultra-high-performance concrete.
Pablo Moyano Fernández, CC BY-SA

Leveraging a network that’s already in place

Two key limitations of precast concrete construction are the size and weight of the components and the distance to the project site.

Precast elements must comply with standard transportation regulations, which impose restrictions on both size and weight in order to pass under bridges and prevent road damage. As a result, components are typically limited to dimensions that can be safely and legally transported by truck. Each of the Compact House’s pieces is small enough to be transported on standard trailers.

Additionally, transportation costs become a major factor beyond a certain range. In general, the practical delivery radius from a precast plant to a construction site is 500 miles (805 kilometers). Anything beyond that becomes economically unfeasible.
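Taken together, these constraints amount to a simple feasibility check. The sketch below combines the 500-mile radius from this article with assumed size and weight limits that are typical for truck loads avoiding special permits; the limits and the example component are illustrative, not specifications from the Compact House project.

    # Hypothetical delivery check for a precast component.
    # MAX_RADIUS_MILES comes from the article; all other limits are assumed.

    MAX_WIDTH_FT, MAX_HEIGHT_FT, MAX_LENGTH_FT = 8.5, 13.5, 53.0
    MAX_PAYLOAD_LB = 48_000   # rough flatbed payload limit, assumed
    MAX_RADIUS_MILES = 500    # practical delivery radius cited above

    def deliverable(width_ft, height_ft, length_ft, weight_lb, miles):
        fits = (width_ft <= MAX_WIDTH_FT and height_ft <= MAX_HEIGHT_FT
                and length_ft <= MAX_LENGTH_FT and weight_lb <= MAX_PAYLOAD_LB)
        return fits and miles <= MAX_RADIUS_MILES

    # A hypothetical ring 8 ft wide, 9 ft tall, 14 ft long, weighing 20,000 lb,
    # delivered to a site 300 miles from the plant.
    print(deliverable(8, 9, 14, 20_000, 300))  # True under these assumptions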

However, the infrastructure to build precast concrete homes is already largely in place. Since precast concrete is often used for office buildings, schools, parking complexes and large apartment buildings, there’s already an extensive national network of manufacturing plants capable of producing and delivering components within that 500-mile radius.

There are other approaches to building homes with concrete. Homes can use concrete masonry units, which are similar to cinder blocks; this is a common technique around the world. Insulated concrete forms involve rigid foam blocks that are stacked like Lego bricks and then filled with poured concrete, creating a structure with built-in insulation. And there’s 3D-printed concrete, a rapidly evolving technology still in its early stages of development.

However, none of these approaches uses precast concrete modules – the rings in my prototypes – and all of them require substantially more on-site time and labor.

To me, precast concrete homes offer a compelling vision for the future of affordable housing. They signal a generational shift away from short-term construction and toward long-term value – redefining what it means to build for resilience, efficiency and equity in housing.

A bird's-eye view of a computer-generated neighborhood featuring plots of land with multiple concrete homes located on them.
An image of North St. Louis, taken from Google Earth, showing how vacant land can be repurposed using precast concrete homes.
Pablo Moyano Fernández, CC BY-SA

This article is part of a series centered on envisioning ways to deal with the housing crisis.

The Conversation

Pablo Moyano Fernández does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

ref. To spur the construction of affordable, resilient homes, the future is concrete – https://theconversation.com/to-spur-the-construction-of-affordable-resilient-homes-the-future-is-concrete-254561

Supreme Court rules that states may deny people covered by Medicaid the freedom to choose Planned Parenthood as their health care provider

Source: – By Naomi Cahn, Professor of Law, University of Virginia

Abortion-rights demonstrators holds a sign in front of the Supreme Court building in Washington as the Medina v. Planned Parenthood South Atlantic case is heard on April 2, 2025. Tom Williams/CQ-Roll Call via Getty Images

Having the freedom to choose your own health care provider is something many Americans take for granted. But the U.S. Supreme Court’s conservative supermajority ruled on June 25, 2025, in a 6-3 decision that people who rely on Medicaid for their health insurance don’t have that right.

The case, Medina v. Planned Parenthood South Atlantic, is focused on a technical legal issue: whether people covered by Medicaid have the right to sue state officials for preventing them from choosing their health care provider. In his majority opinion, Justice Neil Gorsuch wrote that they don’t because the Medicaid statute did not “clearly and unambiguously” give individuals that right.

As law professors who teach courses about health and poverty law as well as reproductive justice, we think this ruling could restrict access to health care for the more than 78 million Americans who get their health insurance coverage through the Medicaid program.

Excluding Planned Parenthood

The case started with a predicament for South Carolina resident Julie Edwards, who is enrolled in Medicaid. After Edwards struggled to get contraceptive services, she was able to receive care from a Planned Parenthood South Atlantic clinic in Columbia, South Carolina.

Planned Parenthood, an array of nonprofits with roots that date back more than a century, is among the nation’s top providers of reproductive services. It operates two clinics in South Carolina, where patients can get physical exams, cancer screenings, contraception and other services. It also provides same-day appointments and keeps long hours.

In July 2018, however, South Carolina Gov. Henry McMaster issued an executive order that barred Medicaid reimbursement for health care providers in the state that offer abortion care.

That meant Planned Parenthood, a longtime target of conservatives’ ire, would no longer be reimbursed for any type of care for Medicaid patients, preventing Edwards from transferring all her gynecological care to that office as she had hoped to do.

Planned Parenthood and Edwards sued South Carolina. They argued that the state was violating the federal Medicare and Medicaid Act, which Congress passed in 1965, by not letting Edwards obtain care from the provider of her choice.

A ‘free-choice-of-provider’ requirement

Medicaid, which mainly covers low-income people, their children and people with disabilities, operates as a partnership between the federal government and the states. Congress passed the law that led to its creation based on its power under the Constitution’s spending clause, which allows Congress to subject federal funds to certain requirements.

Two years later, due to concerns that states were restricting which providers Medicaid recipients could choose, Congress added a “free-choice-of-provider” requirement to the program. It states that people enrolled in Medicaid “may obtain such assistance from any institution, agency, community pharmacy, or person, qualified to perform the service or services required.”

While the Medicaid statute does not, by itself, allow people enrolled in that program to enforce this free-choice clause, the question at the core of this case was whether another federal statute, known as Section 1983, did give them a right to sue.

The Supreme Court has long recognized that Section 1983 protects an individual’s ability to sue when their rights under a federal statute have been violated. In fact, in 2023, it found such a right under the Medicaid Nursing Home Reform Act. The court held that Section 1983 confers the right to sue when a statute’s provisions “unambiguously confer individual federal rights.”

In Medina, however, the court found that there was no right to sue. Instead, the court emphasized that “the typical remedy” is for the federal government to cut off Medicaid funds to a state if a state is not complying with the Medicaid statute.

The ruling overturned lower-court decisions in favor of Edwards. It also expressly rejected the Supreme Court’s earlier rulings, which the majority criticized as taking a more “expansive view of its power to imply private causes of action to enforce federal laws.”

Planned Parenthood signage is displayed outside a health care clinic.
Planned Parenthood clinics, like this one in Los Angeles, are located across the United States.
Patrick T. Fallon/AFP via Getty Images

Restricting Medicaid funds

This dispute is just one chapter in the long fight over access to abortion in the U.S. In addition to the question of whether it should be legal, proponents and opponents of abortion rights have battled over whether the government should pay for it – even if that funding happens indirectly.

Through a federal law known as the Hyde Amendment, Medicaid cannot reimburse health care providers for the cost of abortions, with a few exceptions: when a patient’s life is at risk, or her pregnancy is due to rape or incest. Some states do cover abortion when their laws allow it, without using any federal funds.

As a result, Planned Parenthood rarely gets any federal Medicaid funds for abortions.

McMaster explained that he removed “abortion clinics,” including Planned Parenthood, from the South Carolina Medicaid program because he didn’t want state funds to indirectly subsidize abortions.

After the Supreme Court ruled on this case, McMaster said he had taken “a stand to protect the sanctity of life and defend South Carolina’s authority and values – and today, we are finally victorious.”

But only about 4% of Planned Parenthood’s services nationwide were related to abortion, as of 2022. Its most common service is testing for sexually transmitted diseases. Across the nation, Planned Parenthood provides health care to more than 2 million patients per year, most of whom have low incomes.

Man in suit speaks into a microphone, flanked by other people who are standing in front of a building surrounded by scaffolding.
South Carolina Gov. Henry McMaster stands outside the Supreme Court building in Washington in April 2025 and speaks about this case.
Kayla Bartkowski/Getty Images

Consequences beyond South Carolina

This ruling’s consequences are not limited to Medicaid access in South Carolina.

It may make it harder for individuals to use Section 1983 to bring claims under any federal statute. As Justice Ketanji Brown Jackson, joined by Justices Sonia Sotomayor and Elena Kagan, wrote in her dissent, the court “continues the project of stymying one of the country’s great civil rights laws.”

Enacted in 1871, the civil rights law has been invoked to challenge state officials’ violations of individuals’ rights. Jackson wrote that the court will now allow Section 1983 to be used to vindicate personal rights only if a statute uses the correct “magic words.”

The dissent also criticized the majority decision as likely “to result in tangible harm to real people.” Not only will it potentially deprive “Medicaid recipients in South Carolina of their only meaningful way of enforcing a right that Congress has expressly granted to them,” Jackson wrote, but it could also “strip those South Carolinians – and countless other Medicaid recipients around the country – of a deeply personal freedom: the ‘ability to decide who treats us at our most vulnerable.’”

The decision could also have far-reaching consequences. Arkansas, Missouri and Texas have already barred Planned Parenthood from getting reimbursed by Medicaid for any kind of health care. More states could follow suit.

In addition, given Planned Parenthood’s role in providing contraceptive care, disqualifying it from Medicaid could restrict access to health care and increase the already-high unintended pregnancy rate in America.

States could also try to exclude providers based on other characteristics, such as whether their employees belong to unions or if they provide their patients with gender-affirming care, further restricting patients’ choices.

With this ruling, the court is allowing a patchwork of state exclusions of Planned Parenthood and other medical providers from the Medicaid program that could soon resemble the patchwork already seen with abortion access.

Portions of this article first appeared in another article published on April 2, 2025.

The Conversation

The authors do not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and have disclosed no relevant affiliations beyond their academic appointment.

ref. Supreme Court rules that states may deny people covered by Medicaid the freedom to choose Planned Parenthood as their health care provider – https://theconversation.com/supreme-court-rules-that-states-may-deny-people-covered-by-medicaid-the-freedom-to-choose-planned-parenthood-as-their-health-care-provider-259953

Toxic algae blooms are lasting longer in Lake Erie − why that’s a worry for people and pets

Source: – By Gregory J. Dick, Professor of Biology, University of Michigan

A satellite image from Aug. 13, 2024, shows an algal bloom covering approximately 320 square miles (830 square km) of Lake Erie. By Aug. 22, it had nearly doubled in size. NASA Earth Observatory

Federal scientists released their annual forecast for Lake Erie’s harmful algal blooms on June 26, 2025, and they expect a mild to moderate season. However, anyone who comes in contact with the blooms can face health risks, and it’s worth remembering that 2014, when toxins from algae blooms contaminated the water supply in Toledo, Ohio, was considered a moderate year, too.

We asked Gregory J. Dick, who leads the Cooperative Institute for Great Lakes Research, a federally funded center at the University of Michigan that studies harmful algal blooms among other Great Lakes issues, why they’re such a concern.

A bar chart shows 2025's forecast to be more severe than 2023 but less than 2024.
The National Oceanic and Atmospheric Administration’s prediction for harmful algal bloom severity in Lake Erie compared with past years.
NOAA

1. What causes harmful algal blooms?

Harmful algal blooms are dense patches of excessive algae growth that can occur in any type of water body, including ponds, reservoirs, rivers, lakes and oceans. When you see them in freshwater, you’re typically seeing cyanobacteria, also known as blue-green algae.

These photosynthetic bacteria have inhabited our planet for billions of years. In fact, they were responsible for oxygenating Earth’s atmosphere, which enabled plant and animal life as we know it.

An illustration of algae bloom sources shows a farm field, city and large body of water.
The leading source of harmful algal blooms today is nutrient runoff from fertilized farm fields.
Michigan Sea Grant

Algae are natural components of ecosystems, but they cause trouble when they proliferate to high densities, creating what we call blooms.

Harmful algal blooms form scums at the water surface and produce toxins that can harm ecosystems, water quality and human health. They have been reported in all 50 U.S. states, all five Great Lakes and nearly every country around the world. Blue-green algae blooms are becoming more common in inland waters.

The main sources of harmful algal blooms are excess nutrients in the water, typically phosphorus and nitrogen.

Historically, these excess nutrients mainly came from sewage and phosphorus-based detergents used in laundry machines and dishwashers that ended up in waterways. U.S. environmental laws in the early 1970s addressed this by requiring sewage treatment and banning phosphorus detergents, with spectacular success.

How pollution affected Lake Erie in the 1960s, before clean water regulations.

Today, agriculture is the main source of excess nutrients from chemical fertilizer or manure applied to farm fields to grow crops. Rainstorms wash these nutrients into streams and rivers that deliver them to lakes and coastal areas, where they fertilize algal blooms. In the U.S., most of these nutrients come from industrial-scale corn production, which is largely used as animal feed or to produce ethanol for gasoline.

Climate change also exacerbates the problem in two ways. First, cyanobacteria grow faster at higher temperatures. Second, climate-driven increases in precipitation, especially large storms, cause more nutrient runoff that has led to record-setting blooms.

2. What does your team’s DNA testing tell us about Lake Erie’s harmful algal blooms?

Harmful algal blooms contain a mixture of cyanobacterial species that can produce an array of different toxins, many of which are still being discovered.

When my colleagues and I recently sequenced DNA from Lake Erie water, we found new types of microcystins, the notorious toxins that were responsible for contaminating Toledo’s drinking water supply in 2014.

These novel molecules cannot be detected with traditional methods and show some signs of causing toxicity, though further studies are needed to confirm their human health effects.

A young woman and dog walk along a shoreline with blue-green algae in the water.
Blue-green algae blooms in freshwater, like this one near Toledo in 2014, can be harmful to humans, causing gastrointestinal symptoms, headache, fever and skin irritation. They can be lethal for pets.
Ty Wright for The Washington Post via Getty Images

We also found organisms responsible for producing saxitoxin, a potent neurotoxin that is well known for causing paralytic shellfish poisoning on the Pacific Coast of North America and elsewhere.

Saxitoxins have been detected at low concentrations in the Great Lakes for some time, but the recent discovery of hot spots of genes that make the toxin makes them an emerging concern.

Our research suggests warmer water temperatures could boost its production, which raises concerns that saxitoxin will become more prevalent with climate change. However, the controls on toxin production are complex, and more research is needed to test this hypothesis. Federal monitoring programs are essential for tracking and understanding emerging threats.

3. Should people worry about these blooms?

Harmful algal blooms are unsightly and smelly, making them a concern for recreation, property values and businesses. They can disrupt food webs and harm aquatic life, though a recent study suggested that their effects on the Lake Erie food web so far are not substantial.

But the biggest impact is from the toxins these algae produce that are harmful to humans and lethal to pets.

The toxins can cause acute health problems such as gastrointestinal symptoms, headache, fever and skin irritation. Dogs can die from ingesting lake water with harmful algal blooms. Emerging science suggests that long-term exposure to harmful algal blooms, for example over months or years, can cause or exacerbate chronic respiratory, cardiovascular and gastrointestinal problems and may be linked to liver cancers, kidney disease and neurological issues.

A large round structure offshore is surrounded by blue-green algae.
The water intake system for the city of Toledo, Ohio, is surrounded by an algae bloom in 2014. Toxic algae got into the water system, resulting in residents being warned not to touch or drink their tap water for three days.
AP Photo/Haraz N. Ghanbari

In addition to exposure through direct ingestion or skin contact, recent research also indicates that inhaling toxins that get into the air may harm health, raising concerns for coastal residents and boaters, but more research is needed to understand the risks.

The Toledo drinking water crisis of 2014 illustrated the vast potential for algal blooms to cause harm in the Great Lakes. Toxins infiltrated the drinking water system and were detected in processed municipal water, resulting in a three-day “do not drink” advisory. The episode affected residents, hospitals and businesses, and it ultimately cost the city an estimated US$65 million.

4. Blooms seem to be starting earlier in the year and lasting longer – why is that happening?

Warmer waters are extending the duration of the blooms.

In 2025, NOAA detected bloom toxins in Lake Erie on April 28, earlier than ever before. The 2022 bloom in Lake Erie persisted into November, which is rare if not unprecedented.

Scientific studies of western Lake Erie show that the potential cyanobacterial growth rate has increased by up to 30% and the length of the bloom season has expanded by up to a month from 1995 to 2022, especially in warmer, shallow waters. These results are consistent with our understanding of cyanobacterial physiology: Blooms like it hot – cyanobacteria grow faster at higher temperatures.
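One common way to model that temperature sensitivity is a Q10 rule, in which the growth rate multiplies by a fixed factor for every 10 degrees Celsius of warming. The sketch below is illustrative only: the Q10 value of 2 and the reference growth rate are common assumptions for cyanobacteria, not figures from the studies cited above.

    # Minimal Q10 growth model: rate(T) = ref_rate * q10 ** ((T - T_ref) / 10).
    # The reference rate and q10 are assumed values, not from the article.

    def growth_rate_per_day(temp_c, ref_rate=0.3, ref_temp_c=20.0, q10=2.0):
        return ref_rate * q10 ** ((temp_c - ref_temp_c) / 10.0)

    # Under these assumptions, 2 C of warming boosts growth by about 15%.
    for t in (20.0, 22.0, 25.0):
        print(f"{t:.0f} C: {growth_rate_per_day(t):.3f} per day")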

5. What can be done to reduce the likelihood of algal blooms in the future?

The best and perhaps only hope of reducing the size and occurrence of harmful algal blooms is to reduce the amount of nutrients reaching the Great Lakes.

In Lake Erie, where nutrients come primarily from agriculture, that means improving agricultural practices and restoring wetlands to reduce the amount of nutrients flowing off of farm fields and into the lake. Early indications suggest that Ohio’s H2Ohio program, which works with farmers to reduce runoff, is making some gains in this regard, but future funding for H2Ohio is uncertain.

In places like Lake Superior, where harmful algal blooms appear to be driven by climate change, the solution likely requires halting and reversing the rapid human-driven increase in greenhouse gases in the atmosphere.

The Conversation

Gregory J. Dick receives funding for harmful algal bloom research from the National Oceanic and Atmospheric Administration, the National Science Foundation, the United States Geological Survey, and the National Institutes of Health. He serves on the Science Advisory Council for the Environmental Law and Policy Center.

ref. Toxic algae blooms are lasting longer in Lake Erie − why that’s a worry for people and pets – https://theconversation.com/toxic-algae-blooms-are-lasting-longer-in-lake-erie-why-thats-a-worry-for-people-and-pets-259954