Blocking exports and raising tariffs is a bad defense against industrial cyber espionage, study shows

Source: By William Akoto, Assistant Professor of Global Security, American University

Cutting off China’s access to advanced U.S. chips is likely to motivate Chinese cyber espionage. kritsapong jieantaratip/iStock via Getty Images

The United States is trying to decouple its economy from rivals like China. Efforts toward this include policymakers raising tariffs on Chinese goods, blocking exports of advanced technology and offering subsidies to boost American manufacturing. The goal is to reduce reliance on China for critical products in hopes that this will also protect U.S. intellectual property from theft.

The idea that decoupling will help stem state-sponsored cyber-economic espionage has become a key justification for these measures. For instance, then-U.S. Trade Representative Katherine Tai framed the continuation of China-specific tariffs as serving the “statutory goal to stop [China’s] harmful … cyber intrusions and cyber theft.” Early tariff rounds during the first Trump administration were likewise framed as forcing Beijing to confront “deeply entrenched” theft of U.S. intellectual property.

This push to “onshore” key industries is driven by very real concerns. By some estimates, theft of U.S. trade secrets, often through hacking, costs the American economy hundreds of billions of dollars per year. In that light, decoupling is a defensive economic shield – a way to keep vital technology out of an adversary’s reach.

But will decoupling and cutting trade ties truly make America’s innovations safer from prying eyes? I’m a political scientist who studies state-sponsored cyber espionage, and my research suggests that the answer is a definitive no. Indeed, it might actually have the opposite effect.

To understand why, it helps to look at what really drives state-sponsored hacking.

Rivalry, not reliance

Intuitively, you might think a country is most tempted to steal secrets from a nation it depends on. For example, if Country A must import jet engines or microchips from Country B, Country A might try to hack Country B’s companies to copy that technology and become self-sufficient. This is the industrial dependence theory of cyber theft.

There is some truth to this motive. If your economy needs what another country produces, stealing that know-how can boost your own industries and reduce reliance. However, in a recent study, I show that a more powerful predictor of cyber espionage is industrial similarity. Countries with overlapping advanced industries such as aerospace, electronics or pharmaceuticals are the ones most likely to target each other with cyberattacks.

Why would having similar industries spur more spying? The reason is competition. If two nations both specialize in cutting-edge sectors, each has a lot to gain by stealing the other’s innovations.

If you’re a tech powerhouse, you have valuable secrets worth stealing, and you have the capability and motivation to steal others’ secrets. In essence, simply trading with a rival isn’t the core issue. Rather, it’s the underlying technological rivalry that fuels espionage.

For example, a cyberattack in 2012 targeted SolarWorld, a U.S. solar panel manufacturer, and the perpetrators stole the company’s trade secrets. Chinese solar companies then developed competing products based on the stolen designs, costing SolarWorld millions in lost revenue. This is a classic example of industrial similarity at work. China was building its own solar industry, so it hacked a U.S. rival to leapfrog in technology.

China has made major investments in its cyber-espionage capabilities.

Boosting trade barriers can fan the flames

Crucially, cutting trade ties doesn’t remove this rivalry. If anything, decoupling might intensify it. When the U.S. and China exchange tariff blows or cut off tech transfers, it doesn’t make China give up – it likely pushes Chinese intelligence agencies to work even harder to steal what they can’t buy.

This dynamic isn’t unique to China. Any country that suddenly loses access to an important technology may turn to espionage as Plan B.

History provides examples. When South Africa was isolated by sanctions in the 1980s, it covertly obtained nuclear weapons technology. Similarly, when Israel faced arms embargoes in the 1960s, it engaged in clandestine efforts to get military technology. Isolation can breed desperation, and hacking is a low-cost, high-reward tool for the desperate.

If decoupling won’t end cyber espionage, what will?

There’s no easy fix for state-sponsored hacking as long as countries remain locked in high-tech competition. However, there are steps that can mitigate the damage and perhaps dial down the frequency of these attacks.

One is investing in cyber defense. Just as a homeowner adds locks and alarms after a burglary, companies and governments should continually strengthen their cyber defenses. The key is to assume that espionage attempts will happen. Advanced network monitoring, employee training against phishing, and robust encryption can make it much harder for hackers to succeed, even if they keep trying.

Another is building resilience and redundancy. If you know that some secrets might get stolen, plan for it. Businesses can shorten product development cycles and innovate faster so that even if a rival copies today’s tech, you’re already moving on to the next generation. Staying ahead of thieves is a form of defense, too.

Ultimately, rather than viewing tariffs and export bans as silver bullets against espionage, U.S. leaders and industry might be better served by focusing on resilience and continually stress-testing their cyber defenses. Make it harder for adversaries to steal secrets, and less rewarding even if they do.

The Conversation

William Akoto does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

ref. Blocking exports and raising tariffs is a bad defense against industrial cyber espionage, study shows – https://theconversation.com/blocking-exports-and-raising-tariffs-is-a-bad-defense-against-industrial-cyber-espionage-study-shows-258243

What is reconciliation − the legislative shortcut Republicans are using to push through their ‘big, beautiful bill’?

Source: By Linda J. Bilmes, Daniel Patrick Moynihan Senior Lecturer in Public Policy and Public Finance, Harvard Kennedy School

Senate Majority Leader John Thune speaks with reporters about the reconciliation process to advance President Donald Trump’s spending and tax bill on June 3, 2025. AP Photo/J. Scott Applewhite

The word “reconciliation” sounds benign, even harmonious.

But in Washington, D.C., reconciliation refers to a potent legislative shortcut that allows the party in power to bypass the opposition and enact sweeping changes to taxes and spending with a simple majority vote. Democrats used the process to pass the Inflation Reduction Act in 2022. Reconciliation helped Republicans pass large tax cuts in 2017.

Reconciliation is also at the heart of the current budget debate, as Senate Republicans rush to advance their version of the “One Big Beautiful Bill Act,” also known by its acronym OBBBA, which passed the House in May 2025.

I served as assistant secretary of Commerce for management and budget during the Clinton administration, when my colleagues and I helped forge bipartisan legislation that balanced the federal budget and produced surpluses over four years, from 1998 to 2001. We were even able to pay off some debt.

But since 2001, the country’s fiscal situation has deteriorated significantly. And the reconciliation process has strayed from its original purpose as a mechanism to promote sound fiscal policy. Instead, it is now used to pass partisan legislation, often without regard to its economic impact on future generations of Americans.

Reconciliation 101

The reconciliation process was created by the Congressional Budget Act of 1974, which was overwhelmingly supported by both parties. It was designed to align policy goals with budget targets to help rein in deficits.

The rules specify that a bill using the reconciliation process must pertain directly to budgetary or fiscal matters; it cannot change Social Security, Medicare or the budget process itself, and it cannot deliberately extend deficits beyond a 10-year window. As part of the process, the parliamentarian goes through each element of the bill and determines whether it meets the requirements, removing any elements that don’t.

This caused the One Big Beautiful Bill Act to hit a snag in the Senate on June 25, 2025, after the parliamentarian ruled several major parts of it couldn’t be included as written, such as a crackdown on states’ maneuvers to get more Medicaid funds and a limit on student debt repayment options.

In the Senate, reconciliation has special procedural advantages. Debate is limited to 20 hours. Conveniently for the party in power, the final bill can pass with a simple majority of 51 votes. This avoids the usual 60-vote threshold needed to overcome a filibuster.

Over its 50-year history, 23 reconciliation bills have become law.

Reconciliation on rise as budget process breaks down

Over time, reconciliation has become the dominant method for enacting major tax and spending legislation, as the regular congressional budget process has broken down.

Since 1974, there have been multiple government shutdowns, near-shutdowns and short-term, stopgap “continuing resolutions” instead of annual budgets, accompanied by rising deficits and national debt.

With few other tools at its disposal, Congress has used reconciliation to push through many pieces of major economic legislation, including the 2001 and 2003 tax cuts under President George W. Bush, the 2017 tax cuts during President Donald Trump’s first term, and the American Rescue Plan in 2021 and the Inflation Reduction Act in 2022 during the Biden administration.

However, reconciliation has significant flaws. Because debate is limited, senators often vote on bills over 1,000 pages long with little time to review the details. And once tax cuts are enacted under reconciliation, it is devilishly hard to get rid of them.

Given the compressed timelines and lack of transparency inherent in such huge, messy spending bills, it is fairly easy for lawmakers to slip in earmarks, tax loopholes and other extraneous items that don’t get removed by the parliamentarian.

House Minority Leader Hakeem Jeffries argues Republicans’ spending and tax bill will ‘explode the deficit.’
AP Photo/J. Scott Applewhite

What’s in the bill?

At the heart of the One Big Beautiful Bill Act, passed by the House, is an extension of President Trump’s tax cuts from his first term, which, under the procedural rules for reconciliation, would otherwise expire at the end of 2025.

But it also includes multiple new tax cuts – such as an end to taxes on overtime and tips and lower estate taxes – introduces new Medicaid work requirements and repeals various energy credits. In line with the Trump administration’s policies, the bill slashes federal funding for education, Medicaid, public housing, environmental programs, scientific research and some national park and public land protection programs. It also boosts defense spending.

The bill would sharply worsen the nation’s fiscal outlook, according to analyses by the nonpartisan Congressional Budget Office and other organizations.

Currently, the national debt exceeds US$36 trillion, according to the U.S. Treasury, and net interest payments account for some 16% of federal revenue, based on the Congressional Budget Office’s projections for 2025.

In its analysis, the Congressional Budget Office – which was also created by the 1974 act – said the House-passed version would increase deficits by more than $3.1 trillion over the next decade. The overwhelming share of this cost comes from the permanent extension of individual tax cuts initially enacted in 2017.

According to the Congressional Budget Office’s analysis, by 2035 households earning at least $1 million would receive an average annual tax cut of about $45,000. Most middle- and lower-income households would receive a cut of less than $500 per year, if anything.

The costs of reconciliation

A number of Senate Republicans have questioned some aspects of the reconciliation package. Since they hold only a 53-47 majority, and with all Democrats expected to vote “no,” they need to use reconciliation to pass their version.

Although it differs from the House version in many ways, the Senate version still favors tax cuts for high-income households and large corporations.

Senate Republicans also employ a flawed accounting gimmick to minimize the bill’s apparent cost: They assume the 2017 Trump tax cuts, which are set to expire, have already been extended and embed that assumption into the budget baseline.

This makes extending the tax cuts appear costless, even though it would grow the debt substantially. The move violates normal scorekeeping conventions and misleads the public. Honest accounting would show that the Senate plan would add to the debt about $500 billion more than the House version.
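The arithmetic behind this baseline gimmick can be sketched in a few lines. This is an illustrative toy, not CBO methodology; the $3.8 trillion figure is one estimate cited for the Senate bill, and the zero-cost baseline values are hypothetical round numbers.

```python
# Illustrative sketch of how the choice of budget baseline changes the
# "scored" cost of extending a tax cut. Numbers are simplified for clarity.

def scored_cost(policy_cost: float, baseline_cost: float) -> float:
    """A provision's score is its cost relative to the assumed baseline."""
    return policy_cost - baseline_cost

# Hypothetical 10-year cost of extending the expiring cuts: $3.8 trillion.
extension_cost = 3.8  # trillions of dollars

# Current-law baseline: the cuts expire as written, so the baseline assumes $0.
print(scored_cost(extension_cost, baseline_cost=0.0))  # scores as 3.8

# Current-policy baseline: the extension is assumed to have already happened,
# so it is baked into the baseline and the identical policy scores as free.
print(scored_cost(extension_cost, baseline_cost=extension_cost))  # scores as 0.0
```

The policy and its real cost to the Treasury are the same in both cases; only the yardstick moves.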

Abusing the process

Lots of wrangling and changes are expected before the Senate is able to pass its version. After that, the House and Senate will need to resolve their differences in a conference committee of Republicans from each house of Congress.

Once they agree on a final version, each house votes again – and the Senate version will still need to meet the terms of reconciliation in order to pass with a majority vote. President Trump is pressuring Congress to deliver the bill to his desk before the July Fourth holiday.

In my view, while reconciliation remains a powerful budgetary tool, its current use represents a fundamental inversion of its original purpose. Americans deserve an honest debate about trade-offs, rather than more debt in disguise. Some estimates of the fiscal impact of the Senate’s version of the bill are as high as $3.8 trillion over a decade. Simply waving a magic accounting wand won’t make them go away.

This article was updated to include a Senate parliamentarian ruling about several provisions of the Republican bill.

The Conversation

Linda J. Bilmes served as Deputy Assistant Secretary of the US Department of Commerce from 1997-1998 and as CFO and Assistant Secretary for Management, Budget and Administration from 1999-2001.

ref. What is reconciliation − the legislative shortcut Republicans are using to push through their ‘big, beautiful bill’? – https://theconversation.com/what-is-reconciliation-the-legislative-shortcut-republicans-are-using-to-push-through-their-big-beautiful-bill-255487

Why energy markets fluctuate during an international crisis

Source: By Skip York, Nonresident Fellow in Energy and Global Oil, Baker Institute for Public Policy, Rice University

Stock and commodities traders found themselves dealing with various price swings as energy markets responded to Israeli and U.S. attacks on Iran. Timothy A. Clary/AFP via Getty Images

Global energy markets, such as those for oil, gas and coal, tend to be sensitive to a wide range of world events – especially when there is some sort of crisis. Having worked in the energy industry for over 30 years, I’ve seen how war, political instability, pandemics and economic sanctions can significantly disrupt energy markets and impede them from functioning efficiently.

A look at the basics

First, consider the economic fundamentals of supply and demand. The risk most people imagine in the current crisis between Israel, the U.S. and Iran is that Iran, which is itself a major oil-producing country, might suddenly expand the conflict by threatening the ability of neighboring countries to supply oil to the world.

Oil wells, refineries, pipelines and shipping lanes are the backbone of energy markets. They can be vulnerable during a crisis: Whether there is deliberate sabotage or collateral damage from military action, energy infrastructure often takes a hit.

For instance, after Saddam Hussein invaded Kuwait in August 1990, Iraqi forces placed explosive charges on Kuwaiti oil wells and began detonating them in January 1991. It took months for all the resulting fires to be put out, and millions of barrels of oil and hundreds of millions of cubic meters of natural gas were released into the environment – rather than being sold and used productively somewhere around the world.

Scenes of Kuwaiti life during and after the Gulf War of 1990 and 1991 include images of oil wells burning as a result of Iraqi sabotage.

Logistics can mess markets up too. For instance, closing critical maritime routes like the Strait of Hormuz or the Suez Canal can cause transportation delays.

Whether supply is lost from decreased production or blocked transportation routes, the effect is less oil available to the market, which not only causes prices to rise in general but also makes them more volatile – tending to change more frequently and by larger amounts.
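Why a modest supply loss produces an outsized price move can be shown with a toy constant-elasticity demand model. The numbers here are assumptions for illustration only: a short-run demand elasticity of -0.1 is a commonly used rough value for oil, not a measurement, and the $70 starting price is arbitrary.

```python
# Toy model: quantity demanded Q = k * P**elasticity. When supply falls,
# the price must rise until demand shrinks to match the reduced supply.

def price_after_supply_loss(price: float, supply_cut: float, elasticity: float) -> float:
    """New market-clearing price after supply falls by `supply_cut` (a fraction)."""
    # Solve (1 - supply_cut) = (P_new / P_old) ** elasticity for P_new.
    return price * (1.0 - supply_cut) ** (1.0 / elasticity)

# Assumed inputs: $70/barrel oil, a 5% supply disruption, short-run
# demand elasticity of -0.1 (illustrative).
new_price = price_after_supply_loss(70.0, 0.05, -0.1)
print(round(new_price))  # about 117 -- roughly a 67% jump from a 5% shortfall
```

Because short-run demand for fuel barely responds to price, the price has to do almost all of the adjusting, which is exactly the volatility the text describes.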

On the flip side, demand can also shift radically. During the 1990-1991 Gulf War, demand rose: U.S. forces alone used more than 2 billion gallons of fuel, according to an Army analysis. By contrast, during the COVID-19 pandemic, industries shut down, travel came to a halt and energy demand plummeted.

When crisis looms, countries and companies often start stockpiling oil and other raw materials rather than buying only what they need right now. That creates even more imbalance, resulting in price volatility that leaves everyone, both consumers and producers, with a headache.

Regional considerations

In addition to uncertainties around market fundamentals, it’s important to note that many of the world’s energy reserves are located in regions that have not been models of stability. In the Middle East, for example, wars, revolutions and diplomatic disputes can raise concerns about supply, demand or both.

Those worries send shock waves through the world’s energy markets. It’s like walking on a tightrope: One wrong move – or even the perception of a misstep – can make the market wobble.

Governments’ economic sanctions, such as those restricting trade with Iran, Russia or Venezuela, can distort production and investment decisions and disrupt trade flows. Sometimes markets react even before sanctions are officially in place: Just the rumor of a possible embargo can cause prices to spike as buyers scramble to secure resources.

In 2008, for example, India and Vietnam imposed rice export bans, and rumors of additional restrictions fueled panic buying and nearly doubled prices in months.

In those scrambles, the role of investor speculation enters the picture. Energy commodities, such as oil and gas, aren’t just physical resources; they’re also traded as financial assets like stocks and bonds. During uncertain times, traders don’t wait around for actual changes in supply and demand. They react to news and forecasts, sometimes in large groups, and the trades driven by their fears or hopes can move the market on their own.

The events of June 22, 2025, are a good example of how this dynamic works. The Iranian parliament passed a resolution authorizing the country’s Supreme National Security Council to close the Strait of Hormuz. Immediately, oil prices started rising, even though the strait was still open, with oil tankers steaming through unimpeded.

The next day, Iran launched a missile strike on Qatar, but coordinated in advance with Qatari officials to minimize damage and casualties. Traders and analysts perceived the action as a de-escalatory signal and anticipated that the Supreme National Security Council was not going to close the strait. So prices started to fall.

It was a price roller coaster, fueled by speculation rather than reality. And computer algorithms and artificial intelligence, which assist in making automated trades, only add to the chaos of price changes.

Shipping activity in the Persian Gulf and the Strait of Hormuz decreased after Israel’s attacks on Iranian nuclear facilities.

A broader look

International crises can also cause wider changes in countries’ economies – or the global economy as a whole – which in turn affect the energy market.

If a crisis sparks a recession, rising inflation or high unemployment, people and businesses tend to use less energy. When the underlying situation stabilizes, recovery efforts can mean energy consumption resumes. But it’s like a pendulum swinging back and forth, with energy markets caught in the middle.

Renewable energy is not immune to international crisis and chaos. The resource itself is less affected by world events: The amount of available sunlight and wind isn’t tied to geopolitical relations. But overall economic conditions still affect demand, and a crisis can disrupt the supply chains for the equipment needed to harness renewable energy, like solar panels and wind turbines.

It’s no wonder energy markets are so jittery during international crises. A mix of imbalances between supply and demand, vulnerable infrastructure, political tensions, corporate worries and speculative trading all weave together into a complex web of volatility.

For policymakers, investors and consumers, understanding these dynamics is key to navigating the ups and downs of energy markets in a crisis-prone world. The solutions aren’t simple, but being informed is the first step toward stability.

The Conversation

Skip York is a nonresident fellow for Global Oil and Energy with the Center for Energy Studies at Rice University’s Baker Institute for Public Policy. He also is the Chief Energy Strategist at Turner Mason & Company, an energy consulting firm.

ref. Why energy markets fluctuate during an international crisis – https://theconversation.com/why-energy-markets-fluctuate-during-an-international-crisis-259839

To spur the construction of affordable, resilient homes, the future is concrete

Source: By Pablo Moyano Fernández, Assistant Professor of Architecture, Washington University in St. Louis

A modular, precast system of concrete ‘rings’ can be connected in different ways to build a range of models of energy-efficient homes. Pablo Moyano Fernández, CC BY-SA

Wood is, by far, the most common material used in the U.S. for single-family home construction.

But wood construction isn’t engineered for long-term durability, and it often underperforms, particularly in the face of increasingly common extreme weather events.

In response to these challenges, I believe mass-produced concrete homes can offer affordable, resilient housing in the U.S. By leveraging the latest innovations of the precast concrete industry, this type of homebuilding can meet the needs of a changing world.

Wood’s rise to power

Over 90% of the new homes built in the U.S. rely on wood framing.

Wood has deep historical roots as a building material in the U.S., dating back to the earliest European settlers who constructed shelters using the abundant native timber. One of the most recognizable typologies was the log cabin, built from large tree trunks notched at the corners for structural stability.

Log cabins were popular in the U.S. during the 18th and 19th centuries.
Heritage Art/Heritage Images via Getty Images

In the 1830s, wood construction underwent a significant shift with the introduction of balloon framing. This system used standardized, sawed lumber and mass-produced nails, allowing much smaller wood components to replace the earlier heavy timber frames. It could be assembled by unskilled labor using simple tools, making it both accessible and economical.

In the early 20th century, balloon framing evolved into platform framing, which became the dominant method. By using shorter lumber lengths, platform framing allowed each floor to be built as a separate working platform, simplifying construction and improving its efficiency.

The proliferation and evolution of wood construction helped shape the architectural and cultural identity of the nation. For centuries, wood-framed houses have defined the American idea of home – so much so that, even today, when Americans imagine a house, they typically envision one built of wood.

A suburban housing development from the 1950s being built with platform framing.
H. Armstrong Roberts/ClassicStock via Getty Images

Today, light-frame wood construction dominates the U.S. residential market.

Wood is relatively affordable and readily available, offering a cost-effective solution for homebuilding. Contractors are familiar with wood construction techniques. In addition, building codes and regulations have long been tailored to wood-frame systems, further reinforcing their prevalence in the housing industry.

Despite its advantages, wood light-frame construction presents several important limitations. Wood is vulnerable to fire. And in hurricane- and tornado-prone regions, wood-framed homes can be damaged or destroyed.

Wood is also highly susceptible to water-related issues, such as swelling, warping and structural deterioration caused by leaks or flooding. Vulnerability to termites, mold, rot and mildew further compromises the longevity and safety of wood-framed structures, especially in humid or poorly ventilated environments.

The case for concrete

Meanwhile, concrete has revolutionized architecture and engineering over the past century. In my academic work, I’ve studied, written and taught about the material’s many advantages.

The material offers unmatched strength and durability, while also allowing design flexibility and versatility. It’s low-cost and low-maintenance, and it has high thermal mass properties, which refers to the material’s ability to absorb and store heat during the day, and slowly release it during the cooler nights. This can lower heating and cooling costs.
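The thermal mass advantage comes down to a simple relation, Q = density × specific heat × volume × temperature swing. The material properties below are typical handbook values assumed for illustration, not measurements of any particular wall.

```python
# Back-of-envelope comparison of how much heat a cubic meter of wall
# material can absorb during a daily temperature swing: Q = rho * c * V * dT.

def heat_stored_mj(density_kg_m3: float, specific_heat_j_kgk: float,
                   volume_m3: float, delta_t_k: float) -> float:
    """Heat absorbed (in megajoules) by a volume of material warming by delta_t_k."""
    return density_kg_m3 * specific_heat_j_kgk * volume_m3 * delta_t_k / 1e6

# Assumed typical properties: concrete ~2300 kg/m^3 at ~880 J/(kg*K);
# softwood framing ~500 kg/m^3 at ~1700 J/(kg*K).
concrete = heat_stored_mj(2300, 880, volume_m3=1.0, delta_t_k=5.0)
wood = heat_stored_mj(500, 1700, volume_m3=1.0, delta_t_k=5.0)
print(f"concrete: {concrete:.1f} MJ, wood: {wood:.1f} MJ")  # concrete stores ~2.4x more
```

On these assumed figures, a concrete wall soaks up more than twice the heat of an equal volume of wood framing during the day and releases it at night, which is what flattens indoor temperature swings and trims heating and cooling bills.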

Properly designed concrete enclosures offer exceptional performance against a wide range of hazards. Concrete can withstand fire, flooding, mold, insect infestation, earthquakes, hail, hurricanes and tornadoes.

It’s commonly used for home construction in many parts of the world, including Europe, Japan, Mexico, Brazil and Argentina, as well as India and parts of Southeast Asia.

However, despite their multiple benefits, concrete single-family homes are rare in the U.S.

That’s because most concrete structures are built using a process called cast-in-place. In this technique, the concrete is formed and poured directly at the construction site. The method relies on built-in-place molds. After the concrete is cast and cured over several days, the formwork is removed.

This process is labor-intensive and time-consuming, and it often produces considerable waste. This is particularly an issue in the U.S., where labor is more expensive than in other parts of the world. The formwork’s material and labor can account for as much as 35% to 60% of the total construction cost.

Portland cement, the binding agent in concrete, requires significant energy to produce, resulting in considerable carbon dioxide emissions. However, this environmental cost is often offset by concrete’s durability and long service life.

Concrete’s design flexibility and structural integrity make it particularly effective for large-scale structures. So in the U.S., you’ll see it used for large commercial buildings, skyscrapers and most highways, bridges, dams and other critical infrastructure projects.

But when it comes to single-family homes, cast-in-place concrete poses challenges to contractors. There are the higher initial construction costs, along with a lack of subcontractor expertise. For these reasons, most builders and contractors stick with what they know: the wood frame.

A new model for home construction

Precast concrete, however, offers a promising alternative.

Unlike cast-in-place concrete, precast systems allow for off-site manufacturing under controlled conditions. This improves the quality of the structure, while also reducing waste and labor.

The CRETE House, a prototype I worked on in 2017 alongside a team at Washington University in St. Louis, showed the advantages of precast home construction.

To build the precast concrete home, we used ultra-high-performance concrete, one of the latest advances in the concrete industry. Compared with conventional concrete, it’s about six times stronger, virtually impermeable and more resistant to freeze-thaw cycles. Ultra-high-performance concrete can last several hundred years.

The strength of the CRETE House was tested by shooting a piece of wood at 120 mph (193 kph) to simulate flying debris from an F5 tornado. The projectile was unable to breach the wall, which was only 2 inches (5.1 centimeters) thick.

The wall of the CRETE House was able to withstand a piece of wood fired at 120 mph (193 kph).

Building on the success of the CRETE House, I designed the Compact House as a solution for affordable, resilient housing. The house consists of a modular, precast concrete system of “rings” that can be connected to form the entire structure – floors, walls and roofs – creating airtight, energy-efficient homes. A series of different rings can be chosen from a catalog to deliver different models that can range in size from 270 to 990 square feet (25 to 84 square meters).

The precast rings can be transported on flatbed trailers and assembled into a unit in a single day, drastically reducing on-site labor, time and cost.

Since the rings are cast in durable, reusable molds, the houses can be easily mass-produced. When precast concrete homes are mass-produced, the cost can be competitive with traditional wood-framed homes. Furthermore, the homes are designed to last far beyond 100 years – much longer than typical wood structures – while significantly lowering utility bills, maintenance expenses and insurance premiums.

The project is also envisioned as an open-source design. This means that the molds – which are expensive – are available for any precast producer to use and modify.

The Compact House is made using ultra-high-performance concrete.
Pablo Moyano Fernández, CC BY-SA

Leveraging a network that’s already in place

Two key limitations of precast concrete construction are the size and weight of the components and the distance to the project site.

Precast elements must comply with standard transportation regulations, which impose restrictions on both size and weight in order to pass under bridges and prevent road damage. As a result, components are typically limited to dimensions that can be safely and legally transported by truck. Each of the Compact House’s pieces is small enough to be transported in standard trailers.

Additionally, transportation costs become a major factor beyond a certain range. In general, the practical delivery radius from a precast plant to a construction site is 500 miles (805 kilometers). Anything beyond that becomes economically unfeasible.

However, the infrastructure to build precast concrete homes is already largely in place. Since precast concrete is often used for office buildings, schools, parking complexes and large apartment buildings, there’s already an extensive national network of manufacturing plants capable of producing and delivering components within that 500-mile radius.

There are other approaches to building homes with concrete: Homes can use concrete masonry units, which are similar to cinder blocks. This is a common technique around the world. Insulated concrete forms involve rigid foam blocks that are stacked like Lego bricks and are then filled with poured concrete, creating a structure with built-in insulation. And there’s even 3D-printed concrete, a rapidly evolving technology that is in its early stages of development.

However, none of these approaches uses precast concrete modules – the rings in my prototypes – so they all require substantially more on-site time and labor.

To me, precast concrete homes offer a compelling vision for the future of affordable housing. They signal a generational shift away from short-term construction and toward long-term value – redefining what it means to build for resilience, efficiency and equity in housing.

A bird's-eye view of a computer-generated neighborhood featuring plots of land with multiple concrete homes located on them.
An image of North St. Louis, taken from Google Earth, showing how vacant land can be repurposed using precast concrete homes.
Pablo Moyano Fernández, CC BY-SA

This article is part of a series centered on envisioning ways to deal with the housing crisis.

The Conversation

Pablo Moyano Fernández does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

ref. To spur the construction of affordable, resilient homes, the future is concrete – https://theconversation.com/to-spur-the-construction-of-affordable-resilient-homes-the-future-is-concrete-254561

3D-printed model of a 500-year-old prosthetic hand hints at life of a Renaissance amputee

Source: – By Heidi Hausse, Associate Professor of History, Auburn University

Technology is more than just mechanisms and design — it’s ultimately about people.
Adriene Simon/College of Liberal Arts, Auburn University, CC BY-SA

To think about an artificial limb is to think about a person. It’s an object of touch and motion made to be used, one that attaches to the body and interacts with its user’s world.

Historical artifacts of prosthetic limbs are far removed from this lived context. Their users are gone. They are damaged – deteriorated by time and exposure to the elements. They are motionless, kept on display or in museum storage.

Yet, such artifacts are rare direct sources into the lives of historical amputees. We focus on the tools amputees used in 16th- and 17th-century Europe. There are few records written from amputees’ perspectives at that time, and those that exist say little about what everyday life with a prosthesis was like.

Engineering offers historians new tools to examine physical evidence. This is particularly important for the study of early modern mechanical hands, a new kind of prosthetic technology that appeared at the turn of the 16th century. Most of the artifacts are of unknown provenance. Many work only partially and some not at all. Their practical functions remain a mystery.

But computer-aided design software can help scholars reconstruct the artifacts’ internal mechanisms. This, in turn, helps us understand how the objects once moved.

Even more exciting, 3D printing lets scholars create physical models. Rather than imagining how a Renaissance prosthesis worked, scholars can physically test one. It’s a form of investigation that opens new possibilities for exploring the development of prosthetic technology and user experience through the centuries. It creates a trail of breadcrumbs that can bring us closer to the everyday experiences of premodern amputees.

But what does this work, which brings together two very different fields, look like in action?

What follows is a glimpse into our experience of collaboration on a team of historians and engineers, told through the story of one week. Working together, we shared a model of a 16th-century prosthesis with the public and learned a lesson about humans and technology in the process.

A historian encounters a broken model

THE HISTORIAN: On a cloudy day in late March, I walked into the University of Alabama Birmingham’s Center for Teaching and Learning holding a weatherproof case and brimming with excitement. Nestled within the case’s foam inserts was a functioning 3D-printed model of a 500-year-old prosthetic hand.

Fifteen minutes later, it broke.

Mechanical hand with plastic orange fingers extending from a plastic gray palm and wrist
This 3D-printed model of a 16th-century hand prosthesis has working mechanisms.
Heidi Hausse, CC BY-SA

For two years, my team of historians and engineers at Auburn University had worked tirelessly to turn an idea – recreating the mechanisms of a 16th-century artifact from Germany – into reality. The original iron prosthesis, the Kassel Hand, is one of approximately 35 from Renaissance Europe known today.

As an early modern historian who studies these artifacts, I work with a mechanical engineer, Chad Rose, to find new ways to explore them. The Kassel Hand is our case study. Our goal is to learn more about the life of the unknown person who used this artifact 500 years ago.

Using 3D-printed models, we’ve run experiments to test what kinds of activities its user could have performed with it. We built the models in inexpensive polylactic acid – plastic – to make this fragile artifact accessible to anyone with a consumer-grade 3D printer. But before sharing our files with the public, we needed to see how the model fared when others handled it.

An invitation to guest lecture on our experiments in Birmingham was our opportunity to do just that.

We brought two models. The main release lever broke first in one and then the other. This lever has an interior triangular plate connected to a thin rod that juts out of the wrist like a trigger. After pressing the fingers into a locked position, pulling the trigger is the only way to free them. If it breaks, the fingers become stuck.

Close-up of the interior mechanism of a 3D-printed prosthetic, the broken lever raised straight up
The thin rod of the main release lever snapped in this model.
Heidi Hausse, CC BY-SA

I was baffled. During testing, the model had lifted a 20-pound simulation of a chest lid by its fingertips. Yet, the first time we shared it with a general audience, a mechanism that had never broken in testing simply snapped.

Was it a printing error? Material defect? Design flaw?

We consulted our Hand Whisperer: our lead student engineer whose feel for how the model works appears at times preternatural.

An engineer becomes a hand whisperer

THE ENGINEER: I was sitting at my desk in Auburn’s mechanical engineering 3D print lab when I heard the news.

As a mechanical engineering graduate student concentrating on additive manufacturing, commonly known as 3D printing, I explore how to use this technology to reconstruct historical mechanisms. Over the two years I’ve worked on this project, I’ve come to know the Kassel Hand model well. As we fine-tuned designs, I’ve created and edited its computer-aided design files – the digital 3D constructions of the model – and printed and assembled its parts countless times.

Computer illustration of open hand model
This view of the computer-aided design file of a strengthened version of the model, which includes ribs and fillets to reinforce the plastic material, highlights the main release lever in orange.
Peden Jones, CC BY-SA

Examining parts midassembly is a crucial checkpoint for our prototypes. This quality control catches, corrects and prevents any defects, such as misprinted or damaged parts. It’s crucial for creating consistent and repeatable experiments. A new model version or component change never leaves the lab without passing rigorous inspection. This process means there are ways this model has behaved over time that the rest of the team has never seen. But I have.

So when I heard the release lever had broken in Birmingham, it was just another Thursday. While it had never snapped when we tested the model on people, I’d seen it break plenty of times while performing checks on components.

Disassembled hand model
Our model reconstructs the Kassel Hand’s original metal mechanisms in plastic.
Heidi Hausse, CC BY-SA

After all, the model is made from relatively weak polylactic acid. Perhaps the most difficult part of our work is making a plastic model as durable as possible while keeping it visually consistent with the 500-year-old original. The iron rod of the artifact’s lever can handle far more force than our plastic version – its yield strength is at least five times higher.

I suspected the lever had snapped because people pulled the trigger too far back and too quickly. The challenge, then, was to prevent this. But redesigning the lever to be thicker or a different shape would make it less like the historical artifact.

This raised the question: Why could I use the model without breaking the lever, but no one else could?

The team makes a plan

THE TEAM: A flurry of discussion led to growing consensus – the crux of the issue was not the model, it was the user.

The original Kassel Hand’s wearer would have learned to use their prosthesis through practice. Likewise, our team had learned to use the model over time. Through the process of design and development, prototyping and printing, we were inadvertently practicing how to operate it.

We needed to teach others to do the same. And this called for a two-pronged approach.


The engineers reexamined the opening through which the release trigger poked out of the model. They proposed shortening it to limit how far back users could pull it. When we checked how this change would affect the model’s accuracy, we found that a smaller opening was actually closer to the artifact’s dimensions. While the larger opening had been necessary for an earlier version of the release lever that needed to travel farther, now it only caused problems. The engineers got to work.

The historians, meanwhile, created plans to document and share the various techniques for operating the model that the team hadn’t realized it had honed. To teach someone at home how to operate their own copy, we filmed a short video explaining how to lock and release the fingers and troubleshoot when a finger sticks.

Testing the plan

Exactly one week after what we called “the Birmingham Break,” we shared the model with a general audience again. This time we visited a colleague’s history class at Auburn.

We brought four copies. Each had an insert to shorten the opening around the trigger. First, we played our new instructional video on a projector. Then we turned the models over to the students to try.

Four mechanical hand models on display, each slightly different in design
The team brought these four models with inserts to shorten the opening below the release trigger to test with a general audience of undergraduate and graduate students.
Heidi Hausse, CC BY-SA

The result? Not a single broken lever. We publicly launched the project on schedule.

The process of introducing the Kassel Hand model to the public highlights that just as the 16th-century amputee who wore the artifact had to learn to use it, one must learn to use the 3D-printed model, too.

It is a potent reminder that technology is not just a matter of mechanisms and design. It is fundamentally about people – and how people use it.

The Conversation

Heidi Hausse received funding from the Herzog August Bibliothek; the Consortium for History of Science, Technology and Medicine; the American Council of Learned Societies; the Huntington Library; the Society of Fellows in the Humanities at Columbia University; and the Renaissance Society of America.

Peden Jones received funding from the Renaissance Society of America.

ref. 3D-printed model of a 500-year-old prosthetic hand hints at life of a Renaissance amputee – https://theconversation.com/3d-printed-model-of-a-500-year-old-prosthetic-hand-hints-at-life-of-a-renaissance-amputee-256670

Trump administration aims to slash funds that preserve the nation’s rich architectural and cultural history

Source: – By Michael R. Allen, Visiting Assistant Professor of History, West Virginia University

The iconic ‘Walking Man’ Hawkes sign in Westbrook, Maine, was added to the National Register of Historic Places in 2019. Ben McCanna/Portland Press Herald via Getty Images

President Donald Trump’s proposed fiscal year 2026 discretionary budget is called a “skinny budget” because it’s short on line-by-line details.

But historic preservation efforts in the U.S. did get a mention – and they might as well be skinned to the bone.

Trump has proposed to slash funding for the federal Historic Preservation Fund to only $11 million, which is $158 million less than the fund’s previous reauthorization in 2024. The presidential discretionary budget, however, always heads to Congress for appropriation. And Congress always makes changes.

That said, the Trump administration hasn’t even released the $188 million that Congress appropriated for the fund for the 2025 fiscal year, essentially impounding the funding stream that Congress created in 1976 for historic preservation activities across the nation.

I’m a scholar of historic preservation who’s worked to secure historic designations for buildings and entire neighborhoods. I’ve worked on projects that range from making distressed neighborhoods in St. Louis eligible for historic tax credits to surveying Cold War-era hangars and buildings on seven U.S. Air Force bases.

I’ve seen the ways in which the Historic Preservation Fund helps local communities maintain and rehabilitate their rich architectural history, sparing it from deterioration, the wrecking ball or the pressures of the private market.

A rare, deficit-neutral funding model

Most Americans probably don’t realize that the task of historic preservation largely falls to individual states and Native American tribes.

The National Historic Preservation Act that President Lyndon B. Johnson signed into law in 1966 requires states and tribes to handle everything from identifying potential historic sites to reviewing the impact of interstate highway projects on archaeological sites and historic buildings. States and tribes are also responsible for reviewing nominations of sites to the National Register of Historic Places, the nation’s official list of properties deemed worthy of preservation.

However, many states and tribes didn’t have the capacity to adequately tackle the mandates of the 1966 act. So the Historic Preservation Fund was formed a decade later to alleviate these costs by funneling federal resources into these efforts.

The fund is actually the product of a conservative, limited-government approach.

Created during Gerald Ford’s administration, it has a revenue-neutral model, meaning that no tax dollars pay for the program. Instead, it’s funded by private lease royalties from the Outer Continental Shelf oil and gas reserves.

Most of these reserves are located in federal waters in the Gulf of Mexico and off the coast of Alaska. Private companies that receive a permit to extract from them must agree to a lease with the federal government. Royalties from their oil and gas sales accrue in federally controlled accounts under the terms of these leases. The Office of Natural Resources Revenue then directs 1.5% of the total royalties to the Historic Preservation Fund.

Congress must continually reauthorize the amount of funding reserved for the Historic Preservation Fund, or it goes unfunded.

A plaque honoring Fenway Park is displayed on an easel on a baseball field.
Boston’s Fenway Park was added to the National Register of Historic Places in 2012, making it eligible for preservation grants and federal tax incentives.
Winslow Townson/Getty Images

Despite bipartisan support, the fund has been threatened in the past. President Ronald Reagan attempted to do exactly what Trump is doing now by making no request for funding at all in his 1983 budget. Yet the fund has nonetheless been reauthorized six times since its inception, with terms ranging from five to 10 years.

The program is a crucial source of funding, particularly in small towns and rural America, where privately raised cultural heritage funds are harder to come by. It provides grants for the preservation of buildings and geographical areas that hold historical, cultural or spiritual significance in underrepresented communities. And it’s even involved in projects tied to the nation’s 250th birthday in 2026, such as the rehabilitation of the home in New Jersey where George Washington was stationed during the winter of 1778-79 and the restoration of Rhode Island’s Old State House.

Filling financial gaps

I’ve witnessed the fund’s impact firsthand in small communities across the nation.

Edwardsville, Illinois, a suburb of St. Louis, is home to the Leclaire Historic District. In the 1970s, it was added to the National Register of Historic Places. The national designation recognized the historic significance of the district, protecting it against any adverse impacts from federal infrastructure funding. It also made tax credits available to the town. Edwardsville then designated Leclaire a local historic district so that it could legally protect the distinctive architectural features of its homes, from original decorative details to the layouts of front porches.

Despite the designation, however, there was no clear inventory of the hundreds of houses in the district. A few paid staffers and a volunteer citizen commission not only had to review proposed renovations and demolitions, but they also had to figure out which buildings even contributed to Leclaire’s significance and which ones did not – and thus did not need to be tied up in red tape.

Black and white photo of family standing in front of their home.
The Allen House is one of approximately 415 single-family homes in the Leclaire neighborhood in Edwardsville, Ill.
Friends of Leclaire

Edwardsville was able to secure a grant through the Illinois State Historic Preservation Office thanks to a funding match enabled by money disbursed to Illinois via the Historic Preservation Fund.

In 2013, my team created an updated inventory of the historic district, making it easier for the local commission to determine which houses should be reviewed carefully and which ones don’t need to be reviewed at all.

Oil money better than no money

The historic preservation field, not surprisingly, has come out strongly against Trump’s proposal to defund the Historic Preservation Fund.

Nonetheless, there have been debates within the field over the fund’s dependence on the fossil fuel industry, which was the trade-off that preservationists made decades ago when they crafted the funding model.

In the 1970s, amid the national energy crisis, conservation of existing buildings was seen as a worthy ecological goal, since demolition and new construction required fossil fuels. To preservationists, steering federal fossil fuel royalties toward preservation seemed like poetic justice.

But with the effects of climate change becoming impossible to ignore, some preservationists are starting to more openly critique both the ethics and the wisdom of tapping into a pool of money created through the profits of the oil and gas industry. I’ve recently wondered myself if continued depletion of fossil fuels means that preservationists won’t be able to count on the Historic Preservation Fund as a long-term source of funding.

That said, you’d be hard-pressed to find a preservationist who thinks that destroying the Historic Preservation Fund would be a good first step in shaping a more visionary policy.

For now, Trump’s administration has only sown chaos in the field of historic preservation. Already, Ohio has laid off one-third of the staffers in its State Historic Preservation Office due to the impoundment of federal funds. More state preservation offices may follow suit. The National Council of State Historic Preservation Officers predicts that states soon could be unable to perform their federally mandated duties.

Unfortunately, many people advocating for places important to their towns and neighborhoods may end up learning the hard way just what the Historic Preservation Fund does.

The Conversation

Michael R. Allen is a member of the Advisor Leadership Team of the National Trust for Historic Preservation.

ref. Trump administration aims to slash funds that preserve the nation’s rich architectural and cultural history – https://theconversation.com/trump-administration-aims-to-slash-funds-that-preserve-the-nations-rich-architectural-and-cultural-history-258889

What if universal rental assistance were implemented to deal with the housing crisis?

Source: – By Alex Schwartz, Professor of Urban Policy, The New School

Thousands of American families that can’t find affordable apartments are stuck living in extended-stay motels. Michael S. Williamson/The Washington Post via Getty Images

If there’s one thing that U.S. politicians and activists from across the political spectrum can agree on, it’s that rents are far too high.

Many experts believe that this crisis is fueled by a shortage of housing, caused principally by restrictive regulations.

Rents and home prices would fall, the argument goes, if rules such as minimum lot- and house-size requirements and prohibitions against apartment complexes were relaxed. This, in turn, would make it easier to build more housing.

As experts on housing policy, we’re concerned about housing affordability. But our research shows little connection between a shortfall of housing and rental affordability problems. Even a massive infusion of new housing would not shrink housing costs enough to solve the crisis, as rents would likely remain out of reach for many households.

However, there are already subsidies in place that ensure that some renters in the U.S. pay no more than 30% of their income on housing costs. The most effective solution, in our view, is to make these subsidies much more widely available.

A financial sinkhole

Just how expensive are rents in the U.S.?

According to the U.S. Department of Housing and Urban Development, a household that spends more than 30% of its income on housing is deemed to be cost-burdened. If it spends more than 50%, it’s considered severely burdened. In 2023, 54% of all renters spent more than 30% of their pretax income on housing. That’s up from 43% of renters in 1999. And 28% of all renters spent more than half their income on housing in 2023.

Renters with low incomes are especially unlikely to afford their housing: 81% of renters making less than $30,000 spent more than 30% of their income on housing, and 60% spent more than 50%.

Estimates of the nation’s housing shortage vary widely, reaching up to 20 million units, depending on analytic approach and the time period covered. Yet our research, which compares growth in the housing stock with growth in the number of households from 2000 to the present, finds no evidence of an overall shortage of housing units. Rather, we see a gap between the number of low-income households and the number of affordable housing units available to them; more affluent renters face no such shortage. This is true in the nation as a whole and in nearly all large and small metropolitan areas.

Would lower rents help? Certainly. But they wouldn’t fix everything.

We ran a simulation to test an admittedly unlikely scenario: What if rents dropped 25% across the board? We found it would reduce the number of cost-burdened renters – but not by as much as you might think.

Even with the reduction, nearly one-third of all renters would still spend more than 30% of their income on housing. Moreover, reducing rents would help affluent renters much more than those with lower incomes – the households that face the most severe affordability challenges.

The proportion of cost-burdened renters earning more than $75,000 would fall from 16% to 4%, while the share of similarly burdened renters earning less than $15,000 would drop from 89% to just 80%. Even with a rent rollback of 25%, the majority of renters earning less than $30,000 would remain cost-burdened.
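The arithmetic behind this kind of simulation is straightforward to sketch. The short Python snippet below, using made-up incomes and rents rather than the authors’ survey data, illustrates why an across-the-board rent cut moves higher-income renters below the 30% threshold more readily than low-income ones:

```python
# Illustrative sketch of a cost-burden simulation (hypothetical data,
# not the authors' dataset). A household is "cost-burdened" if annual
# rent exceeds 30% of annual income.

def burden_rate(households, rent_cut=0.0):
    """Share of households paying more than 30% of income in rent."""
    burdened = [
        1 for income, rent in households
        if rent * (1 - rent_cut) > 0.30 * income
    ]
    return len(burdened) / len(households)

# (annual income, annual rent) – hypothetical values
sample = [
    (15_000, 9_000),   # low income: 60% of income on rent
    (15_000, 7_200),   # low income: 48%
    (30_000, 10_800),  # 36%
    (75_000, 25_200),  # 33.6% – just over the threshold
    (75_000, 18_000),  # 24% – not burdened
]

print(burden_rate(sample))        # 0.8 – share burdened at current rents
print(burden_rate(sample, 0.25))  # 0.4 – share burdened after a 25% cut
```

In this toy sample, a 25% cut halves the burden rate, but both low-income households remain burdened, mirroring the pattern the authors describe.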

Vouchers offer more breathing room

Meanwhile, there’s a proven way of making housing more affordable: rental subsidies.

In 2024, the U.S. provided what are known as “deep” housing subsidies to about 5 million households, meaning that rent payments are capped at 30% of their income.

These subsidies take three forms: Housing Choice Vouchers that enable people to rent homes in the private market; public housing; and project-based rental assistance, in which the federal government subsidizes the rents for all or some of the units in properties under private and nonprofit ownership.

The number of households participating in these three programs has increased by less than 2% since 2014, and the programs reach only 25% of all eligible households. Households earning less than 50% of their area’s median family income are eligible for rental assistance. But unlike Social Security, Medicare or food stamps, rental assistance is not an entitlement available to all who qualify. The number of recipients is limited by the amount of funding appropriated each year by Congress, and this funding has never been sufficient to meet the need.

By expanding rental assistance to all eligible low-income households, the government could make huge headway in solving the rental affordability crisis. The most obvious option would be to expand the existing Housing Choice Voucher program, also known as Section 8.

The program helps pay the rent up to a specified “payment standard” determined by each local public housing authority, which can set this standard at between 80% and 120% of the HUD-designated fair market rent. To be eligible for the program, units must also satisfy HUD’s physical quality standards.
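As a hypothetical illustration of how that works (the $1,500 fair market rent below is an assumed figure, not from the article), a housing authority’s allowed payment-standard range can be computed directly from the 80%–120% bounds:

```python
# Hypothetical illustration of a Housing Choice Voucher payment standard.
# A local housing authority may set its standard between 80% and 120% of
# the HUD fair market rent (FMR); the FMR figure below is made up.

def payment_standard_range(fmr):
    """Allowed payment-standard range for a given monthly fair market rent."""
    return 0.80 * fmr, 1.20 * fmr

low, high = payment_standard_range(1500)  # assume a $1,500/month FMR
print(low, high)  # 1200.0 1800.0
```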

Unfortunately, about 43% of voucher recipients are unable to use them. They cannot find an apartment that rents for less than the payment standard, meets the physical quality standards and has a landlord willing to accept vouchers.

Renters are more likely to find housing using vouchers in cities and states where it’s illegal for landlords to discriminate against voucher holders. Programs that provide housing counseling and landlord outreach and support have also improved outcomes for voucher recipients.

However, it might be more effective to forgo the voucher program altogether and simply give eligible households cash to cover their housing costs. The Philadelphia Housing Authority is currently testing out this approach.

The idea is that landlords would be less likely to reject applicants receiving government support if the bureaucratic hurdles were eliminated. The downside of this approach is that it would not prevent landlords from renting out deficient units that the voucher program would normally reject.

Homeowners get subsidies – why not renters?

Expanding rental assistance to all eligible low-income households would be costly.

The Urban Institute, a nonpartisan think tank, estimates it would cost about $118 billion a year.

However, Congress has spent similar sums on housing subsidies before – they just took the form of tax breaks for homeowners, not low-income renters. Congress forgoes billions of dollars annually in tax revenue it would otherwise collect were it not for tax deductions, credits, exclusions and exemptions. These are known as tax expenditures. A tax not collected is equivalent to a subsidy payment.

Silhouette of older man standing at sliding glass door.
Only about 25% of eligible households receive rental assistance from the federal government.
Luis Sinco/Los Angeles Times via Getty Images

For example, from 1998 through 2017 – prior to the tax changes enacted by the first Trump administration in 2017 – the federal government sacrificed an average of $187 billion a year, adjusted for inflation, in revenue due to mortgage interest deductions, deductions for state and local taxes, and the exemption of home-sale proceeds from capital gains taxes. In fiscal year 2025, these tax expenditures totaled $95.4 billion.

Moreover, tax expenditures on behalf of homeowners flow mostly to higher-income households. In 2024, for example, over 70% of all mortgage-interest tax deductions went to homeowners earning at least $200,000.

Broadening the availability of rental subsidies would have other benefits. It would save federal, state and local governments billions of dollars in homeless services. Moreover, automatic provision of rental subsidies would reduce the need for additional subsidies to finance new affordable housing. Universal rental assistance, by guaranteeing sufficient rental income, would allow builders to more easily obtain loans to cover development costs.

Of course, sharply raising federal expenditures for low-income rental assistance flies in the face of the Trump administration’s priorities. Its budget proposal for the next fiscal year calls for a 44% cut – more than $27 billion – in rental assistance and public housing.

On the other hand, if the government supported rental assistance in amounts commensurate with the tax benefits given to homeowners, it would go a long way toward resolving the rental housing affordability crisis.

This article is part of a series centered on envisioning ways to deal with the housing crisis.

The Conversation

Alex Schwartz has received funding from the Catherine and John D. MacArthur Foundation. Since 2019 he has served on New York City’s Rent Guidelines Board. He has a relative who works for The Conversation.

Kirk McClure received funding from the U.S. Department of Housing and Urban Development and receives funding from the National Science Foundation.

ref. What if universal rental assistance were implemented to deal with the housing crisis? – https://theconversation.com/what-if-universal-rental-assistance-were-implemented-to-deal-with-the-housing-crisis-257213

Michelin Guide scrutiny could boost Philly tourism, but will it stifle chefs’ freedom to experiment and innovate?

Source: – By Jonathan Deutsch, Professor of Food and Hospitality Management, Drexel University

Chef Phila Lorn prepares a bowl of noodle soup at Mawn restaurant in Philadelphia. AP Photo/Matt Rourke

The Philadelphia restaurant scene is abuzz with the news that the famed Michelin Guide is coming to town.

As a research chef and educator at Drexel University in Philadelphia, I am following the Michelin developments closely.

Having eaten in Michelin restaurants in other cities, I am confident that Philly has at least a few star-worthy restaurants. Our innovative dining scene was named one of the top 10 in the U.S. by Food & Wine in 2025.

Researchers have convincingly shown that Michelin ratings can boost tourism, so Philly gaining some starred restaurants could bring more revenue for the city.

But as the lead author of the textbook “Culinary Improvisation,” which teaches creativity, I also worry the Michelin scrutiny could make chefs more focused on delivering a consistent experience than continuing along the innovative trajectory that attracts Michelin in the first place.

Ingredients for culinary innovation

In “Culinary Improvisation” we discuss three elements needed to foster innovation in the kitchen.

The first is mastery of culinary technique, both classical and modern. Simply stated, this refers to good cooking.

The second is access to a diverse range of ingredients and flavors. The more colors the artist has on their palette, the more directions the creation can take.

And the third, which is key to my concerns, is a collaborative and supportive environment where chefs can take risks and make mistakes. Research shows a close link between risk-taking workplaces and innovation.

According to the Michelin Guide, stars are awarded to outstanding restaurants based on: “quality of ingredients, mastery of cooking techniques and flavors, the personality of the chef as expressed in the cuisine, value for money, and consistency of the dining experience both across the menu and over time.”

The criteria do not mention innovation.

It’s possible the high-stakes lure of a Michelin star, which rewards consistent excellence, could lead Philly’s most vibrant and creative chefs and restaurateurs to pull back from the risks that led to the city’s culinary excellence in the first place.

A line of chefs wearing black aprons at work in an open kitchen.
Local food writers believe Vernick Fish is a top contender for a Michelin star.
Photo courtesy of Vernick Fish

The obvious contenders

Philadelphia’s preeminent restaurant critic Craig LaBan and journalist and former restaurateur Kiki Aranita discussed local contenders for Michelin stars in a recent article in the Philadelphia Inquirer.

The 19 restaurants LaBan and Aranita discuss as possible star contenders average just over a one-mile walk from the Pennsylvania Convention Center.

Together they have received 78 James Beard nominations or awards, which are considered the “Oscars” of the food industry. That’s an average of over four per restaurant.

And when I tried to book a table for two on a Wednesday and Saturday before 9 p.m., about half were already fully booked for dinner two weeks out, in July, which is the slow season for dining in Philadelphia.

If LaBan and Aranita’s predictions are right, Michelin will be an added recognition for restaurants that are already successful and centrally located.

Exterior shot of a restaurant with outdoor seating in ground floor of rowhome
Black Dragon Takeout fuses Black American cuisine with the aesthetics of classic Chinese American takeout.
Jeff Fusco/The Conversation, CC BY-SA

Off the beaten path

When the Michelin Guide started in France at the turn of the 20th century, it encouraged diners to take the road less traveled to their next gastronomic experience.

It has since evolved into recommendations for a road well traveled: safe, lauded and already hard-to-get-into restaurants. In Philly these could be restaurants such as Vetri Cucina, Zahav, Vernick Fish, Provenance, Royal Sushi and Izakaya, Ogawa and Friday Saturday Sunday, to name a few on LaBan and Aranita’s list.

And yet Philadelphia has over 6,000 restaurants spread across 135 square miles of the city. Philadelphia is known as a city of neighborhoods, and these neighborhoods are rich with food diversity and innovation.

Consider Jacob Trinh’s Vietnamese-tinged seafood tasting menu at Little Fish in Queen Village; Kurt Evans’ gumbo lo mein at Black Dragon Takeout in West Philly; the beef cheek confit with avocado mousse at Temir Satybaldiev’s Ginger in the Northeast; and the West African XO sauce at Honeysuckle, owned by Omar Tate and Cybille St.Aude-Tate, on North Broad Street.

I hope the Michelin inspectors will venture far beyond the obvious candidates to experience more of what Philadelphia has to offer.

Small stacks of red hardback books that say 'Michelin France 2025'
The Michelin Guide announced it will include Philadelphia and Boston in its next Northeast Cities edition.
Matthieu Delaty/Hans Lucas/AFP via Getty Images

Raising the bar

In the frenzy surrounding the Michelin scrutiny, chef friends have invited me to dine at their restaurants and share my feedback as they refine their menus in anticipation of visits from anonymous Michelin inspectors.

Restaurateurs have been asking my colleagues and me for talent suggestions to replace well-liked and capable cooks, servers and managers whom owners perceive to be just not Michelin-star level.

And managers are texting us names of suspected reviewers, triggered by some tell-tale signs – a solo diner with a weeknight tasting menu reservation, no dietary restrictions or special requests, and a conspicuously light internet presence.

In all, I am excited about Philadelphians being excited about Michelin. Any opportunity to spotlight the city’s restaurant community and tighten its food and service quality raises the bar among local chefs and restaurateurs and makes the experience better for diners. And the prospect of business travelers and culinary tourists enjoying lunches and early-week dinners can help restaurants, their workers and the city earn more revenue.

But in the din of the press events and hype, let’s not forget that Philadelphians don’t need an outside arbiter to tell us what we already know: Philly is a great place to eat and drink.

Read more of our stories about Philadelphia.

The Conversation

Jonathan Deutsch does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

ref. Michelin Guide scrutiny could boost Philly tourism, but will it stifle chefs’ freedom to experiment and innovate? – https://theconversation.com/michelin-guide-scrutiny-could-boost-philly-tourism-but-will-it-stifle-chefs-freedom-to-experiment-and-innovate-256752

Philly psychology students map out local landmarks and hidden destinations where they feel happiest

Source: – By Eric Zillmer, Professor of Neuropsychology, Drexel University

Rittenhouse Square Park in Center City made it onto the Philly Happiness Map. Matthew Lovette/Jumping Rocks/Universal Images Group via Getty Images

What makes you happy? Perhaps a good night’s sleep, or a wonderful meal with friends?

I am the director of the Happiness Lab at Drexel University, where I also teach a course on happiness. The Happiness Lab is a think tank that investigates the ingredients that contribute to people’s happiness.

Often, my students ask me something along the lines of, “Dr. Z, tell us one thing that will make us happier.”

As a first step, I advise them to spend more time outside.

Achieving lasting and sustainable happiness is more complicated. Research on the happiest countries in the world and the places where people live the longest, known as Blue Zones, shows a common thread: Residents feel they are part of something larger than themselves, such as a community or a city.

So if you’re living in a metropolis like Philadelphia, where, incidentally, the Declaration of Independence with its iconic “pursuit of happiness” phrase was adopted, I believe urban citizenship – that is, forming an identity with your urban surroundings – should also be on your list.

A small boat floats in blue-green waters in front of a picturesque village.
The Greek island of Ikaria in the Aegean Sea is a Blue Zone famous for its residents’ longevity.
Nicolas Economou/NurPhoto via Getty Images

Safety, social connection, beauty

Carl Jung, the renowned Swiss psychoanalyst, wrote extensively about the relationship between our internal world and our external environment.

He believed that this relationship was crucial to our psychological well-being.

More recent research in neuroscience and functional imaging has revealed a vast, intricate and complex neurological architecture underlying our psychological perception of a place. Numerous neurological pathways and functional loops transform a complex neuropsychological process into a simple realization: I am happy here!

For example, a happy place should feel safe.

The country of Croatia, a tourist haven for its beauty and culinary delights, is also one of the top 20 safest countries globally, according to the 2025 Global Peace Index.

The U.S. ranks 128th.

The availability of good food and drink can also be a significant factor in creating a happy place.

However, according to American psychologist Abraham Maslow, a pioneer in the field of positive psychology, the opportunity for social connectivity, experiencing something meaningful and having a sense of belonging is more crucial.

Furthermore, research on happy places suggests that they are beautiful. It should not come as a surprise that the happiest places in the world are also drop-dead gorgeous, such as the Indian Ocean archipelago of Mauritius, which is the happiest country in Africa, according to the 2025 World Happiness Report from the University of Oxford and others.

Happy places often provide access to nature and promote active lifestyles, which can help relieve stress. The residents of the island of Ikaria in Greece, for example, one of the original Blue Zones, demonstrate high levels of physical activity and social interaction.

A Google map display on right with a list of mapped locations on the left.
A map of 28 happy places in Philadelphia, based on 243 survey responses from Drexel students.
The Happiness Lab at Drexel University

Philly Happiness Map

I asked my undergraduate psychology students at Drexel, many of whom come from other cities, states and countries, to pick one place in Philadelphia where they feel happy.

From the 243 student responses, the Happiness Lab curated 28 Philly happy places, based on how frequently the places were endorsed and their accessibility.

Philadelphia’s founder, William Penn, would likely approve that Rittenhouse Square Park and three other public squares – Logan, Franklin and Washington – were included. These squares were vital to Penn’s vision of landscaped public parks to promote the health of the mind and body by providing “salubrious spaces similar to the private garden.” They are beautiful and approachable, serving as “places to rest, take a pause, work, or read a book,” one student told us.

Places such as the Philadelphia Zoo, Penn’s Landing and the Philadelphia Museum of Art are “joyful spots that are fun to explore, and one can also take your parents along if need be,” as another student described.

The Athenaeum of Philadelphia, a historic library with eclectic programming, feels to one student like “coming home, a perfect third place.”

Some students mentioned happy places that are less known. These include tucked-away gardens such as the John F. Collins Park at 1707 Chestnut St., the rooftop Cira Green at 129 S. 30th St. and the James G. Kaskey Memorial Park and BioPond at 433 S. University Ave.

A stone-lined brick path extends through a nicely landscaped outdoor garden area.
The James G. Kaskey Memorial Park and BioPond in West Philadelphia is an urban oasis.
M. Fischetti for Visit Philadelphia

My students said these small, unexpected spots offer an excellent opportunity for a quiet, peaceful break and a chance to be present, whether enjoyed alone or with a friend. I checked them out and I agree.

The students also mentioned places I had never heard of even though I’ve lived in the city for over 30 years.

The “cat park” at 526 N. Natrona St. in Mantua is a quiet little park with an eclectic personality and lots of friendly cats.

Mango Mango Dessert at 1013 Cherry St. in Chinatown, which is a frequently endorsed happiness spot among the students because of its “bustling streets, lively atmosphere and delicious food,” is a perfect pit stop for mango lovers. And Maison Sweet, at 2930 Chestnut St. in University City, is a casual bakery and cafe “where you may end up staying longer than planned,” one student shared.

I find that Philly’s happy places, as seen through the eyes of college students, tend to offer a space for residents to take time out from their day to pause, reset, relax and feel more connected to the city.

Happiness principles are universal, yet our own journeys are very personal. Philadelphians across the city may have their own list of happy places. There are really no right or wrong answers. If you don’t have a personal happy space, just start exploring and you may be surprised by what you find, including a new sense of happiness.

See the full Philly Happiness Map list here, and visit the exhibit at the W.W. Hagerty Library at Drexel University to learn more.

Read more of our stories about Philadelphia.

The Conversation

Eric Zillmer does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

ref. Philly psychology students map out local landmarks and hidden destinations where they feel happiest – https://theconversation.com/philly-psychology-students-map-out-local-landmarks-and-hidden-destinations-where-they-feel-happiest-258790

A preservative removed from childhood vaccines 20 years ago is still causing controversy today − a drug safety expert explains

Source: – By Terri Levien, Professor of Pharmacy, Washington State University

A discredited study published in 1998 first alleged a link between childhood vaccines and autism. Flavio Coelho/Moment via Getty Images

An expert committee that advises the Centers for Disease Control and Prevention voted on June 26, 2025, to cease recommending the use of a mercury-based chemical called thimerosal in flu vaccines. Only a small number of flu vaccines – ones that are produced in multi-dose vials – currently contain thimerosal.

Thimerosal is almost never used in vaccines anymore, but vaccine skeptics have falsely claimed it carries health risks to the brain. Public health experts have raised concerns that the committee’s action against thimerosal may shake public trust and sow confusion about the safety of vaccines.

The committee, called the Advisory Committee on Immunization Practices, or ACIP, was meeting for the first time since Health Secretary Robert F. Kennedy Jr. abruptly replaced its 17 members with eight handpicked ones on June 11.

The committee generally discusses and votes on recommendations for specific vaccines. For this meeting, vaccines for COVID-19, human papillomavirus, influenza and other infectious diseases were on the schedule.

I’m a pharmacist and expert on drug information with 35 years of experience critically evaluating the safety and effectiveness of medications in clinical trials. No evidence supports the idea that thimerosal, used as a preservative in vaccines, is unsafe or carries any health risks.

What is thimerosal?

Thimerosal, also known as thiomersal, is a preservative that has been used in some drug products since the 1930s because it prevents contamination by killing microbes and preventing their growth.

In the human body, thimerosal is metabolized, or changed, to ethylmercury, an organic derivative of mercury. Studies in infants have shown that ethylmercury is quickly eliminated from the blood.

Even though thimerosal is no longer used in childhood vaccines, many parents still worry about whether it can harm their kids.

Ethylmercury is sometimes confused with methylmercury. Methylmercury is known to be toxic and is associated with many negative effects on brain development even at low exposure. Environmental researchers identified the neurotoxic effects of mercury in children in the 1970s, primarily resulting from exposure to methylmercury in fish. In the 1990s, the Environmental Protection Agency and the Food and Drug Administration established limits for maximum recommended exposure to methylmercury, especially for children, pregnant women and women of childbearing age.

Why is thimerosal controversial?

Fears about the safety of thimerosal in vaccines spread for two reasons.

First, in 1998, a now discredited report was published in a major medical journal called The Lancet. In it, a British doctor named Andrew Wakefield described eight children who developed autism after receiving the MMR vaccine, which protects against measles, mumps and rubella. However, the patients were not compared with a vaccinated control group, so it was impossible to draw conclusions about the vaccine’s effects. Also, the data in the report were later found to be falsified. And the MMR vaccine that children received in that report never contained thimerosal.

Second, the federal guidelines on exposure limits for the toxic substance methylmercury came out about the same time as the Wakefield study’s publication. During that period, autism was becoming more widely recognized as a developmental condition, and its rates of diagnosis were rising. People who believed Wakefield’s results conflated methylmercury and ethylmercury and promoted the unfounded idea that ethylmercury from thimerosal in vaccines was driving the rising rates of autism.

The Wakefield study was retracted in 2010, and Wakefield was found guilty of dishonesty and flouting ethics protocols by the U.K. General Medical Council, as well as stripped of his medical license. Subsequent studies have not shown a relationship between the MMR vaccine and autism, but despite the absence of evidence, the idea took hold and has proved difficult to dislodge.

Grumpy white baby giving side-eye to an older white male doctor about to administer a vaccine
The Wakefield study severely damaged many parents’ faith in the MMR vaccine, even though its results were eventually shown to be fraudulent.
Peter Dazeley/The Image Bank, Getty Images

Have scientists tested whether thimerosal is safe?

No unbiased research to date has identified toxicity caused by ethylmercury in vaccines or a link between the substance and autism or other developmental concerns – and not from lack of looking.

A 1999 review conducted by the Food and Drug Administration in response to federal guidelines on limiting mercury exposure found no evidence of harm from thimerosal as a vaccine preservative other than rare allergic reactions. Even so, as a precautionary measure in response to concerns about exposure to mercury in infants, the American Academy of Pediatrics and the U.S. Public Health Service issued a joint statement in 1999 recommending removal of thimerosal from vaccines.

At that time, just one childhood vaccine was available only in a version that contained thimerosal as an ingredient. This was a vaccine called DTP, for diphtheria, tetanus and pertussis. Other childhood vaccines were either available only in formulations without thimerosal or could be obtained in versions that did not contain it.

By 2001, U.S. manufacturers had removed thimerosal from almost all vaccines – and from all vaccines in the childhood vaccination schedule.

In 2004, the U.S. Institute of Medicine Immunization Safety Review Committee reviewed over 200 scientific studies and concluded there is no causal relationship between thimerosal-containing vaccines and autism. Additional well-conducted studies reviewed independently by the CDC and by the FDA did not find a link between thimerosal-containing vaccines and autism or neuropsychological delays.

How is thimerosal used today?

In the U.S., most vaccines are now available in single-dose vials or syringes. Thimerosal is found only in multi-dose vials that are used to supply vaccines for large-scale immunization efforts – specifically, in a small number of influenza vaccines. It is not added to modern childhood vaccines, and people who get a flu vaccine can avoid it by requesting a vaccine supplied in a single-dose vial or syringe.

Thimerosal is still used in vaccines in some other countries to ensure continued availability of necessary vaccines. The World Health Organization continues to affirm that there is no evidence of toxicity in infants, children or adults exposed to thimerosal-containing vaccines.

This article was updated to include ACIP’s vaccine recommendations.

The Conversation

Terri Levien does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

ref. A preservative removed from childhood vaccines 20 years ago is still causing controversy today − a drug safety expert explains – https://theconversation.com/a-preservative-removed-from-childhood-vaccines-20-years-ago-is-still-causing-controversy-today-a-drug-safety-expert-explains-259442