Why power skills – formerly known as ‘soft skills’ – are the key to business success

Source: The Conversation – By Sandra Sjoberg, Vice President and Dean, Academic Programs, Western Governors University School of Business

What does it take to lead through complexity, make tough decisions and still put people first? For me, the answer became clear during a defining moment early in my career – one that changed my path entirely.

Today I am a business-school educator, but I began my career in the corporate world. I faced a challenge so intense that it motivated me to go back to school and earn a Ph.D. so I could help others lead with greater purpose and humanity.

Back then, I was working for a multinational home goods company, and I was asked to play a role in closing a U.S. factory in the Midwest and moving its operations abroad. It was, by every business metric, the right economic decision. Without it, the company couldn’t stay competitive. Still, the move was fraught with emotional and ethical complexities.

Witnessing the toll on employees who lost their jobs, and the broader effects on their community, changed how I thought about business decision-making. I saw that technical skills alone aren’t enough. Effective leadership also requires emotional intelligence, ethical reasoning and human-centered thinking.

That experience was a turning point, leading me to higher education. I wanted to fulfill a greater purpose by equipping future business leaders with critical human-centric skills. And to do that, I needed to learn more about these skills – why they matter, how they shape outcomes, and how we can teach them more effectively.

Often called “soft skills” or “people skills,” these abilities are more aptly termed “power skills” or “durable skills.” And they aren’t just nice to have. As my own experience shows and as research confirms, they are central to success in today’s business world.

Power skills: Underappreciated, yet in demand

Research on power skills dates back to at least 1918, when the Carnegie Foundation published A Study of Engineering Education. That report concluded that 85% of engineering professionals’ success came from having well-developed people skills, and only 15% was attributed to “hard skills.” These early findings helped shape our understanding of the value of nontechnical skills and traits.

Today, employers arguably value these skills more than ever. But while demand for these skills is growing across industries, there’s not enough supply. For example, nearly 7 in 10 U.S. employers plan to prioritize hiring candidates with “soft” or “power” skills, according to LinkedIn’s most recent Global Talent Trends report.

Yet 65% of employers cite soft skills as the top gap among new graduates, according to Coursera’s 2025 Micro-Credentials Impact Report. New hires are struggling in the areas of communication, active listening, resilience and adaptability, the survey found.

Power skills are transferable across roles, projects and industries, which makes them especially valuable to hiring managers. And research continues to show that these skills drive innovation, strengthen team dynamics and help organizations navigate uncertainty — key reasons why employers prioritize them.

Three power skills to prioritize

So what does it look like to lead with power skills? Here are three key areas that have shaped my own journey — and that I now help others develop:

Adaptability: Adaptability goes beyond simply accepting change. It’s the ability to think, feel and act effectively when the situation changes – which, in today’s business environment, is all the time.

Consider a company expanding into a new international market. To succeed, it must invest in cultural research, adapt its operations to regional norms and align with local regulations – demonstrating adaptability at both strategic and operational levels.

That’s why adaptability is one of the most in-demand skills among employers, according to a recent LinkedIn study. Adaptable workforces are better equipped to respond to shifting demands. And with the rise of artificial intelligence and rapid tech disruption, organizations need agile, resilient employees more than ever.

Empathy: As I learned firsthand during my time in the corporate world, empathy – or the ability to understand and respond to the feelings, perspectives and needs of others – is essential.

Empathy not only fosters trust and respect, but it also helps leaders make decisions that balance organizational goals with human needs. More broadly, empathetic leaders create inclusive environments and build stronger relationships.

At Western Governors University, we have an entire course titled “Empathy and Inclusive Collaboration,” which teaches skills in active listening, creating culturally safe environments and cultivating an inclusive mindset.

Inclusivity: Effective communication and teamwork consistently rank high as essential workforce skills. This is because organizations that excel in communication and collaboration are more likely to innovate, adapt to change and make informed decisions.

While managing a global transition, I saw how hard – and how necessary – it was to listen across cultural lines and to foster collaboration across borders and departments. When teams collaborate well, they bring diverse perspectives that can foster creativity and efficiency. The ability to communicate openly and work together is crucial for navigating complex problems and driving organizational success.

The business landscape is evolving rapidly, and technical expertise alone is no longer enough to drive success. Power skills like adaptability, empathy and inclusivity are crucial, as both research and my own experiences have taught me. By prioritizing power skills, educators and businesses can better prepare leaders to navigate complexity, lead with purpose and thrive in a constantly changing world.

The Conversation

Sandra Sjoberg is affiliated with Western Governors University.
Sandra Sjoberg is a member of the American Marketing Association, an industry association.
Sandra Sjoberg is a former employee of Amerock, a division of Newell Rubbermaid that, while not mentioned directly in the article, is the basis for the corporate experience shared in the article.

ref. Why power skills – formerly known as ‘soft skills’ – are the key to business success – https://theconversation.com/why-power-skills-formerly-known-as-soft-skills-are-the-key-to-business-success-257310

Checking in on New England’s fishing industry 25 years after ‘The Perfect Storm’ hit movie theaters

Source: The Conversation – By Stephanie Otts, Director of National Sea Grant Law Center, University of Mississippi

Filming ‘The Perfect Storm’ in Gloucester Harbor, Mass.
The Salem News Historic Photograph Collection, Salem State University Archives and Special Collections, CC BY

Twenty-five years ago, “The Perfect Storm” roared into movie theaters. The disaster flick, starring George Clooney and Mark Wahlberg, was a riveting, fictionalized account of commercial swordfishing in New England and a crew who went down in a violent storm.

The anniversary of the film’s release, on June 30, 2000, provides an opportunity to reflect on the real-life changes to New England’s commercial fishing industry.

Fishing was once more open to all

In the true story behind the movie, six men lost their lives in late October 1991 when the commercial swordfishing vessel Andrea Gail disappeared in a fierce storm in the North Atlantic as it was headed home to Gloucester, Massachusetts.

At the time, and until very recently, almost all commercial fisheries were open access, meaning there were no restrictions on who could fish.

There were permit requirements and regulations about where, when and how you could fish, but anyone with the means to purchase a boat and associated permits, gear, bait and fuel could enter the fishery. Eight regional councils established under a 1976 federal law to manage fisheries around the U.S. determined how many fish could be harvested prior to the start of each fishing season.

People and barrels of fish fill a wharf area in a historical black-and-white image.
Fishing has been an integral part of coastal New England culture since its towns were established. In this 1899 photo, a New England community weighs and packs mackerel.
Charles Stevenson/Freshwater and Marine Image Bank

Fishing started when the season opened and continued until the catch limit was reached. In some fisheries, this resulted in a “race to the fish” or a “derby,” where vessels competed aggressively to harvest the available catch in short amounts of time. The limit could be reached in a single day, as happened in the Pacific halibut fishery in the late 1980s.

By the 1990s, however, open access systems were coming under increased criticism from economists as concerns about overfishing rose.

The fish catch peaked in New England in 1987 and would remain far above what the fish population could sustain for two more decades. Years of overfishing led to the collapse of fish stocks, including North Atlantic cod in 1992 and Pacific sardine in 2015.

As populations declined, managers responded by cutting catch limits to allow more fish to survive and reproduce. Fishing seasons were shortened, as it took less time for the fleets to harvest the allowed catch. It became increasingly hard for fishermen to catch enough fish to earn a living.

Saving fisheries changed the industry

In the early 2000s, as these economic and environmental challenges grew, fisheries managers started limiting access. Instead of allowing anyone to fish, only vessels or individuals meeting certain eligibility requirements would have the right to fish.

The most common method of limiting access in the U.S. is through limited entry permits, initially awarded to individuals or vessels based on previous participation or success in the fishery. Another approach is to assign individual harvest quotas or “catch shares” to permit holders, limiting how much each boat can bring in.

In 2007, Congress amended the 1976 Magnuson-Stevens Fishery Conservation and Management Act to promote the use of limited access programs in U.S. fisheries.

Three fishing vessels, side by side, in New Bedford Harbor
Ships in the fleet out of New Bedford, Mass.
Henry Zbyszynski/Flickr, CC BY

Today, limited access is common, and there are positive signs that the management change is helping achieve the law’s environmental goal of preventing overfishing. Since 2000, the populations of 50 major fish stocks have been rebuilt, meaning they have recovered to a level that can once again support fishing.

I’ve been following the changes as a lawyer focused on ocean and coastal issues, and I see much work still to be done.

Forty fish stocks are currently being managed under rebuilding plans that limit catch to allow the stock to grow, including Atlantic cod, which has struggled to recover due to a complex combination of factors, including climatic changes.

The lingering effect on communities today

While many fish stocks have recovered, the effort came at an economic cost to many individual fishermen. The limited-access Northeast groundfish fishery, which includes Atlantic cod, haddock and flounder, shed nearly 800 crew positions between 2007 and 2015.

The loss of jobs and revenue from fishing impacts individual family income and relationships, strains other businesses in fishing communities, and affects those communities’ overall identity and resilience, as illustrated by a recent economic snapshot of the Alaska seafood industry.

When original limited-access permit holders leave the business – for economic, personal or other reasons – their permits are either terminated or sold to other eligible permit holders, leading to fewer active vessels in the fleet. As a result, the number of vessels fishing for groundfish has declined from 719 in 2007 to 194 in 2023, meaning fewer jobs.

A fisherman wearing thick gloves lifts a tray of fish, with boats in the background.
A fisherman unloads a portion of his 300-pound daily catch of groundfish, including flounder, in January 2006 in Gloucester, Mass.
AP Photo/Lisa Poole

Because of their scarcity, limited-access permits can cost upward of US$500,000, which is often beyond the financial means of a small business or a young person seeking to enter the industry. The high prices may also lead retiring fishermen to sell their permits, as opposed to passing them along with the vessels to the next generation.

These economic forces have significantly altered the fishing industry, leading to more corporate and investor ownership, rather than the family-owned operations that were more common in the Andrea Gail’s time.

Similar to the experience of small family farms, fishing captains and crews are being pushed into corporate arrangements that reduce their autonomy and revenues.

Consolidation can threaten the future of entire fleets, as New Bedford, Massachusetts, saw when Blue Harvest Fisheries, backed by a private equity firm, bought up vessels and other assets and then declared bankruptcy a few years later, leaving a smaller fleet and some local businesses and fishermen unpaid for their work. A company with local connections bought eight vessels from Blue Harvest along with 48 state and federal permits the company held.

New challenges and unchanging risks

While there are signs of recovery for New England’s fisheries, challenges continue.

Warming water temperatures have shifted the distribution of some species, affecting where and when fish are harvested. For example, lobsters have moved north toward Canada. When vessels need to travel farther to find fish, that increases fuel and supply costs and time away from home.

Fisheries managers will need to continue to adapt to keep New England’s fisheries healthy and productive.

One thing that, unfortunately, hasn’t changed is the dangerous nature of the occupation. Between 2000 and 2019, 414 fishermen died in 245 disasters.

The Conversation

Stephanie Otts receives funding from the NOAA National Sea Grant College Program through the U.S. Department of Commerce. Previous support for fisheries management legal research was provided by The Nature Conservancy.

ref. Checking in on New England’s fishing industry 25 years after ‘The Perfect Storm’ hit movie theaters – https://theconversation.com/checking-in-on-new-englands-fishing-industry-25-years-after-the-perfect-storm-hit-movie-theaters-255076

Blocking exports and raising tariffs is a bad defense against industrial cyber espionage, study shows

Source: The Conversation – By William Akoto, Assistant Professor of Global Security, American University

Cutting off China’s access to advanced U.S. chips is likely to motivate Chinese cyber espionage. kritsapong jieantaratip/iStock via Getty Images

The United States is trying to decouple its economy from rivals like China. Efforts toward this include policymakers raising tariffs on Chinese goods, blocking exports of advanced technology and offering subsidies to boost American manufacturing. The goal is to reduce reliance on China for critical products in hopes that this will also protect U.S. intellectual property from theft.

The idea that decoupling will help stem state-sponsored cyber-economic espionage has become a key justification for these measures. For instance, then-U.S. Trade Representative Katherine Tai framed the continuation of China-specific tariffs as serving the “statutory goal to stop [China’s] harmful … cyber intrusions and cyber theft.” Early tariff rounds during the first Trump administration were likewise framed as forcing Beijing to confront “deeply entrenched” theft of U.S. intellectual property.

This push to “onshore” key industries is driven by very real concerns. By some estimates, theft of U.S. trade secrets – often through hacking – costs the American economy hundreds of billions of dollars per year. In that light, decoupling is a defensive economic shield – a way to keep vital technology out of an adversary’s reach.

But will decoupling and cutting trade ties truly make America’s innovations safer from prying eyes? I’m a political scientist who studies state-sponsored cyber espionage, and my research suggests that the answer is a definitive no. Indeed, it might actually have the opposite effect.

To understand why, it helps to look at what really drives state-sponsored hacking.

Rivalry, not reliance

Intuitively, you might think a country is most tempted to steal secrets from a nation it depends on. For example, if Country A must import jet engines or microchips from Country B, Country A might try to hack Country B’s companies to copy that technology and become self-sufficient. This is the industrial dependence theory of cyber theft.

There is some truth to this motive. If your economy needs what another country produces, stealing that know-how can boost your own industries and reduce reliance. However, in a recent study, I show that a more powerful predictor of cyber espionage is industrial similarity. Countries with overlapping advanced industries such as aerospace, electronics or pharmaceuticals are the ones most likely to target each other with cyberattacks.

Why would having similar industries spur more spying? The reason is competition. If two nations both specialize in cutting-edge sectors, each has a lot to gain by stealing the other’s innovations.

If you’re a tech powerhouse, you have valuable secrets worth stealing, and you have the capability and motivation to steal others’ secrets. In essence, simply trading with a rival isn’t the core issue. Rather, it’s the underlying technological rivalry that fuels espionage.

For example, a cyberattack in 2012 targeted SolarWorld, a U.S. solar panel manufacturer, and the perpetrators stole the company’s trade secrets. Chinese solar companies then developed competing products based on the stolen designs, costing SolarWorld millions in lost revenue. This is a classic example of industrial similarity at work. China was building its own solar industry, so it hacked a U.S. rival to leapfrog in technology.

China has made major investments in its cyber-espionage capabilities.

Boosting trade barriers can fan the flames

Crucially, cutting trade ties doesn’t remove this rivalry. If anything, decoupling might intensify it. When the U.S. and China exchange tariff blows or cut off tech transfers, it doesn’t make China give up – it likely pushes Chinese intelligence agencies to work even harder to steal what they can’t buy.

This dynamic isn’t unique to China. Any country that suddenly loses access to an important technology may turn to espionage as Plan B.

History provides examples. When South Africa was isolated by sanctions in the 1980s, it covertly obtained nuclear weapons technology. Similarly, when Israel faced arms embargoes in the 1960s, it engaged in clandestine efforts to get military technology. Isolation can breed desperation, and hacking is a low-cost, high-reward tool for the desperate.

If decoupling won’t end cyber espionage, what will?

There’s no easy fix for state-sponsored hacking as long as countries remain locked in high-tech competition. However, there are steps that can mitigate the damage and perhaps dial down the frequency of these attacks.

One is investing in cyber defense. Just as a homeowner adds locks and alarms after a burglary, companies and governments should continually strengthen their cyber defenses. Assuming that espionage attempts are likely to happen is key. Advanced network monitoring, employee training against phishing, and robust encryption can make it much harder for hackers to succeed, even if they keep trying.

Another is building resilience and redundancy. If you know that some secrets might get stolen, plan for it. Businesses can shorten product development cycles and innovate faster so that even if a rival copies today’s tech, you’re already moving on to the next generation. Staying ahead of thieves is a form of defense, too.

Ultimately, rather than viewing tariffs and export bans as silver bullets against espionage, U.S. leaders and industry might be safer focusing on resilience and on stress-testing their own cybersecurity. Make it harder for adversaries to steal secrets, and less rewarding even if they do.

The Conversation

William Akoto does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

ref. Blocking exports and raising tariffs is a bad defense against industrial cyber espionage, study shows – https://theconversation.com/blocking-exports-and-raising-tariffs-is-a-bad-defense-against-industrial-cyber-espionage-study-shows-258243

What is reconciliation − the legislative shortcut Republicans are using to push through their ‘big, beautiful bill’?

Source: The Conversation – By Linda J. Bilmes, Daniel Patrick Moynihan Senior Lecturer in Public Policy and Public Finance, Harvard Kennedy School

Senate Majority Leader John Thune speaks with reporters about the reconciliation process to advance President Donald Trump’s spending and tax bill on June 3, 2025. AP Photo/J. Scott Applewhite

The word “reconciliation” sounds benign, even harmonious.

But in Washington, D.C., reconciliation refers to a potent legislative shortcut that allows the party in power to avoid opposition and enact sweeping changes to taxes and spending with a simple majority vote. Democrats used the process to pass the Inflation Reduction Act in 2022. Reconciliation helped Republicans pass large tax cuts in 2017.

Reconciliation is also at the heart of the current budget debate, as Senate Republicans rush to advance their version of the “One Big Beautiful Bill Act,” also known by its acronym OBBBA, which passed the House in May 2025.

I served as assistant secretary of Commerce for management and budget during the Clinton administration, when my colleagues and I helped forge bipartisan legislation that balanced the federal budget and produced surpluses over four years, from 1998 to 2001. We were even able to pay off some debt.

But since 2001, the country’s fiscal situation has deteriorated significantly. And the reconciliation process has strayed from its original purpose as a mechanism to promote sound fiscal policy. Instead, it is now used to pass partisan legislation, often without regard to its economic impact on future generations of Americans.

Reconciliation 101

The reconciliation process was created by the Congressional Budget Act of 1974, which was overwhelmingly supported by both parties. It was designed to align policy goals with budget targets to help rein in deficits.

The rules specify that a bill using the reconciliation process must pertain directly to budgetary or fiscal matters; it cannot change Social Security, Medicare or the budget process itself, nor deliberately extend deficits beyond a 10-year window. As part of the process, the parliamentarian goes through each element of the bill, determines whether it meets the requirements and removes any elements that don’t.

This caused the One Big Beautiful Bill Act to hit a snag in the Senate on June 25, 2025, after the parliamentarian ruled that several major parts of it couldn’t be included as written, such as a crackdown on states’ tactics for drawing extra federal Medicaid funds and a limit on student debt repayment options.

In the Senate, reconciliation has special procedural advantages. Debate is limited to 20 hours. Conveniently for the party in power, the final bill can pass with a simple majority of 51 votes. This avoids the usual 60-vote threshold needed to overcome a filibuster.

Over its 50-year history, 23 reconciliation bills have become law.

Reconciliation on rise as budget process breaks down

Over time, reconciliation has become the dominant method for enacting major tax and spending legislation, as the regular congressional budget process has broken down.

Since 1974, there have been multiple government shutdowns, near-shutdowns and short-term, stopgap “continuing resolutions” instead of annual budgets, accompanied by rising deficits and national debt.

With few other tools at its disposal, Congress has used reconciliation to push through many pieces of major economic legislation, including the 2001 and 2003 tax cuts under President George W. Bush, the 2017 tax cuts during President Donald Trump’s first term, and the American Rescue Plan in 2021 and the Inflation Reduction Act in 2022 during the Biden administration.

However, reconciliation has significant flaws. Because debate is limited, senators often vote on bills over 1,000 pages long with little time to review the details. And once tax cuts are enacted under reconciliation, it is devilishly hard to get rid of them.

Given the compressed timelines and lack of transparency inherent in such huge, messy spending bills, it is fairly easy for lawmakers to slip in earmarks, tax loopholes and other extraneous items that don’t get removed by the parliamentarian.

A Black man points toward the ceiling as he stands in front of a lectern and two poster boards
House Minority Leader Hakeem Jeffries argues Republicans’ spending and tax bill will ‘explode the deficit.’
AP Photo/J. Scott Applewhite

What’s in the bill?

At the heart of the One Big Beautiful Bill Act, passed by the House, is an extension of President Trump’s tax cuts from his first term, which would otherwise expire at the end of 2025 – a sunset dictated by the procedural rules for reconciliation under which they were enacted.

But it also includes multiple new tax cuts – such as an end to taxes on overtime and tips and lower estate taxes – introduces new Medicaid work requirements and repeals various energy credits. In line with the Trump administration’s policies, the bill slashes federal funding for education, Medicaid, public housing, environmental programs, scientific research and some national park and public land protection programs. It also boosts defense spending.

The bill would sharply worsen the nation’s fiscal outlook, according to analyses by the nonpartisan Congressional Budget Office and other organizations.

Currently, the national debt exceeds US$36 trillion, according to the U.S. Treasury, and net interest payments account for some 16% of federal revenue, based on the Congressional Budget Office’s projections for 2025.

In its analysis, the Congressional Budget Office – which was also created by the 1974 act – said the House-passed version would increase deficits by more than $3.1 trillion over the next decade. The overwhelming share of this cost comes from the permanent extension of individual tax cuts initially enacted in 2017.

According to the Congressional Budget Office’s analysis, by 2035 households earning at least $1 million would receive an average annual tax cut of about $45,000. Most middle- and lower-income households would receive a cut of less than $500 per year, if anything.

The costs of reconciliation

A number of Senate Republicans have questioned some aspects of the reconciliation package. Since they hold only a 53-47 majority, and with all Democrats expected to vote “no,” they need to use reconciliation to pass their version.

Although it differs from the House version in many ways, the Senate version still favors tax cuts for high-income households and large corporations.

Senate Republicans also employ a flawed accounting gimmick to minimize the bill’s apparent cost. The gimmick assumes the 2017 Trump tax cuts, which are set to expire, have already been extended, and it embeds that assumption into the budget baseline.

This makes extending the tax cuts appear costless, even though it would grow the debt substantially. The move violates normal scorekeeping conventions and misleads the public. Honest accounting would show that the Senate plan would add to the debt about $500 billion more than the House version.
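To see how much the baseline choice matters, consider a stylized back-of-the-envelope calculation. The sketch below uses an assumed round number for the annual revenue loss – it illustrates the scoring logic only and is not a CBO estimate:

```python
# Stylized illustration of budget-baseline scoring -- assumed numbers only.

ANNUAL_REVENUE_LOSS_BN = 370  # assumed cost of extending the cuts, in $billions/year
WINDOW_YEARS = 10             # the standard 10-year budget window

# Current-law baseline: the 2017 cuts expire on schedule, so extending
# them counts as new revenue loss across the whole window.
current_law_score = ANNUAL_REVENUE_LOSS_BN * WINDOW_YEARS

# Current-policy baseline: the baseline already assumes the cuts continue,
# so the identical extension scores as zero.
current_policy_score = 0

print(f"Current-law score:    ${current_law_score:,} billion")     # $3,700 billion
print(f"Current-policy score: ${current_policy_score:,} billion")  # $0 billion
# The Treasury borrows the same amount either way; only the headline changes.
```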

Abusing the process

Lots of wrangling and changes are expected before the Senate is able to pass its version. After that, the House and Senate will need to resolve their differences in a conference committee of Republicans from each house of Congress.

Once they agree on a final version, each house votes again – and the Senate version will still need to meet the terms of reconciliation in order to pass with a majority vote. President Trump is pressuring Congress to deliver the bill to his desk before the July Fourth holiday.

In my view, while reconciliation remains a powerful budgetary tool, its current use represents a fundamental inversion of its original purpose. Americans deserve an honest debate about trade-offs, rather than more debt in disguise. Some estimates of the fiscal impact of the Senate’s version of the bill are as high as $3.8 trillion over a decade. Simply waving a magic accounting wand won’t make them go away.

This article was updated to include a Senate parliamentarian ruling about several provisions of the Republican bill.

The Conversation

Linda J. Bilmes served as Deputy Assistant Secretary of the US Department of Commerce from 1997-1998 and as CFO and Assistant Secretary for Management, Budget and Administration from 1999-2001.

ref. What is reconciliation − the legislative shortcut Republicans are using to push through their ‘big, beautiful bill’? – https://theconversation.com/what-is-reconciliation-the-legislative-shortcut-republicans-are-using-to-push-through-their-big-beautiful-bill-255487

Why energy markets fluctuate during an international crisis

Source: The Conversation – By Skip York, Nonresident Fellow in Energy and Global Oil, Baker Institute for Public Policy, Rice University

Stock and commodities traders found themselves dealing with various price swings as energy markets responded to Israeli and U.S. attacks on Iran. Timothy A. Clary/AFP via Getty Images

Global energy markets, such as those for oil, gas and coal, tend to be sensitive to a wide range of world events – especially when there is some sort of crisis. Having worked in the energy industry for over 30 years, I’ve seen how war, political instability, pandemics and economic sanctions can significantly disrupt energy markets and impede them from functioning efficiently.

A look at the basics

First, consider the economic fundamentals of supply and demand. The risk most people imagine in the current crisis between Israel, the U.S. and Iran is that Iran, which is itself a major oil-producing country, might suddenly expand the conflict by threatening the ability of neighboring countries to supply oil to the world.

Oil wells, refineries, pipelines and shipping lanes are the backbone of energy markets. They can be vulnerable during a crisis: Whether there is deliberate sabotage or collateral damage from military action, energy infrastructure often takes a hit.

For instance, after Saddam Hussein invaded Kuwait in August 1990, Iraqi forces placed explosive charges on Kuwaiti oil wells and began detonating them in January 1991. It took months for all the resulting fires to be put out, and millions of barrels of oil and hundreds of millions of cubic meters of natural gas were released into the environment – rather than being sold and used productively somewhere around the world.

Scenes of Kuwaiti life during and after the Gulf War of 1990 and 1991 include images of oil wells burning as a result of Iraqi sabotage.

Logistics can mess markets up too. For instance, closing critical maritime routes like the Strait of Hormuz or the Suez Canal can cause transportation delays.

Whether supply is lost from decreased production or blocked transportation routes, the effect is less oil available to the market, which not only causes prices to rise in general but also makes them more volatile – tending to change more frequently and by larger amounts.
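The arithmetic behind that volatility is straightforward. The sketch below, a stylized calculation rather than a market model, uses an assumed short-run demand elasticity to show why a small supply loss produces an outsized price move:

```python
# Stylized supply-shock arithmetic with an assumed demand elasticity.

ELASTICITY = -0.1   # assumed short-run price elasticity of oil demand
SUPPLY_LOSS = 0.02  # assume 2% of supply is knocked offline

# Near the current price, %change in quantity ~ elasticity * %change in price.
# The market clears when demand falls by the same 2% that supply lost, so the
# required price move is the supply loss divided by |elasticity|.
price_rise = SUPPLY_LOSS / abs(ELASTICITY)

print(f"{SUPPLY_LOSS:.0%} less supply -> roughly {price_rise:.0%} higher prices")
# 2% / 0.1 = 20%: inelastic demand turns small disruptions into big price swings.
```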

On the flip side, demand can also shift radically. During the 1990-1991 Gulf War, demand rose: U.S. forces alone used more than 2 billion gallons of fuel, according to an Army analysis. By contrast, during the COVID-19 pandemic, industries shut down, travel came to a halt and energy demand plummeted.

When crisis looms, countries and companies often start stockpiling oil and other raw materials rather than buying only what they need right now. That creates even more imbalance, resulting in price volatility that leaves everyone, both consumers and producers, with a headache.

Regional considerations

In addition to uncertainties around market fundamentals, it’s important to note that many of the world’s energy reserves are located in regions that have not been models of stability. In the Middle East, for example, wars, revolutions and diplomatic disputes can raise concerns about supply, demand or both.

Those worries send shock waves through the world’s energy markets. It’s like walking on a tightrope: One wrong move – or even the perception of a misstep – can make the market wobble.

Governments’ economic sanctions, such as those restricting trade with Iran, Russia or Venezuela, can distort production and investment decisions and disrupt trade flows. Sometimes markets react even before sanctions are officially in place: Just the rumor of a possible embargo can cause prices to spike as buyers scramble to secure resources.

In 2008, for example, India and Vietnam imposed rice export bans, and rumors of additional restrictions fueled panic buying and nearly doubled prices within months.

In those scrambles, the role of investor speculation enters the picture. Energy commodities, such as oil and gas, aren’t just physical resources; they’re also traded as financial assets, like stocks and bonds. During uncertain times, traders don’t wait around for actual changes in supply and demand. They react to news and forecasts, sometimes in large groups, and those fear- or hope-driven trades alone can move the market.

The events on June 22, 2025, are a good example of how this dynamic works. The Iranian parliament passed a resolution authorizing the country’s Supreme Council to close the Strait of Hormuz. Immediately, oil prices started rising, even though the strait was still open, with oil tankers steaming through unimpeded.

The next day, Iran launched a missile strike on Qatar, but coordinated in advance with Qatari officials to minimize damage and casualties. Traders and analysts perceived the action as a de-escalatory signal and anticipated that the Supreme Council was not going to close the strait. So prices started to fall.

It was a price roller coaster, fueled by speculation rather than reality. And computer algorithms and artificial intelligence, which assist in making automated trades, only add to the chaos of price changes.
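A toy simulation can make that feedback loop visible. The sketch below is not a model of any real trading system – the shock size and momentum coefficient are arbitrary assumptions – but it shows how trend-chasing rules let a single headline echo through prices for days:

```python
import random

# Toy model: price = yesterday's price + news shock + momentum chasing + noise.
# All parameters are arbitrary assumptions for illustration.
random.seed(1)

price, prev = 100.0, 100.0
MOMENTUM = 0.6  # assumed: algorithms buy 60% of yesterday's move

for day in range(8):
    shock = 5.0 if day == 0 else 0.0   # a single supply-scare headline
    noise = random.uniform(-0.4, 0.4)  # ordinary trading noise
    move = shock + MOMENTUM * (price - prev) + noise
    prev, price = price, price + move
    print(f"Day {day}: {price:6.2f}")

# The day-0 jump keeps propagating (a move of ~5, then ~3, then ~1.8 ...)
# before fading, so prices swing more than the lone shock would imply.
```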

Shipping activity in the Persian Gulf and the Strait of Hormuz decreased after Israel’s attacks on Iranian nuclear facilities.

A broader look

International crises can also cause wider changes in countries’ economies – or the global economy as a whole – which in turn affect the energy market.

If a crisis sparks a recession, rising inflation or high unemployment, those tend to cause people and businesses to use less energy. When the underlying situation stabilizes, recovery efforts can mean energy consumption resumes. But it’s like a pendulum swinging back and forth, with energy markets caught in the middle.

Renewable energy is not immune to international crisis and chaos. The supply is less affected by market forces: The amount of available sunlight and wind isn’t tied to geopolitical relations. But overall economic conditions still affect demand, and a crisis can disrupt the supply chains for the equipment needed to harness renewable energy, like solar panels and wind turbines.

It’s no wonder energy markets are so jittery during international crises. A mix of imbalances between supply and demand, vulnerable infrastructure, political tensions, corporate worries and speculative trading all weave together into a complex web of volatility.

For policymakers, investors and consumers, understanding these dynamics is key to navigating the ups and downs of energy markets in a crisis-prone world. The solutions aren’t simple, but being informed is the first step toward stability.

The Conversation

Skip York is a nonresident fellow for Global Oil and Energy with the Center for Energy Studies at Rice University’s Baker Institute for Public Policy. He also is the Chief Energy Strategist at Turner Mason & Company, an energy consulting firm.

ref. Why energy markets fluctuate during an international crisis – https://theconversation.com/why-energy-markets-fluctuate-during-an-international-crisis-259839

To spur the construction of affordable, resilient homes, the future is concrete

Source: The Conversation – By Pablo Moyano Fernández, Assistant Professor of Architecture, Washington University in St. Louis

A modular, precast system of concrete ‘rings’ can be connected in different ways to build a range of models of energy-efficient homes. Pablo Moyano Fernández, CC BY-SA

Wood is, by far, the most common material used in the U.S. for single-family home construction.

But wood construction isn’t engineered for long-term durability, and it often underperforms, particularly in the face of increasingly common extreme weather events.

In response to these challenges, I believe mass-produced concrete homes can offer affordable, resilient housing in the U.S. By leveraging the latest innovations of the precast concrete industry, this type of homebuilding can meet the needs of a changing world.

Wood’s rise to power

Over 90% of the new homes built in the U.S. rely on wood framing.

Wood has deep historical roots as a building material in the U.S., dating back to the earliest European settlers who constructed shelters using the abundant native timber. One of the most recognizable typologies was the log cabin, built from large tree trunks notched at the corners for structural stability.

A mother holds her child in the front doorway of their log cabin home.
Log cabins were popular in the U.S. during the 18th and 19th centuries.
Heritage Art/Heritage Images via Getty Images

In the 1830s, wood construction underwent a significant shift with the introduction of balloon framing. This system used standardized, sawed lumber and mass-produced nails, allowing much smaller wood components to replace the earlier heavy timber frames. It could be assembled by unskilled labor using simple tools, making it both accessible and economical.

In the early 20th century, balloon framing evolved into platform framing, which became the dominant method. By using shorter lumber lengths, platform framing allowed each floor to be built as a separate working platform, simplifying construction and improving its efficiency.

The proliferation and evolution of wood construction helped shape the architectural and cultural identity of the nation. For centuries, wood-framed houses have defined the American idea of home – so much so that, even today, when Americans imagine a house, they typically envision one built of wood.

A row of half-constructed homes surrounded by piles of dirt.
A suburban housing development from the 1950s being built with platform framing.
H. Armstrong Roberts/ClassicStock via Getty Images

Today, light-frame wood construction dominates the U.S. residential market.

Wood is relatively affordable and readily available, offering a cost-effective solution for homebuilding. Contractors are familiar with wood construction techniques. In addition, building codes and regulations have long been tailored to wood-frame systems, further reinforcing their prevalence in the housing industry.

Despite its advantages, wood light-frame construction presents several important limitations. Wood is vulnerable to fire. And in hurricane- and tornado-prone regions, wood-framed homes can be damaged or destroyed.

Wood is also highly susceptible to water-related issues, such as swelling, warping and structural deterioration caused by leaks or flooding. Vulnerability to termites, mold, rot and mildew further compromises the longevity and safety of wood-framed structures, especially in humid or poorly ventilated environments.

The case for concrete

Meanwhile, concrete has revolutionized architecture and engineering over the past century. In my academic work, I’ve studied, written and taught about the material’s many advantages.

The material offers unmatched strength and durability, while also allowing design flexibility and versatility. It’s low-cost and low-maintenance, and it has high thermal mass – the ability to absorb and store heat during the day and slowly release it during the cooler nights. This can lower heating and cooling costs.
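For a sense of what that means in numbers, here is a back-of-the-envelope sketch. The densities and specific heats are typical handbook-style assumptions, not measurements of any particular wall system:

```python
# How much heat a wall can soak up per square meter over a 5 C temperature swing.

def heat_stored_kj(thickness_m, density_kg_m3, specific_heat_kj_kgk, delta_t_k):
    """Sensible heat stored per square meter of wall area, in kilojoules."""
    mass_per_m2 = thickness_m * density_kg_m3  # kg of material per m^2 of wall
    return mass_per_m2 * specific_heat_kj_kgk * delta_t_k

# Assumed typical properties (handbook ballparks, not product data):
concrete = heat_stored_kj(0.15, 2400, 0.88, 5.0)  # 15 cm concrete wall
softwood = heat_stored_kj(0.15, 500, 1.6, 5.0)    # 15 cm solid softwood wall

print(f"Concrete: {concrete:,.0f} kJ per m^2")  # ~1,584 kJ
print(f"Softwood: {softwood:,.0f} kJ per m^2")  # ~600 kJ
# The concrete wall buffers roughly 2.5x more heat, flattening day-night
# temperature swings and trimming heating and cooling loads.
```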

Properly designed concrete enclosures offer exceptional performance against a wide range of hazards. Concrete can withstand fire, flooding, mold, insect infestation, earthquakes, hail, hurricanes and tornadoes.

It’s commonly used for home construction in many parts of the world, such as Europe, Japan, Mexico, Brazil and Argentina, as well as India and other parts of Southeast Asia.

However, despite their multiple benefits, concrete single-family homes are rare in the U.S.

That’s because most concrete structures are built using a process called cast-in-place. In this technique, the concrete is formed and poured directly at the construction site. The method relies on built-in-place molds. After the concrete is cast and cured over several days, the formwork is removed.

This process is labor-intensive and time-consuming, and it often produces considerable waste. This is particularly an issue in the U.S., where labor is more expensive than in other parts of the world. Formwork material and labor can account for as much as 35% to 60% of the total construction cost.

Portland cement, the binding agent in concrete, requires significant energy to produce, resulting in considerable carbon dioxide emissions. However, this environmental cost is often offset by concrete’s durability and long service life.

Concrete’s design flexibility and structural integrity make it particularly effective for large-scale structures. So in the U.S., you’ll see it used for large commercial buildings, skyscrapers and most highways, bridges, dams and other critical infrastructure projects.

But when it comes to single-family homes, cast-in-place concrete poses challenges to contractors. There are the higher initial construction costs, along with a lack of subcontractor expertise. For these reasons, most builders and contractors stick with what they know: the wood frame.

A new model for home construction

Precast concrete, however, offers a promising alternative.

Unlike cast-in-place concrete, precast systems allow for off-site manufacturing under controlled conditions. This improves the quality of the structure, while also reducing waste and labor.

The CRETE House, a prototype I worked on in 2017 alongside a team at Washington University in St. Louis, showed the advantages of precast home construction.

To build the precast concrete home, we used ultra-high-performance concrete, one of the latest advances in the concrete industry. Compared with conventional concrete, it’s about six times stronger, virtually impermeable and more resistant to freeze-thaw cycles. Ultra-high-performance concrete can last several hundred years.

The strength of the CRETE House was tested by shooting a piece of wood at 120 mph (193 kph) to simulate flying debris from an F5 tornado. The projectile was unable to breach the wall, which was only 2 inches (5.1 centimeters) thick.

The wall of the CRETE House was able to withstand a piece of wood fired at 120 mph (193 kph).
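Some rough physics puts that test in perspective. The projectile mass below is an assumption based on common windborne-debris test standards, not a figure reported for the CRETE House experiment:

```python
# Back-of-the-envelope energy of the tornado-debris impact test.

MASS_KG = 6.8    # assumed ~15 lb lumber plank, typical of debris-impact tests
SPEED_MPH = 120  # the speed reported for the CRETE House test
MPH_TO_MS = 0.44704

speed_ms = SPEED_MPH * MPH_TO_MS        # ~53.6 m/s
energy_j = 0.5 * MASS_KG * speed_ms**2  # kinetic energy = 1/2 * m * v^2

print(f"Impact speed:   {speed_ms:.1f} m/s")
print(f"Kinetic energy: {energy_j:,.0f} J")  # ~9,800 J
# Comparable to dropping a 100 kg mass from 10 m -- stopped by a wall
# just 2 inches (5.1 cm) thick.
```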

Building on the success of the CRETE House, I designed the Compact House as a solution for affordable, resilient housing. The house consists of a modular, precast concrete system of “rings” that can be connected to form the entire structure – floors, walls and roofs – creating airtight, energy-efficient homes. A series of different rings can be chosen from a catalog to deliver different models that can range in size from 270 to 990 square feet (25 to 84 square meters).

The precast rings can be transported on flatbed trailers and assembled into a unit in a single day, drastically reducing on-site labor, time and cost.

Because the rings are cast in durable, reusable molds, the homes can be easily mass-produced. When precast concrete homes are mass-produced, the cost can be competitive with traditional wood-framed homes. Furthermore, the homes are designed to last far beyond 100 years – much longer than typical wood structures – while significantly lowering utility bills, maintenance expenses and insurance premiums.

The project is also envisioned as an open-source design. This means that the molds – which are expensive – are available for any precast producer to use and modify.

A computer graphic showing a prototype of a small, concrete home.
The Compact House is made using ultra-high-performance concrete.
Pablo Moyano Fernández, CC BY-SA

Leveraging a network that’s already in place

Two key limitations of precast concrete construction are the size and weight of the components and the distance to the project site.

Precast elements must comply with standard transportation regulations, which impose restrictions on both size and weight in order to pass under bridges and prevent road damage. As a result, components are typically limited to dimensions that can be safely and legally transported by truck. Each of the Compact House’s pieces is small enough to be transported on standard trailers.

Additionally, transportation costs become a major factor beyond a certain range. In general, the practical delivery radius from a precast plant to a construction site is 500 miles (805 kilometers). Anything beyond that becomes economically unfeasible.

However, the infrastructure to build precast concrete homes is already largely in place. Since precast concrete is often used for office buildings, schools, parking complexes and large apartment buildings, there’s already an extensive national network of manufacturing plants capable of producing and delivering components within that 500-mile radius.

There are other approaches to building homes with concrete: Homes can use concrete masonry units, which are similar to cinder blocks; this is a common technique around the world. Insulated concrete forms involve rigid foam blocks that are stacked like Lego bricks and then filled with poured concrete, creating a structure with built-in insulation. And there’s even 3D-printed concrete, a rapidly evolving technology that is in its early stages of development.

However, none of these use precast concrete modules – the rings in my prototypes – and therefore require substantially longer on-site time and labor.

To me, precast concrete homes offer a compelling vision for the future of affordable housing. They signal a generational shift away from short-term construction and toward long-term value – redefining what it means to build for resilience, efficiency and equity in housing.

A bird's-eye view of a computer-generated neighborhood featuring plots of land with multiple concrete homes located on them.
An image of North St. Louis, taken from Google Earth, showing how vacant land can be repurposed using precast concrete homes.
Pablo Moyano Fernández, CC BY-SA

This article is part of a series centered on envisioning ways to deal with the housing crisis.

The Conversation

Pablo Moyano Fernández does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

ref. To spur the construction of affordable, resilient homes, the future is concrete – https://theconversation.com/to-spur-the-construction-of-affordable-resilient-homes-the-future-is-concrete-254561

3D-printed model of a 500-year-old prosthetic hand hints at life of a Renaissance amputee

Source: The Conversation – By Heidi Hausse, Associate Professor of History, Auburn University

Technology is more than just mechanisms and design — it’s ultimately about people.
Adriene Simon/College of Liberal Arts, Auburn University, CC BY-SA

To think about an artificial limb is to think about a person. It’s an object of touch and motion made to be used, one that attaches to the body and interacts with its user’s world.

Historical artifacts of prosthetic limbs are far removed from this lived context. Their users are gone. They are damaged – deteriorated by time and exposure to the elements. They are motionless, kept on display or in museum storage.

Yet, such artifacts are rare direct sources into the lives of historical amputees. We focus on the tools amputees used in 16th- and 17th-century Europe. There are few records written from amputees’ perspectives at that time, and those that exist say little about what everyday life with a prosthesis was like.

Engineering offers historians new tools to examine physical evidence. This is particularly important for the study of early modern mechanical hands, a new kind of prosthetic technology that appeared at the turn of the 16th century. Most of the artifacts are of unknown provenance. Many work only partially and some not at all. Their practical functions remain a mystery.

But computer-aided design software can help scholars reconstruct the artifacts’ internal mechanisms. This, in turn, helps us understand how the objects once moved.

Even more exciting, 3D printing lets scholars create physical models. Rather than imagining how a Renaissance prosthesis worked, scholars can physically test one. It’s a form of investigation that opens new possibilities for exploring the development of prosthetic technology and user experience through the centuries. It creates a trail of breadcrumbs that can bring us closer to the everyday experiences of premodern amputees.

But what does this work, which brings together two very different fields, look like in action?

What follows is a glimpse into our experience of collaboration on a team of historians and engineers, told through the story of one week. Working together, we shared a model of a 16th-century prosthesis with the public and learned a lesson about humans and technology in the process.

A historian encounters a broken model

THE HISTORIAN: On a cloudy day in late March, I walked into the University of Alabama Birmingham’s Center for Teaching and Learning holding a weatherproof case and brimming with excitement. Nestled within the case’s foam inserts was a functioning 3D-printed model of a 500-year-old prosthetic hand.

Fifteen minutes later, it broke.

Mechanical hand with plastic orange fingers extending from a plastic gray palm and wrist
This 3D-printed model of a 16th-century hand prosthesis has working mechanisms.
Heidi Hausse, CC BY-SA

For two years, my team of historians and engineers at Auburn University had worked tirelessly to turn an idea – recreating the mechanisms of a 16th-century artifact from Germany – into reality. The original iron prosthesis, the Kassel Hand, is one of approximately 35 from Renaissance Europe known today.

As an early modern historian who studies these artifacts, I work with a mechanical engineer, Chad Rose, to find new ways to explore them. The Kassel Hand is our case study. Our goal is to learn more about the life of the unknown person who used this artifact 500 years ago.

Using 3D-printed models, we’ve run experiments to test what kinds of activities its user could have performed with it. We modeled in inexpensive polylactic acid – plastic – to make this fragile artifact accessible to anyone with a consumer-grade 3D printer. But before sharing our files with the public, we needed to see how the model fared when others handled it.

An invitation to guest lecture on our experiments in Birmingham was our opportunity to do just that.

We brought two models. The main release lever broke first in one and then the other. This lever has an interior triangular plate connected to a thin rod that juts out of the wrist like a trigger. After pressing the fingers into a locked position, pulling the trigger is the only way to free them. If it breaks, the fingers become stuck.

Close-up of the interior mechanism of a 3D-printed prosthetic, the broken lever raised straight up
The thin rod of the main release lever snapped in this model.
Heidi Hausse, CC BY-SA

I was baffled. During testing, the model had lifted a 20-pound simulation of a chest lid by its fingertips. Yet, the first time we shared it with a general audience, a mechanism that had never broken in testing simply snapped.

Was it a printing error? Material defect? Design flaw?

We consulted our Hand Whisperer: our lead student engineer whose feel for how the model works appears at times preternatural.

An engineer becomes a hand whisperer

THE ENGINEER: I was sitting at my desk in Auburn’s mechanical engineering 3D print lab when I heard the news.

As a mechanical engineering graduate student concentrating on additive manufacturing, commonly known as 3D printing, I explore how to use this technology to reconstruct historical mechanisms. Over the two years I’ve worked on this project, I’ve come to know the Kassel Hand model well. As we fine-tuned designs, I’ve created and edited its computer-aided design files – the digital 3D constructions of the model – and printed and assembled its parts countless times.

Computer illustration of open hand model
This view of the computer-aided design file of a strengthened version of the model, which includes ribs and fillets to reinforce the plastic material, highlights the main release lever in orange.
Peden Jones, CC BY-SA

Examining parts midassembly is a crucial checkpoint for our prototypes. This quality control catches, corrects and prevents defects, such as misprinted or damaged parts, and it’s essential for creating consistent and repeatable experiments. A new model version or component change never leaves the lab without passing rigorous inspection. This process means there are ways this model has behaved over time that the rest of the team has never seen. But I have.

So when I heard the release lever had broken in Birmingham, it was just another Thursday. While it had never snapped when we tested the model on people, I’d seen it break plenty of times while performing checks on components.

Disassembled hand model
Our model reconstructs the Kassel Hand’s original metal mechanisms in plastic.
Heidi Hausse, CC BY-SA

After all, the model is made from relatively weak polylactic acid. Perhaps the most difficult part of our work is making a plastic model as durable as possible while keeping it visually consistent with the 500-year-old original. The iron rod of the artifact’s lever can handle far more force than our plastic version; iron’s yield strength is at least five times that of the plastic.
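A simplified beam calculation illustrates that gap. The rod dimensions and strength values below are illustrative assumptions, not measurements of the artifact or the printed part; the trigger rod is treated as a small cantilever:

```python
# Simplified cantilever model of the release lever's trigger rod.
# All dimensions and strengths are assumed for illustration.

def max_tip_force_n(yield_mpa, side_mm, length_mm):
    """Tip load (N) at which bending stress at the root reaches yield."""
    z_mm3 = side_mm**3 / 6  # section modulus of a square cross-section
    return yield_mpa * z_mm3 / length_mm  # sigma = F*L/Z  ->  F = sigma*Z/L

SIDE_MM, LENGTH_MM = 3.0, 15.0  # assumed rod cross-section and length
PLA_YIELD_MPA = 50              # typical printed PLA (assumed)
IRON_YIELD_MPA = 250            # wrought iron, roughly 5x PLA (assumed)

print(f"PLA rod yields near  {max_tip_force_n(PLA_YIELD_MPA, SIDE_MM, LENGTH_MM):.0f} N")
print(f"Iron rod yields near {max_tip_force_n(IRON_YIELD_MPA, SIDE_MM, LENGTH_MM):.0f} N")
# With identical geometry, capacity scales one-to-one with yield strength,
# so the printed trigger gives up at about one-fifth the iron rod's load.
```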

I suspected the lever had snapped because people pulled the trigger too far back and too quickly. The challenge, then, was to prevent this. But redesigning the lever to be thicker or a different shape would make it less like the historical artifact.

This raised the question: Why could I use the model without breaking the lever, but no one else could?

The team makes a plan

THE TEAM: A flurry of discussion led to growing consensus – the crux of the issue was not the model, it was the user.

The original Kassel Hand’s wearer would have learned to use their prosthesis through practice. Likewise, our team had learned to use the model over time. Through the process of design and development, prototyping and printing, we were inadvertently practicing how to operate it.

We needed to teach others to do the same. And this called for a two-pronged approach.

A modern prosthetist’s perspective on using the Kassel Hand.

The engineers reexamined the opening through which the release trigger poked out of the model. They proposed shortening it to limit how far back users could pull it. When we checked how this change would affect the model’s accuracy, we found that a smaller opening was actually closer to the artifact’s dimensions. While the larger opening had been necessary for an earlier version of the release lever that needed to travel farther, now it only caused problems. The engineers got to work.

The historians, meanwhile, created plans to document and share the various techniques for operating the model that the team hadn’t realized it had honed. To teach someone at home how to operate their own copy, we filmed a short video explaining how to lock and release the fingers and troubleshoot when a finger sticks.

Testing the plan

Exactly one week after what we called “the Birmingham Break,” we shared the model with a general audience again. This time we visited a colleague’s history class at Auburn.

We brought four copies. Each had an insert to shorten the opening around the trigger. First, we played our new instructional video on a projector. Then we turned the models over to the students to try.

Four mechanical hand models on display, each slightly different in design
The team brought these four models with inserts to shorten the opening below the release trigger to test with a general audience of undergraduate and graduate students.
Heidi Hausse, CC BY-SA

The result? Not a single broken lever. We publicly launched the project on schedule.

The process of introducing the Kassel Hand model to the public highlights that just as the 16th-century amputee who wore the artifact had to learn to use it, one must learn to use the 3D-printed model, too.

It is a potent reminder that technology is not just a matter of mechanisms and design. It is fundamentally about people – and how people use it.

The Conversation

Heidi Hausse received funding from the Herzog August Bibliothek; the Consortium for History of Science, Technology and Medicine; the American Council of Learned Societies; the Huntington Library; the Society of Fellows in the Humanities at Columbia University; and the Renaissance Society of America.

Peden Jones received funding from the Renaissance Society of America.

ref. 3D-printed model of a 500-year-old prosthetic hand hints at life of a Renaissance amputee – https://theconversation.com/3d-printed-model-of-a-500-year-old-prosthetic-hand-hints-at-life-of-a-renaissance-amputee-256670

Trump administration aims to slash funds that preserve the nation’s rich architectural and cultural history

Source: – By Michael R. Allen, Visiting Assistant Professor of History, West Virginia University

The iconic ‘Walking Man’ Hawkes sign in Westbrook, Maine, was added to the National Register of Historic Places in 2019. Ben McCanna/Portland Portland Press Herald via Getty Images

President Donald Trump’s proposed fiscal year 2026 discretionary budget is called a “skinny budget” because it’s short on line-by-line details.

But historic preservation efforts in the U.S. did get a mention – and they might as well be skinned to the bone.

Trump has proposed to slash funding for the federal Historic Preservation Fund to only $11 million, which is $158 million less than the fund’s previous reauthorization in 2024. The presidential discretionary budget, however, always heads to Congress for appropriation. And Congress always makes changes.

That said, the Trump administration hasn’t even released the $188 million that Congress appropriated for the fund for the 2025 fiscal year, essentially impounding the funding stream that Congress created in 1976 for historic preservation activities across the nation.

I’m a scholar of historic preservation who’s worked to secure historic designations for buildings and entire neighborhoods. I’ve worked on projects that range from making distressed neighborhoods in St. Louis eligible for historic tax credits to surveying Cold War-era hangars and buildings on seven U.S. Air Force bases.

I’ve seen the ways in which the Historic Preservation Fund helps local communities maintain and rehabilitate their rich architectural history, sparing it from deterioration, the wrecking ball or the pressures of the private market.

A rare, deficit-neutral funding model

Most Americans probably don’t realize that the task of historic preservation largely falls to individual states and Native American tribes.

The National Historic Preservation Act that President Lyndon B. Johnson signed into law in 1966 requires states and tribes to handle everything from identifying potential historic sites to reviewing the impact of interstate highway projects on archaeological sites and historic buildings. States and tribes are also responsible for reviewing nominations of sites to the National Register of Historic Places, the nation’s official list of properties deemed worthy of preservation.

However, many states and tribes didn’t have the capacity to adequately tackle the mandates of the 1966 act. So the Historic Preservation Fund was formed a decade later to alleviate these costs by funneling federal resources into these efforts.

The fund is actually the product of a conservative, limited-government approach.

Created during Gerald Ford’s administration, it has a revenue-neutral model, meaning that no tax dollars pay for the program. Instead, it’s funded by private lease royalties from the Outer Continental Shelf oil and gas reserves.

Most of these reserves are located in federal waters in the Gulf of Mexico and off the coast of Alaska. Private companies that receive a permit to extract from them must agree to a lease with the federal government. Royalties from their oil and gas sales accrue in federally controlled accounts under the terms of these leases. The Office of Natural Resources Revenue then directs 1.5% of the total royalties to the Historic Preservation Fund.
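For a sense of scale, here is an illustrative back-of-the-envelope calculation. The 1.5% share comes from the formula described above; the royalty totals are purely hypothetical.

```python
# Illustrative arithmetic only: how the fund's 1.5% share scales with total
# Outer Continental Shelf royalties. The royalty totals are hypothetical.
HPF_SHARE = 0.015

for royalties in (5e9, 10e9, 12.5e9):  # hypothetical annual royalty totals
    print(f"${royalties / 1e9:.1f}B in royalties -> "
          f"${royalties * HPF_SHARE / 1e6:.0f}M for the fund")
```

At that rate, the $188 million Congress appropriated for fiscal year 2025 would correspond to roughly $12.5 billion in annual royalties.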

Congress must continually reauthorize the amount of funding reserved for the Historic Preservation Fund, or it goes unfunded.

A plaque honoring Fenway Park is displayed on an easel on a baseball field.
Boston’s Fenway Park was added to the National Register of Historic Places in 2012, making it eligible for preservation grants and federal tax incentives.
Winslow Townson/Getty Images

Despite bipartisan support, the fund has been threatened in the past. President Ronald Reagan attempted to do exactly what Trump is doing now by making no request for funding at all in his 1983 budget. Yet the fund has nonetheless been reauthorized six times since its inception, with terms ranging from five to 10 years.

The program is a crucial source of funding, particularly in small towns and rural America, where privately raised cultural heritage funds are harder to come by. It provides grants for the preservation of buildings and geographical areas that hold historical, cultural or spiritual significance in underrepresented communities. And it’s even involved in projects tied to the nation’s 250th birthday in 2026, such as the rehabilitation of the home in New Jersey where George Washington was stationed during the winter of 1778-79 and the restoration of Rhode Island’s Old State House.

Filling financial gaps

I’ve witnessed the fund’s impact firsthand in small communities across the nation.

Edwardsville, Illinois, a suburb of St. Louis, is home to the Leclaire Historic District, which was added to the National Register of Historic Places in the 1970s. The national designation recognized the district’s historic significance, protecting it against adverse impacts from federally funded infrastructure projects. It also made tax credits available to the town. Edwardsville then designated Leclaire a local historic district so that it could legally protect the distinctive architectural features of its homes, from original decorative details to the layouts of front porches.

Despite the designation, however, there was no clear inventory of the hundreds of houses in the district. A few paid staffers and a volunteer citizen commission not only had to review proposed renovations and demolitions, but also had to figure out which buildings even contributed to Leclaire’s significance and which ones did not – and thus did not need to be tied up in red tape.

Black and white photo of family standing in front of their home.
The Allen House is one of approximately 415 single-family homes in the Leclaire neighborhood in Edwardsville, Ill.
Friends of Leclaire

Edwardsville was able to secure a grant through the Illinois State Historic Preservation Office thanks to a funding match enabled by money disbursed to Illinois via the Historic Preservation Fund.

In 2013, my team created an updated inventory of the historic district, making it easier for the local commission to determine which houses should be reviewed carefully and which ones don’t need to be reviewed at all.

Oil money better than no money

The historic preservation field, not surprisingly, has come out strongly against Trump’s proposal to defund the Historic Preservation Fund.

Nonetheless, there have been debates within the field over the fund’s dependence on the fossil fuel industry, which was the trade-off that preservationists made decades ago when they crafted the funding model.

In the 1970s, amid the national energy crisis, conservation of existing buildings was seen as a worthy ecological goal, since demolition and new construction required fossil fuels. To preservationists, diverting federal carbon royalties toward saving old buildings seemed only fitting.

But with the effects of climate change becoming impossible to ignore, some preservationists are starting to more openly critique both the ethics and the wisdom of tapping into a pool of money created through the profits of the oil and gas industry. I’ve recently wondered myself if continued depletion of fossil fuels means that preservationists won’t be able to count on the Historic Preservation Fund as a long-term source of funding.

That said, you’d be hard-pressed to find a preservationist who thinks that destroying the Historic Preservation Fund would be a good first step in shaping a more visionary policy.

For now, Trump’s administration has only sown chaos in the field of historic preservation. Already, Ohio has laid off one-third of the staffers in its State Historic Preservation Office due to the impoundment of federal funds. More state preservation offices may follow suit. The National Council of State Historic Preservation Officers predicts that states soon could be unable to perform their federally mandated duties.

Unfortunately, many people advocating for places important to their towns and neighborhoods may end up learning the hard way just what the Historic Preservation Fund does.

The Conversation

Michael R. Allen is a member of the Advisor Leadership Team of the National Trust for Historic Preservation.

ref. Trump administration aims to slash funds that preserve the nation’s rich architectural and cultural history – https://theconversation.com/trump-administration-aims-to-slash-funds-that-preserve-the-nations-rich-architectural-and-cultural-history-258889

Philly psychology students map out local landmarks and hidden destinations where they feel happiest

Source: – By Eric Zillmer, Professor of Neuropsychology, Drexel University

Rittenhouse Square Park in Center City made it onto the Philly Happiness Map. Matthew Lovette/Jumping Rocks/Universal Images Group via Getty Images

What makes you happy? Perhaps a good night’s sleep, or a wonderful meal with friends?

I am the director of the Happiness Lab at Drexel University, where I also teach a course on happiness. The Happiness Lab is a think tank that investigates the ingredients that contribute to people’s happiness.

Often, my students ask me something along the lines of, “Dr. Z, tell us one thing that will make us happier.”

As a first step, I advise them to spend more time outside.

Achieving lasting and sustainable happiness is more complicated. Research on the happiest countries in the world and the places where people live the longest, known as Blue Zones, shows a common thread: Residents feel they are part of something larger than themselves, such as a community or a city.

So if you’re living in a metropolis like Philadelphia, where, incidentally, the Declaration of Independence enshrined the iconic phrase “the pursuit of happiness,” I believe urban citizenship – that is, forming an identity with your urban surroundings – should also be on your list.

A small boat floats in blue-green waters in front of a picturesque village.
The Greek island of Ikaria in the Aegean Sea is a Blue Zone famous for its residents’ longevity.
Nicolas Economou/NurPhoto via Getty Images

Safety, social connection, beauty

Carl Jung, the renowned Swiss psychoanalyst, wrote extensively about the relationship between our internal world and our external environment.

He believed that this relationship was crucial to our psychological well-being.

More recent research in neuroscience and functional imaging has revealed a vast, intricate and complex neurological architecture underlying our psychological perception of a place. Numerous neurological pathways and functional loops transform a complex neuropsychological process into a simple realization: I am happy here!

For example, a happy place should feel safe.

Croatia, a tourist haven known for its beauty and culinary delights, is also one of the top 20 safest countries globally, according to the 2025 Global Peace Index.

The U.S. ranks 128th.

The availability of good food and drink can also be a significant factor in creating a happy place.

However, according to American psychologist Abraham Maslow, a pioneer in the field of positive psychology, the opportunity for social connection, meaningful experiences and a sense of belonging is more crucial.

Furthermore, research on happy places suggests that they are beautiful. It should not come as a surprise that the happiest places in the world are also drop-dead gorgeous, such as the Indian Ocean archipelago of Mauritius, which is the happiest country in Africa, according to the 2025 World Happiness Report from the University of Oxford and others.

Happy places often provide access to nature and promote active lifestyles, which can help relieve stress. The residents of the island of Ikaria in Greece, for example, one of the original Blue Zones, demonstrate high levels of physical activity and social interaction.

A Google map display on right with a list of mapped locations on the left.
A map of 28 happy places in Philadelphia, based on 243 survey responses from Drexel students.
The Happiness Lab at Drexel University

Philly Happiness Map

I asked my undergraduate psychology students at Drexel, many of whom come from other cities, states and countries, to pick one place in Philadelphia where they feel happy.

From the 243 student responses, the Happiness Lab curated 28 Philly happy places, based on how frequently the places were endorsed and their accessibility.
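As a rough illustration of that kind of curation – a hypothetical sketch, not the Happiness Lab’s actual pipeline, with invented responses and accessibility flags – the tally might look like this:

```python
# Hypothetical sketch of curating "happy places" from survey responses.
# This is not the Happiness Lab's actual pipeline; the responses and the
# accessibility flags below are invented for illustration.
from collections import Counter

responses = [
    "Rittenhouse Square", "Cira Green", "Rittenhouse Square",
    "Mango Mango Dessert", "Cira Green", "Rittenhouse Square",
]

# Invented flags for whether a visitor can reach the place freely.
publicly_accessible = {
    "Rittenhouse Square": True,
    "Cira Green": True,
    "Mango Mango Dessert": True,
}

endorsements = Counter(responses)

# Keep places endorsed at least twice and open to the public,
# ranked by how often students named them.
curated = [
    (place, count)
    for place, count in endorsements.most_common()
    if count >= 2 and publicly_accessible.get(place, False)
]
print(curated)  # [('Rittenhouse Square', 3), ('Cira Green', 2)]
```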

Philadelphia’s founder, William Penn, would likely approve of the fact that Rittenhouse Square Park and three other public squares – Logan, Franklin and Washington – were included. These squares were vital to Penn’s vision of landscaped public parks that promote the health of the mind and body by providing “salubrious spaces similar to the private garden.” They are beautiful and approachable, serving as “places to rest, take a pause, work, or read a book,” one student told us.

Places such as the Philadelphia Zoo, Penn’s Landing and the Philadelphia Museum of Art are “joyful spots that are fun to explore, and one can also take your parents along if need be,” as another student described.

The Athenaeum of Philadelphia, a historic library with eclectic programming, feels to one student like “coming home, a perfect third place.”

Some students mentioned happy places that are less well known. These include tucked-away gardens such as John F. Collins Park at 1707 Chestnut St., the rooftop Cira Green at 129 S. 30th St. and the James G. Kaskey Memorial Park and BioPond at 433 S. University Ave.

A stone-lined brick path extends through a nicely landscaped outdoor garden area.
The James G. Kaskey Memorial Park and BioPond in West Philadelphia is an urban oasis.
M. Fischetti for Visit Philadelphia

My students said these are small, unexpected spots that provide an excellent opportunity for a quiet, peaceful break – a chance to be present – whether enjoyed alone or with a friend. I checked them out, and I agree.

The students also mentioned places I had never heard of even though I’ve lived in the city for over 30 years.

The “cat park” at 526 N. Natrona St. in Mantua is a quiet little park with an eclectic personality and lots of friendly cats.

Mango Mango Dessert at 1013 Cherry St. in Chinatown – a frequently endorsed happiness spot among the students because of the neighborhood’s “bustling streets, lively atmosphere and delicious food” – is a perfect pit stop for mango lovers. And Maison Sweet, at 2930 Chestnut St. in University City, is a casual bakery and cafe “where you may end up staying longer than planned,” one student shared.

I find that Philly’s happy places, as seen through the eyes of college students, tend to offer a space for residents to take time out from their day to pause, reset, relax and feel more connected and in touch with the city.

Happiness principles are universal, yet our own journeys are very personal. Philadelphians across the city may have their own lists of happy places. There are really no right or wrong answers. If you don’t have a personal happy place, just start exploring – you may be surprised by what you find, including a new sense of happiness.

See the full Philly Happiness Map list here, and visit the exhibit at the W.W. Hagerty Library at Drexel University to learn more.

Read more of our stories about Philadelphia.

The Conversation

Eric Zillmer does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

ref. Philly psychology students map out local landmarks and hidden destinations where they feel happiest – https://theconversation.com/philly-psychology-students-map-out-local-landmarks-and-hidden-destinations-where-they-feel-happiest-258790

A preservative removed from childhood vaccines 20 years ago is still causing controversy today − a drug safety expert explains

Source: – By Terri Levien, Professor of Pharmacy, Washington State University

A discredited study published in 1998 helped spark unfounded fears linking vaccines, and later the preservative thimerosal, to autism. Flavio Coelho/Moment via Getty Images

An expert committee that advises the Centers for Disease Control and Prevention voted on June 26, 2025, to cease recommending the use of a mercury-based chemical called thimerosal in flu vaccines. Only a small number of flu vaccines – ones that are produced in multi-dose vials – currently contain thimerosal.

Thimerosal is almost never used in vaccines anymore, but vaccine skeptics have falsely claimed it carries health risks to the brain. Public health experts have raised concerns that the committee’s action against thimerosal may shake public trust and sow confusion about the safety of vaccines.

The committee, called the Advisory Committee on Immunization Practices, or ACIP, was meeting for the first time since Health Secretary Robert F. Kennedy Jr. abruptly replaced its 17 members with eight handpicked ones on June 11.

The committee generally discusses and votes on recommendations for specific vaccines. For this meeting, vaccines for COVID-19, human papillomavirus, influenza and other infectious diseases were on the schedule.

I’m a pharmacist and expert on drug information with 35 years of experience critically evaluating the safety and effectiveness of medications in clinical trials. No evidence supports the idea that thimerosal, used as a preservative in vaccines, is unsafe or carries any health risks.

What is thimerosal?

Thimerosal, also known as thiomersal, is a preservative that has been used in some drug products since the 1930s because it prevents contamination by killing microbes and inhibiting their growth.

In the human body, thimerosal is metabolized, or changed, to ethylmercury, an organic derivative of mercury. Studies in infants have shown that ethylmercury is quickly eliminated from the blood.

Even though thimerosal is no longer used in childhood vaccines, many parents still worry about whether it can harm their kids.

Ethylmercury is sometimes confused with methylmercury. Methylmercury is known to be toxic and is associated with many negative effects on brain development even at low exposure. Environmental researchers identified the neurotoxic effects of mercury in children in the 1970s, primarily resulting from exposure to methylmercury in fish. In the 1990s, the Environmental Protection Agency and the Food and Drug Administration established limits for maximum recommended exposure to methylmercury, especially for children, pregnant women and women of childbearing age.

Why is thimerosal controversial?

Fears about the safety of thimerosal in vaccines spread for two reasons.

First, in 1998, a now discredited report was published in a major medical journal called The Lancet. In it, a British doctor named Andrew Wakefield described eight children who developed autism after receiving the MMR vaccine, which protects against measles, mumps and rubella. However, the patients were not compared with a control group of children who had not received the vaccine, so it was impossible to draw conclusions about the vaccine’s effects. The report’s data were also later found to be falsified. And the MMR vaccine that children received in that report never contained thimerosal.

Second, the federal guidelines on exposure limits for the toxic substance methylmercury came out around the same time as the Wakefield study’s publication. During that period, autism was becoming more widely recognized as a developmental condition, and its rates of diagnosis were rising. People who believed Wakefield’s results conflated methylmercury with ethylmercury and promoted the unfounded idea that ethylmercury from thimerosal in vaccines was driving the rising rates of autism.

The Wakefield study was retracted in 2010, and the U.K. General Medical Council found Wakefield guilty of dishonesty and of flouting ethics protocols, stripping him of his medical license. Subsequent studies have not shown a relationship between the MMR vaccine and autism, but despite the absence of evidence, the idea took hold and has proved difficult to dislodge.

Grumpy white baby giving side-eye to an older white male doctor about to administer a vaccine
The Wakefield study severely damaged many parents’ faith in the MMR vaccine, even though its results were eventually shown to be fraudulent.
Peter Dazeley/The Image Bank, Getty Images

Have scientists tested whether thimerosal is safe?

No unbiased research to date has identified toxicity caused by ethylmercury in vaccines or a link between the substance and autism or other developmental concerns – and not from lack of looking.

A 1999 review conducted by the Food and Drug Administration in response to federal guidelines on limiting mercury exposure found no evidence of harm from thimerosal as a vaccine preservative other than rare allergic reactions. Even so, as a precautionary measure in response to concerns about exposure to mercury in infants, the American Academy of Pediatrics and the U.S. Public Health Service issued a joint statement in 1999 recommending removal of thimerosal from vaccines.

At that time, just one childhood vaccine was available only in a version that contained thimerosal as an ingredient. This was a vaccine called DTP, for diphtheria, tetanus and pertussis. Other childhood vaccines were either available only in formulations without thimerosal or could be obtained in versions that did not contain it.

By 2001, U.S. manufacturers had removed thimerosal from almost all vaccines – and from all vaccines in the childhood vaccination schedule.

In 2004, the U.S. Institute of Medicine Immunization Safety Review Committee reviewed over 200 scientific studies and concluded there is no causal relationship between thimerosal-containing vaccines and autism. Additional well-conducted studies reviewed independently by the CDC and by the FDA did not find a link between thimerosal-containing vaccines and autism or neuropsychological delays.

How is thimerosal used today?

In the U.S., most vaccines are now available in single-dose vials or syringes. Thimerosal is found only in multi-dose vials that are used to supply vaccines for large-scale immunization efforts – specifically, in a small number of influenza vaccines. It is not added to modern childhood vaccines, and people who get a flu vaccine can avoid it by requesting a vaccine supplied in a single-dose vial or syringe.

Thimerosal is still used in vaccines in some other countries to ensure continued availability of necessary vaccines. The World Health Organization continues to affirm that there is no evidence of toxicity in infants, children or adults exposed to thimerosal-containing vaccines.

This article was updated to include ACIP’s vaccine recommendations.

The Conversation

Terri Levien does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

ref. A preservative removed from childhood vaccines 20 years ago is still causing controversy today − a drug safety expert explains – https://theconversation.com/a-preservative-removed-from-childhood-vaccines-20-years-ago-is-still-causing-controversy-today-a-drug-safety-expert-explains-259442