Colleges teach the most valuable career skills when they don’t stick narrowly to preprofessional education

Source: The Conversation – USA – By Daniel V. McGehee, Professor of Industrial and Systems Engineering, University of Iowa

Tracking graduates’ earnings is just one way to measure the benefit of higher education. iStock/Getty Images Plus

Across state legislatures and in Congress, debates are intensifying about the value of funding certain college degree programs – and higher education, more broadly.

The growing popularity of professional graduate degrees over the past several decades – including programs in business administration and engineering management – has reshaped the economics of higher education. Unlike traditional academic graduate programs, which are often centered on research and scholarship, these professionally oriented degrees are designed primarily for workforce advancement and typically charge much higher tuition.

These programs are often expensive for students and are sometimes described as cash-cow degrees for colleges and universities, because the tuition revenue far exceeds the instructional costs.

Some universities and colleges also leverage their brands to offer online, executive or certificate-based versions of these programs, attracting many students from the U.S. and abroad who pay the full tuition. This steady revenue helps universities subsidize tuition for other students who cannot pay the full rate, among other things.

Yet a quiet tension underlies this evolution in higher education – the widening divide between practical, technical training and a comprehensive education that perhaps is more likely to encourage students to inquire, reflect and innovate as they learn.

An overlooked factor

Some states, including Texas, track salary data for graduates of every program to measure worth through short-term earnings. This approach may strike many students and their families as useful, but I believe it overlooks a part of what makes higher education valuable.

A healthy higher education system depends not only on producing employable graduates but also on cultivating citizens and leaders who can interpret uncertainty, question assumptions and connect ideas across disciplines.

When assessing disciplines such as English, philosophy, history and world languages, I think that we should acknowledge their contributions to critical thought, communication and ethical reasoning.

These academic disciplines encourage students to synthesize ideas, construct arguments and engage in meaningful debate. Law schools often draw their strongest students from these backgrounds because such disciplines nurture the analytical and rhetorical skills essential for navigating complex civic and legal issues.

Historically, poets and writers have often been among the first to be silenced by authoritarian regimes. It’s a reminder of the societal power of inquiry and expression that I believe higher education should protect.

A group of young people wear white jackets and stand around a dummy dressed with a pink blanket over it in a hospital bed.
Undergraduate students who want to become doctors or work in other specialized fields are often encouraged to take only classes that connect with their long-term career trajectory.
Glenn Beil/Florida A&M University via Getty Images

Why students stay on narrow professional paths

Students entering college today face significant pressure to choose what they might see as safe majors that will lead to a well-paying career. For aspiring physicians and engineers, the path is often scripted early, steering them toward the physical and biological sciences. High test scores, internships and other stepping stones are treated as nonnegotiable. Parents and peers can reinforce this mindset.

Most colleges and universities do not reward a future medical student who wants to major in comparative literature, or an engineering student who is spending time on philosophy.

Majors also typically impose their own course requirements on students, on top of a school’s general education requirements. This often leaves little room for students to experiment with different classes, especially in vocationally focused majors such as engineering.

As a result, I’ve seen many students trade curiosity for credentialing, believing that professional identity must come before intellectual exploration.

As someone who began my education in psychology and later transitioned into engineering, I have seen how different intellectual traditions approach the same human questions. Psychology teaches people to observe behavior and design experiments. Engineering trains students to model systems and optimize performance.

When combined, they help reveal how humans interact with technology and how technological solutions reshape human behavior.

In my view, these are questions neither field can answer alone.

Initiative is the missing ingredient

One of the most important and often overlooked ingredients in thriving high-tech, medical and business environments is initiative. I believe students in the humanities routinely practice taking initiative by framing questions, interpreting incomplete information and proposing original arguments. These skills are crucial for scientific or business innovation, but they are often not emphasized in structured science, technology, engineering and mathematics – or STEM – coursework.

Initiative involves the willingness to move first and to see around corners, defining the next what-if, rallying others and building something meaningful even when the path is uncertain.

To help my engineering students practice taking initiative, I often give them deliberately vague instructions – something they rarely experience in their coursework. Many students, even highly capable ones, hesitate to take initiative because their schooling experience has largely rewarded caution and compliance over exploration. They wait for clarity or for permission – not because they lack ability, but because they are afraid to be wrong.

Yet in business, research labs, design studios, hospitals and engineering firms, initiative is the quality employers most urgently need and cannot easily teach. Broader educational approaches help cultivate this confidence by encouraging students to interpret ambiguity rather than avoid it.

How teaching can evolve

Helping all students develop a sense of initiative and innovation requires university leaders to rethink what success looks like.

Universities can begin with achievable steps, such as rewarding cross-disciplinary teaching and joint appointments in promotion and tenure criteria.

At the University of Iowa’s Driving Safety Research Institute, where our teams blend engineering, medicine, public health and psychology, students quickly learn that a safe automated vehicle is not just a technical system but also a behavioral one. Understanding how human drivers respond to automation is as important as the algorithms that govern the vehicle.

Other institutions are modeling this approach of integrating social, behavioral and physical sciences.

Olin College of Engineering, a school in Needham, Massachusetts, builds every project around both technical feasibility and human context. Courses are often co-taught by humanities and engineering professors, and projects require students to articulate not only what they built but why it matters.

Still, integrating liberal and technical education is difficult in practice. Professional curricula often overflow with accreditation requirements. Faculty incentives reward specialization more than collaboration. Students and parents, anxious about debt and job security, hesitate to spend credits outside of a student’s major.

Rethinking what success means

I believe that higher education’s purpose is not to produce uniform workers but adaptable thinkers.

It might not be productive to center conversations about defending the liberal arts or glorifying STEM. Rather, I think that people’s focus should be on recognizing that each field is incomplete without the other.

Education for a complex world must cultivate depth, initiative and perspective. When students connect disciplines, question assumptions and act with purpose, they are prepared not only for their first job but for a lifetime of learning and leadership.

The Conversation

Daniel V. McGehee does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

ref. Colleges teach the most valuable career skills when they don’t stick narrowly to preprofessional education – https://theconversation.com/colleges-teach-the-most-valuable-career-skills-when-they-dont-stick-narrowly-to-preprofessional-education-270025

From concrete to community: How synthetic data can make urban digital twins more humane

Source: The Conversation – USA – By Wei Zhai, Associate Professor of Public Affairs and Planning, University of Texas at Arlington

How people behave is a critical element of how cities function. Ahmed Deeb/picture alliance via Getty Images

When city leaders talk about making a town “smart,” they’re usually talking about urban digital twins. These are essentially high-tech, 3D computer models of cities. They are filled with data about buildings, roads and utilities. Built using precision tools like cameras and LiDAR – light detection and ranging – scanners, these twins are great at showing what a city looks like physically.

But in their rush to map the concrete, researchers, software developers and city planners have missed the most dynamic part of urban life: people. People move, live and interact inside those buildings and on those streets.

This omission creates a serious problem. While an urban digital twin may perfectly replicate the buildings and infrastructure, it often ignores how people use the parks, walk on the sidewalks, or find their way to the bus. This is an incomplete picture; it cannot truly help solve complex urban challenges or guide fair development.

To overcome this problem, digital twins will need to widen their focus beyond physical objects and incorporate realistic human behaviors. Though there is ample data about a city’s inhabitants, using it poses a significant privacy risk. I’m a public affairs and planning scholar. My colleagues and I believe the solution to producing more complete urban digital twins is to use synthetic data that closely approximates real people’s data.

Digital twins are more than simulations.

The privacy barrier

To build a humane, inclusive digital twin, it’s critical to include detailed data on how people behave. And the model should represent the diversity of a city’s population, including families with young children, disabled residents and retirees. Unfortunately, relying solely on real-world data is impractical and ethically challenging.

The primary obstacles are significant, starting with strict privacy laws. Rules such as the European Union’s General Data Protection Regulation, or GDPR, often prevent researchers and others from widely sharing sensitive personal information. This wall of privacy stops researchers from easily comparing results and limits our ability to learn from past studies.

Furthermore, real-world data is often unfair. Data collection tends to be uneven, missing large groups of people. Training a computer model using data where low-income neighborhoods have sparse sensor coverage means the model will simply repeat and even magnify that original unfairness. To compensate for this, researchers can use the statistical technique of weighting the data in the models to make up for the underrepresentation.
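One common version of that weighting technique, sketched here in plain Python, is inverse-probability weighting: each group’s records are weighted by the reciprocal of the fraction of that group the sensors actually capture. The neighborhoods and sampling rates below are hypothetical, chosen only for illustration.

```python
# Hypothetical illustration of inverse-probability weighting: if sensors
# capture only 20% of residents in one neighborhood but 80% in another,
# upweight the undersampled group so each contributes in proportion to
# its true population.

def inverse_probability_weights(sampling_rates):
    """Weight each group by the reciprocal of its sampling rate."""
    return {group: 1.0 / rate for group, rate in sampling_rates.items()}

# Assumed sampling rates: the fraction of each group the sensors see.
rates = {"low_income": 0.2, "high_income": 0.8}
weights = inverse_probability_weights(rates)

# Each observed low-income record now stands in for 5 residents, each
# high-income record for 1.25, correcting the coverage imbalance.
print(weights)  # {'low_income': 5.0, 'high_income': 1.25}
```

In practice, planners would estimate the sampling rates from census or survey data rather than assuming them, but the correction has this same shape.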

Synthetic data offers a practical solution. It is artificial information generated by computers that mimics the statistical patterns of real-world data. This protects privacy while filling critical data gaps.

Synthetic data: Tool for fairer cities

Adding synthetic human dynamics fundamentally changes digital twins. It shifts them from static models of infrastructure to dynamic simulations that show how people live in the city. By generating synthetic patterns of walking, bus riding and public space use, planners can include a wider, more inclusive range of human actions in the models.

For example, Bogotá, Colombia, is using a digital twin to model its TransMilenio bus rapid transit system. Instead of relying only on limited or privacy-sensitive real-world sensor data, the city planners generated synthetic data to populate the digital twin. The process artificially creates millions of simulated bus arrivals, vehicle speeds and queue lengths, all based on the statistical patterns – peak times, off-peak times – of actual TransMilenio operations.
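The core idea can be sketched in a few lines of Python. This is a minimal illustration of generating synthetic arrivals from a statistical pattern, not the actual Bogotá pipeline; the peak and off-peak headway means are invented for the example.

```python
# Draw simulated bus inter-arrival times (headways) whose statistical
# pattern mimics real operations: frequent buses at rush hour, sparser
# service off-peak. No real passenger or vehicle records are involved.
import random

def synthetic_headways(n, mean_minutes, seed=None):
    """Draw n exponentially distributed headways with the given mean."""
    rng = random.Random(seed)
    return [rng.expovariate(1.0 / mean_minutes) for _ in range(n)]

# Simulated rush-hour service (a bus every ~3 min) vs. midday (~10 min).
peak = synthetic_headways(1000, mean_minutes=3.0, seed=42)
off_peak = synthetic_headways(1000, mean_minutes=10.0, seed=7)

# The synthetic streams preserve the aggregate pattern (peak buses
# arrive more often) while containing no real-world records.
print(sum(peak) / len(peak))          # close to 3
print(sum(off_peak) / len(off_peak))  # close to 10
```

A production system would fit these distributions to observed operations data before discarding the raw records, so only the statistical patterns survive into the twin.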

This approach transforms urban planning in several crucial ways, making simulations more realistic and diverse. For example, planners can use synthetic pedestrian data to model how elderly and disabled residents would navigate a new urban design.

It also allows for risk-free testing of ideas. Planners can simulate diverse synthetic populations to see how a new flood evacuation plan would affect various groups, all without risking anyone’s safety or privacy in the real world.

Cities are increasingly building digital twins for planning and development.

Making digital twins trustworthy

For all the promises of synthetic data, it can only be helpful if planners can trust it. Since they base major decisions on these virtual worlds, the synthetic data must be proved to be a reliable replacement for real-world data. Planners can test this by checking to see if the main policy decisions they reach using the synthetic data are the same ones they would have made using real-world data that puts people’s privacy at risk. If the decisions match, the synthetic data is trustworthy enough to use for that planning task going forward.

Beyond technical checks, it’s important to consider fairness. This means routinely auditing the synthetic models to check for any hidden biases or underrepresentation across different groups. For example, planners can make sure an emergency evacuation plan in the urban digital twin works for elderly residents with mobility issues.

Most importantly, I believe planners should include their communities. Establishing citizen advisory boards and designing the synthetic data and simulation scenarios directly with the people who live in the city helps ensure that their experiences are accurately reflected.

By moving beyond static infrastructure to dynamic environments that include people’s behavior, synthetic data is set to play a critical role in urban planning. It will shape the resilient, inclusive and human-centered urban digital twins of the future.

The Conversation

Wei Zhai receives funding from National Science Foundation.

ref. From concrete to community: How synthetic data can make urban digital twins more humane – https://theconversation.com/from-concrete-to-community-how-synthetic-data-can-make-urban-digital-twins-more-humane-268847

The ChatGPT effect: In 3 years the AI chatbot has changed the way people look things up

Source: The Conversation – USA – By Deborah Lee, Professor and Director of Research Impact and AI Strategy, Mississippi State University

ChatGPT has become the go-to app for hundreds of millions of people. AP Photo/Kiichiro Sato

Three years ago, if someone needed to fix a leaky faucet or understand inflation, they usually did one of three things: typed the question into Google, searched YouTube for a how-to video or shouted desperately at Alexa for help.

Today, millions of people start with a different approach: They open ChatGPT and just ask.

I’m a professor and director of research impact and AI strategy at Mississippi State University Libraries. As a scholar who studies information retrieval, I see that this shift in which tool people reach for first when looking up information is at the heart of how ChatGPT has changed everyday technology use.

Change in searching

The biggest change isn’t that other tools have vanished. It’s that ChatGPT has become the new front door to information. Within months of its introduction on Nov. 30, 2022, ChatGPT had 100 million weekly users. By late 2025, that figure had grown to 800 million. That makes it one of the most widely used consumer technologies on the planet.

Surveys show that this use isn’t just curiosity – it reflects a real change in behavior. A 2025 Pew Research Center study found that 34% of U.S. adults have used ChatGPT, roughly double the share found in 2023. Among adults under 30, a clear majority (58%) have tried it. An AP-NORC poll reports that about 60% of U.S. adults who use AI say they use it to search for information, making this the most common AI use case. The number rises to 74% for the under-30 crowd.

Traditional search engines are still the backbone of the online information ecosystem, but the kind of searching people do has shifted in measurable ways since ChatGPT entered the scene. People are changing which tool they reach for first.

For years, Google was the default for everything from “how to reset my router” to “explain the debt ceiling.” These basic informational queries made up a huge portion of search traffic. But these quick, clarifying, everyday “what does this mean” questions are the ones ChatGPT now answers faster and more cleanly than a page of links.

And people have noticed. A 2025 U.S. consumer survey found that 55% of respondents now use OpenAI’s ChatGPT or Google’s Gemini AI chatbots for tasks they previously would have asked Google search to help them with, with even higher usage figures for the U.K. Another analysis of more than 1 billion search sessions found that traffic from generative AI platforms is growing 165 times faster than traditional searches, and about 13 million U.S. adults have already made generative AI their go-to tool for online discovery.

This doesn’t mean people have stopped “Googling,” but it means ChatGPT has peeled off the kinds of questions for which users want a direct explanation instead of a list of links. Curious about a policy update? Need a definition? Want a polite way to respond to an uncomfortable email? ChatGPT is faster, feels more conversational and feels more definitive.

At the same time, Google isn’t standing still. Its search results look different than they did three years ago because Google started weaving its AI system Gemini directly into the top of the page. The “AI Overview” summaries that appear above traditional search links now instantly answer many simple questions – sometimes accurately, sometimes less so.

But either way, many people never scroll past that AI-generated snapshot. This fact, combined with the impact of ChatGPT, is the reason the number of “zero-click” searches has surged. One report using Similarweb data found that traffic from Google to news sites fell from over 2.3 billion visits in mid-2024 to under 1.7 billion in May 2025, while the share of news-related searches ending in zero clicks jumped from 56% to 69% in one year.

Google search excels at pointing to a wide range of sources and perspectives, but the results can feel cluttered and designed more for clicks than clarity. ChatGPT, by contrast, delivers a more focused and conversational response that prioritizes explanation over ranking. The ChatGPT response can lack the source transparency and multiple viewpoints often found in a Google search.

In terms of accuracy, both tools can occasionally get it wrong. Google’s strength lies in letting users cross-check multiple sources, while ChatGPT’s accuracy depends heavily on the quality of the prompt and the user’s ability to recognize when a response should be verified elsewhere.

OpenAI is aiming to make it even more appealing to turn to ChatGPT first for search by trying to get people to use a browser with ChatGPT built in.

Smart speakers and YouTube

The impact of ChatGPT has reverberated beyond search engines. Ownership of voice assistants, such as Alexa speakers and Google Home devices, remains high but has slipped slightly. One 2025 summary of voice-search statistics estimates that about 34% of people ages 12 and up own a smart speaker, down from 35% in 2023. This is not a dramatic decline, but the lack of growth may indicate a shift of more complex queries to ChatGPT or similar tools. When people want a detailed explanation, a step-by-step plan or help drafting something, a voice assistant that answers in a short sentence suddenly feels limited.

By contrast, YouTube remains a giant. As of 2024, it had approximately 2.74 billion users, with that number increasing steadily since 2010. Among U.S. teens, about 90% say they use YouTube, making it the most widely used platform in that age group. But what kind of videos people are looking for is changing.

People now tend to start with ChatGPT and then move to YouTube if they need the additional information a how-to video conveys. For many everyday tasks, such as “explain my health benefits” or “help me write a complaint email,” people ask ChatGPT for a summary, script or checklist. They head to YouTube only if they need to see a physical process.

You can see a similar pattern in more specialized spaces. Software engineers, for instance, have long relied on sites such as Stack Overflow for tips and pieces of software code. But question volume there began dropping sharply after ChatGPT’s release, and one analysis suggests overall traffic fell by about 50% between 2022 and 2024. When a chatbot can generate a code snippet and an explanation on demand, fewer people bother typing a question into a public forum.

So where does that leave us?

Three years in, ChatGPT hasn’t replaced the rest of the tech stack; it’s reordered it. The default search has shifted. Search engines are still for deep dives and complex comparisons. YouTube is still for seeing real people do real things. Smart speakers are still for hands-free convenience.

But when people need to figure something out, many now start with a chat conversation, not a search box. That’s the real ChatGPT effect: It didn’t just add another app to our phones – it quietly changed how we look things up in the first place.

The Conversation

Deborah Lee does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

ref. The ChatGPT effect: In 3 years the AI chatbot has changed the way people look things up – https://theconversation.com/the-chatgpt-effect-in-3-years-the-ai-chatbot-has-changed-the-way-people-look-things-up-270143

When darkness shines: How dark stars could illuminate the early universe

Source: The Conversation – USA – By Alexey A. Petrov, Professor of physics and astronomy, University of South Carolina

NASA’s James Webb Space Telescope has spotted some potential dark star candidates. NASA, ESA, CSA, and STScI

Scientists working with the James Webb Space Telescope discovered three unusual astronomical objects in early 2025, which may be examples of dark stars. The concept of dark stars has existed for some time and could alter scientists’ understanding of how ordinary stars form. However, their name is somewhat misleading.

“Dark stars” is one of those unfortunate names that, on the surface, does not accurately describe the objects it represents. Dark stars are not exactly stars, and they are certainly not dark.

Still, the name captures the essence of this phenomenon. The “dark” in the name refers not to how bright these objects are, but to the process that makes them shine — driven by a mysterious substance called dark matter. The sheer size of these objects makes it difficult to classify them as stars.

As a physicist, I’ve been fascinated by dark matter, and I’ve been trying to find a way to see its traces using particle accelerators. I’m curious whether dark stars could provide an alternative method to find dark matter.

What makes dark matter dark?

Dark matter, which makes up approximately 27% of the universe but cannot be directly observed, is a key idea behind the phenomenon of dark stars. Astrophysicists have studied this mysterious substance for nearly a century, yet we haven’t seen any direct evidence of it besides its gravitational effects. So, what makes dark matter dark?

A pie chart showing the composition of the universe. The largest proportion is 'dark energy,' at 68%, while dark matter makes up 27% and normal matter 5%. The rest is neutrinos, free hydrogen and helium and heavy elements.
Despite physicists not knowing much about it, dark matter makes up around 27% of the universe.
Visual Capitalist/Science Photo Library via Getty Images

Humans primarily observe the universe by detecting electromagnetic waves emitted by or reflected off various objects. For instance, the Moon is visible to the naked eye because it reflects sunlight. Atoms on the Moon’s surface absorb photons – the particles of light – sent from the Sun, causing electrons within atoms to move and send some of that light toward us.

More advanced telescopes detect electromagnetic waves beyond the visible spectrum, such as ultraviolet, infrared or radio waves. They use the same principle: Electrically charged components of atoms react to these electromagnetic waves. But how can they detect a substance – dark matter – that not only has no electric charge but also has no electrically charged components?

Although scientists don’t know the exact nature of dark matter, many models suggest that it is made up of electrically neutral particles – those without an electric charge. This trait makes it impossible to observe dark matter in the same way that we observe ordinary matter.

Dark matter is thought to be made of particles that are their own antiparticles. Antiparticles are the “mirror” versions of particles. They have the same mass but opposite electric charge and other properties. When a particle encounters its antiparticle, the two annihilate each other in a burst of energy.

If dark matter particles are their own antiparticles, they would annihilate upon colliding with each other, potentially releasing large amounts of energy. Scientists predict that this process plays a key role in the formation of dark stars, as long as the density of dark matter particles inside these stars is sufficiently high. The dark matter density determines how often dark matter particles encounter, and annihilate, each other. If the dark matter density inside dark stars is high, the particles would annihilate frequently.
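This density dependence can be stated compactly. In standard dark matter models (a general result, not specific to the candidates discussed here), the annihilation rate per unit volume grows with the square of the particle number density $n$, scaled by the thermally averaged cross-section:

```latex
\Gamma_{\text{ann}} \propto n^{2} \, \langle \sigma v \rangle
```

Doubling the density quadruples the annihilation rate, which is why the dense cores of dark stars could shine from a process that is negligible in the diffuse galactic halo.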

What makes a dark star shine?

The concept of dark stars stems from a fundamental yet unresolved question in astrophysics: How do stars form? In the widely accepted view, clouds of primordial hydrogen and helium — the chemical elements formed in the first minutes after the Big Bang, approximately 13.8 billion years ago — collapsed under gravity. They heated up and initiated nuclear fusion, which formed heavier elements from the hydrogen and helium. This process led to the formation of the first generation of stars.

Two bright clouds of gas condensing around a small central region
Stars form when clouds of dust collapse inward and condense around a small, bright, dense core.
NASA, ESA, CSA, and STScI, J. DePasquale (STScI), CC BY-ND

In the standard view of star formation, dark matter is seen as a passive element that merely exerts a gravitational pull on everything around it, including primordial hydrogen and helium. But what if dark matter had a more active role in the process? That’s exactly the question a group of astrophysicists raised in 2008.

In the dense environment of the early universe, dark matter particles would collide with, and annihilate, each other, releasing energy in the process. This energy could heat the hydrogen and helium gas, preventing it from further collapse and delaying, or even preventing, the typical ignition of nuclear fusion.

The outcome would be a starlike object — but one powered by dark matter heating instead of fusion. Unlike regular stars, these dark stars might live much longer because they would continue to shine as long as they attracted dark matter. This trait would make them distinct from ordinary stars, as their cooler temperature would result in lower emissions of various particles.

Can we observe dark stars?

Several unique characteristics help astronomers identify potential dark stars. First, these objects must be very old. As the universe expands, the frequency of light coming from objects far away from Earth decreases, shifting toward the infrared end of the electromagnetic spectrum, meaning it gets “redshifted.” The oldest objects appear the most redshifted to observers.
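This stretching of light is captured by the standard redshift relation, where $z$ is the redshift and $\lambda$ the wavelength of light; this is textbook cosmology rather than anything specific to dark stars:

```latex
1 + z = \frac{\lambda_{\text{obs}}}{\lambda_{\text{emit}}}
```

An object at $z = 10$, for example, has its emitted light stretched in wavelength by a factor of 11, shifting ultraviolet and visible emission into the infrared, which is why the infrared-sensitive James Webb telescope is well suited to finding the oldest objects.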

Since dark stars form from primordial hydrogen and helium, they are expected to contain little to no heavier elements, such as oxygen. They would be very large and cooler on the surface, yet highly luminous because their size — and the surface area emitting light — compensates for their lower surface brightness.

They are also expected to be enormous, with radii of tens of astronomical units – a unit of cosmic distance equal to the average distance between Earth and the Sun. Some supermassive dark stars are theorized to reach masses of roughly 10,000 to 10 million times that of the Sun, depending on how much dark matter and hydrogen or helium gas they can accumulate during their growth.

So, have astronomers observed dark stars? Possibly. Data from the James Webb Space Telescope has revealed some very high-redshift objects that seem brighter — and possibly more massive — than what scientists expect of typical early galaxies or stars. These results have led some researchers to propose that dark stars might explain these objects.

Artist's impression of the James Webb telescope, which has a hexagonal mirror made up of smaller hexagons, and sits on a rhombus-shaped spacecraft.
The James Webb Space Telescope, shown in this illustration, detects light coming from objects in the universe.
Northrop Grumman/NASA

In particular, a recent study analyzing James Webb Space Telescope data identified three candidates consistent with supermassive dark star models. Researchers looked at how much helium these objects contained to identify them. Since it is dark matter annihilation that heats up those dark stars, rather than nuclear fusion turning helium into heavier elements, dark stars should have more helium.

The researchers highlight that one of these objects indeed exhibited a potential “smoking gun” helium absorption signature: a far higher helium abundance than one would expect in typical early galaxies.

Dark stars may explain early black holes

What happens when a dark star runs out of dark matter? It depends on the size of the dark star. For the lightest dark stars, the depletion of dark matter would mean gravity compresses the remaining hydrogen, igniting nuclear fusion. In this case, the dark star would eventually become an ordinary star, so some stars may have begun as dark stars.

Supermassive dark stars are even more intriguing. At the end of their lifespan, a dead supermassive dark star would collapse directly into a black hole. This black hole could start the formation of a supermassive black hole, like the kind astronomers observe at the centers of galaxies, including our own Milky Way.

Dark stars might also explain how supermassive black holes formed in the early universe. They could shed light on some unique black holes observed by astronomers. For example, a black hole in the galaxy UHZ-1 has a mass approaching 10 million solar masses, and is very old – it formed just 500 million years after the Big Bang. Traditional models struggle to explain how such massive black holes could form so quickly.

The idea of dark stars is not universally accepted. These dark star candidates might still turn out just to be unusual galaxies. Some astrophysicists argue that matter accretion — a process in which massive objects pull in surrounding matter — alone can produce massive stars, and that studies using observations from the James Webb telescope cannot distinguish between massive ordinary stars and less dense, cooler dark stars.

Researchers emphasize that they will need more observational data and theoretical advancements to solve this mystery.

The Conversation

Alexey A Petrov receives funding from the US Department of Energy.

ref. When darkness shines: How dark stars could illuminate the early universe – https://theconversation.com/when-darkness-shines-how-dark-stars-could-illuminate-the-early-universe-266971

Fern stems reveal secrets of evolution – how constraints in development can lead to new forms

Source: The Conversation – USA – By Jacob S. Suissa, Assistant Professor of Plant Evolutionary Biology, University of Tennessee

The lacy frond of the intermediate wood fern (_Dryopteris intermedia_). Jacob S. Suissa, CC BY-ND

There are few forms of the botanical world as readily identifiable as fern leaves. These often large, lacy fronds lend themselves nicely to watercolor paintings and triceps tattoos alike. Thoreau said it best: “Nature made ferns for pure leaves, to show what she could do in that line.”

But ferns are not just for art and gardens. While fern leaves are the most iconic part of their body, these plants are whole organisms, with stems and roots that are often underground or creeping along the soil surface. With over 400 million years of evolutionary history, ferns can teach us a lot about how the diversity of planet Earth came to be. Specifically, examining their inner anatomy can reveal some of the intricacies of evolution.

Sums of parts or an integrated whole?

When one structure cannot change without altering another, researchers consider the two constrained by each other. In biology, this linkage between traits is called a developmental constraint. It explains the limits on the forms organisms can take – for instance, why there aren’t square trees or mammals with wheels.

However, constraint does not always limit form. In my recently published research, I examined the fern vascular system to highlight how changes in one part of the organism can lead to changes in another, which can generate new forms.

Close-up of a small, flat green circle with a brown outline, held between two fingers
Cross section of a stem of Adiantum in Costa Rica. If you zoom in, you can make out the radial arrangement of bundles in the stem – the darker dots in the circle at its center.
Jacob S. Suissa, CC BY-ND

Before Charles Darwin proposed his theory of evolution by natural selection, many scientists believed in creationism – the idea that all living things were created by a god. Among these believers was the 19th-century naturalist Georges Cuvier, who is lauded as the father of paleontology. His argument against evolution was based not on faith alone but on a theory he called the correlation of parts.

Cuvier proposed that because each part of an organism is developmentally linked to every other part, changes in one part would result in changes to another. With this theory, he argued that a single tooth or bone could be used to reconstruct an entire organism.

He used this theory to make a larger claim: If organisms are truly integrated wholes and not merely sums of individual parts, how could evolution fashion specific traits? Since changes in one part of an organism would necessitate changes in others, he argued, small modifications would require restructuring every other part. If the individual parts of an organism are all fully integrated, evolution of particular traits could not proceed.

However, not all of the parts of an organism are tethered together so tightly. Indeed, some parts can evolve at different rates and under different selection pressures. This idea was solidified as the concept of quasi-independence in the 1970s by evolutionary biologist Richard Lewontin. The idea of organisms as collections of individually evolving parts remains today, influencing how researchers and students think about evolution.

Fern vasculature and the process of evolution

Ferns are one of four lineages of land plants that have vascular tissues – specialized sets of tubes that move water and nutrients through their bodies. These tissues are composed of vascular bundles – clusters of cells that conduct water through the stem.

How vascular bundles are arranged in fern stems varies substantially. Some ferns have three to eight or more vascular bundles scattered throughout their stems. In some, the bundles are arranged symmetrically, while in others, such as the tobacco fern – Mickelia nicotianifolia – they form a whimsical, smiley-face pattern.

Cross-section of a roughly oblong stem with a smiley face shape towards one end
Cross section of the rhizome of Mickelia nicotianifolia, showing the smiley-face patterning of the vascular tissues. Each gap in the central system is associated with the production of a leaf.
Jacob S. Suissa, CC BY-ND

For much of the 20th century, scientists studying the pattern and arrangement of vascular bundles in fern stems thought these broad patterns may be adaptive to environmental conditions. I set out in my own research to test whether certain types of arrangements were more resistant to drought. But contrary to my initial hypotheses – and my desire for a relationship between form and function – the arrangement of vascular bundles in the stem did not seem to correlate with drought tolerance.

This may sound counterintuitive, but it turns out the ability of a fern to move water through its body has more to do with the size and shape of the water-conducting cells than with how they’re arranged as a whole in the stem. This finding is analogous to looking at road maps to understand traffic patterns. The patterning of roads on a map (how cells are arranged) may be less important in determining traffic patterns than the number and size of lanes (cell size and number).

This observation hinted at something deeper about the evolution of the vascular systems of ferns. It sent me on a journey to uncover exactly what gave rise to the varying vascular patterns of ferns.

Simple observations and insights into evolution

I wondered how this variation in the number and arrangement of vascular bundles relates to leaf placement around the stem. So I quantified this variation in vascular patterning for 27 ferns representing roughly 30% of all fern species.

I found a striking correlation between the number of rows of leaves and the number of vascular bundles within the stem. This relationship was almost 1-to-1 in some cases. For instance, if there were three rows of leaves along the stem, there were three vascular bundles in the stem.

What’s more, how leaves were arranged around the stem determined the spatial arrangement of bundles. If the leaves were arranged spirally (on all sides of the stem), the vascular bundles were arranged in a radial pattern. If the leaves were shifted to the dorsal side of the stem, the smiley-face pattern emerged.

Importantly, based on our understanding of plant development, there was a directionality here. Specifically, the placement of leaves determines the arrangement of bundles, not the other way around.

Microscopy images of cross-section of fern stems in different shapes, one a cluster of spots, another concentric circles and another three separate segments
Vascular architectures of three different ferns. From left: Lygodium microphyllum, Sitobolium punctilobulum and Amauropelta noveboracensis.
Jacob S. Suissa, CC BY-ND

This may not sound all that surprising – it seems logical that vasculature should link up between leaves and stems. But it runs counter to how scientists have viewed the fern vascular system for over 100 years. Many studies on fern vascular patterning have tended to focus on individual parts of the plant, removing vascular architecture from the context of the plant as a whole and viewing it as an independently evolving pattern.

However, this new work suggests that the arrangement of vascular bundles in fern stems is not able to change in isolation. Rather, like Cuvier’s idealized organisms, vascular patterning is linked to and explicitly determined by the number and placement of leaves along the stem. This is not to say that vascular patterns could not be adaptive to environmental conditions, but it means that evolutionary change in the number and arrangement of vascular bundles is likely driven by changes to leaf number and placement.

From parochial to existential

While this study on ferns and their vascular system may seem parochial, it speaks to the broader question of how variation – the fuel of evolution – arises, and how evolution can proceed.

While not all parts of an organism are so tightly linked, considering the individual as a whole – or at least sets of parts as a unit – can help researchers better understand how, and if, observable patterns can evolve in isolation. This insight takes scientists one step closer to understanding the minutiae of how evolution works to generate the immense biodiversity on Earth.

Understanding these processes is also important for industry. In agricultural settings, plant and animal breeders attempt to increase one aspect of an organism without changing another. By taking a holistic approach and understanding which parts of an organism are developmentally or genetically linked and which are more quasi-independent, breeders may be able to more effectively create organisms with desired traits.

Slices of fern stem on a table
Researchers can learn much about evolution from the stems of Mickelia nicotianifolia.
Jacob S. Suissa, CC BY-ND

Constraint is often viewed as restricting, but it may not always be so. The Polish nuclear physicist Stanisław Ulam noted that rhymes “compel one to find the unobvious because of the necessity of finding a word which rhymes,” paradoxically acting as an “automatic mechanism of originality.” Whether from the literary rules of a haiku or the development of ferns, constraint can be a generator of form.

The Conversation

Jacob S. Suissa receives funding from The National Science Foundation. He is affiliated with Arnold Arboretum of Harvard University, and Let’s Botanize Inc.

ref. Fern stems reveal secrets of evolution – how constraints in development can lead to new forms – https://theconversation.com/fern-stems-reveal-secrets-of-evolution-how-constraints-in-development-can-lead-to-new-forms-267401

Sea level doesn’t rise at the same rate everywhere – we mapped where Antarctica’s ice melt would have the biggest impact

Source: The Conversation – USA (2) – By Shaina Sadai, Associate in Earth Science, Five College Consortium

Sea-level rise changes coastlines, putting homes at risk, as Summer Haven, Fla., has seen. Aerial Views/E+/Getty Images

When polar ice sheets melt, the effects ripple across the world. The melting ice raises average global sea level, alters ocean currents and affects temperatures in places far from the poles.

But melting ice sheets don’t affect sea level and temperatures in the same way everywhere.

In a new study, our team of scientists investigated how ice melting in Antarctica affects global climate and sea level. We combined computer models of the Antarctic ice sheet, solid Earth and global climate, including atmospheric and oceanic processes, to explore the complex interactions that melting ice has with other parts of the Earth.

Understanding what happens to Antarctica’s ice matters, because it holds enough frozen water to raise average sea level by about 190 feet (58 meters). As the ice melts, it becomes an existential problem for people and ecosystems in island and coastal communities.

A woman stands outside an old home showing where sea level rise has eroded the shoreline nearly to the home's foundation.
Sea level is inching up on homes on Tierra Bomba Island, Colombia, where a cemetery already washed away.
Luis Acosta/AFP via Getty Images

Changes in Antarctica

The extent to which the Antarctic ice sheet melts will depend on how much the Earth warms. And that depends on future greenhouse gas emissions from sources including vehicles, power plants and industries.

Studies suggest that much of the Antarctic ice sheet could survive if countries reduce their greenhouse gas emissions in line with the 2015 Paris Agreement goal to keep global warming to 1.5 degrees Celsius (2.7 Fahrenheit) compared to before the industrial era. However, if emissions continue rising and the atmosphere and oceans warm much more, that could cause substantial melting and much higher sea levels.

Our research shows that high emissions pose risks not just to the stability of the West Antarctic ice sheet, which is already contributing to sea-level rise, but also for the much larger and more stable East Antarctic ice sheet.

It also shows how different regions of the world will experience different levels of sea-level rise as Antarctica melts.

Understanding sea-level change

If sea levels rose like the water in a bathtub, then as ice sheets melt, the ocean would rise by the same amount everywhere. But that isn’t what happens.

Instead, many places experience higher regional sea-level rise than the global average, while places close to the ice sheet can even see sea levels drop. The main reason has to do with gravity.

Ice sheets are massive, and that mass creates a strong gravitational pull that attracts the surrounding ocean water toward them, similar to how the gravitational pull between Earth and the Moon affects the tides.

As the ice sheet shrinks, its gravitational pull on the ocean declines, leading to sea levels falling in regions close to the ice sheet coast and rising farther away. But sea-level changes are not only a function of distance from the melting ice sheet. This ice loss also changes how the planet rotates. The rotation axis is pulled toward that missing ice mass, which in turn redistributes water around the globe.

2 factors that can slow melting

As the massive ice sheet melts, the solid Earth beneath it rebounds.

Underneath the bedrock of Antarctica is Earth’s mantle, which flows slowly like maple syrup. The more the ice sheet melts, the less it presses down on the solid Earth. With less weight on it, the bedrock can rebound. This can lift parts of the ice sheet out of contact with warming ocean waters, slowing the rate of melting. This happens more quickly in places where the mantle flows faster, such as underneath the West Antarctic ice sheet.

This rebound effect could help preserve the ice sheet – if global greenhouse gas emissions are kept low.

NASA explains how land rebounds when ice sheets melt. NASA via Virtual Palaeosciences.

Another factor that can slow melting might seem counterintuitive.

While Antarctic meltwater drives rising sea levels, models show it also delays greenhouse gas-induced warming. That’s because icy meltwater from Antarctica reduces ocean surface temperatures in the Southern Hemisphere and tropical Pacific, trapping heat in the deep ocean and slowing the rise of global average air temperature.

But as melting occurs, even if it slows, sea levels rise.

Mapping our sea-level results

We combined computer models that simulate these and other behaviors of the Antarctic ice sheet, solid Earth and climate to understand what could happen to sea level around the world as global temperatures rise and ice melts.

For example, in a moderate scenario in which the world reduces greenhouse gas emissions, though not enough to keep global warming under 2 degrees Celsius (3.6 Fahrenheit) in 2100, we found the average sea-level rise from Antarctic ice melt would be about 4 inches (0.1 meters) by 2100. By 2200, it would be more than 3.3 feet (1 meter).

Keep in mind that this is only sea-level rise caused by Antarctic melt. The Greenland ice sheet and thermal expansion of seawater as the oceans warm will also raise sea levels. Current estimates suggest that total average sea-level rise – including Greenland and thermal expansion – would be 1 to 2 feet (0.32 to 0.63 meters) by 2100 under the same scenario.

Two maps of the earth showing differing sea level rise
Models show Antarctica’s contribution to sea-level rise in 2200 under medium (top) and high (bottom) emissions. The global mean sea-level rise is in purple. Regionally higher than average sea-level rise appears in dark blue.
Sadai et al., 2025

We also show how sea-level rise from Antarctica varies around the world.

In that moderate emissions scenario, we found the highest sea-level rise from Antarctic ice melt alone, up to 5 feet (1.5 meters) by 2200, occurs in the Indian, Pacific and western Atlantic ocean basins – places far from Antarctica.

These regions are home to many people in low-lying coastal areas, including residents of island nations in the Caribbean, such as Jamaica, and the central Pacific, such as the Marshall Islands, that are already experiencing detrimental impacts from rising seas.

Under a high emissions scenario, we found the average sea-level rise caused by Antarctic melting would be much higher: about 1 foot (0.3 meters) in 2100 and close to 10 feet (more than 3 meters) in 2200.

Under this scenario, a broader swath of the Pacific Ocean basin north of the equator, including Micronesia and Palau, and across the middle of the Atlantic Ocean basin would see the highest sea-level rise, up to 14 feet (4.3 meters) by 2200, just from Antarctica.
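The feet-and-meters figures quoted throughout this article are straightforward unit conversions, and a reader can verify them with a few lines of Python. This is a reader-side sanity check, not part of the study; the scenario labels below are shorthand, not the researchers' terminology.

```python
# Verify the article's meter-to-foot conversions for Antarctic melt projections.
M_TO_FT = 3.28084  # feet per meter

projections_m = {
    "moderate scenario, 2100": 0.1,        # reported as about 4 inches
    "moderate scenario, 2200": 1.0,        # reported as more than 3.3 feet
    "regional peak, 2200": 1.5,            # reported as up to 5 feet
    "high scenario, 2100": 0.3,            # reported as about 1 foot
    "high scenario, 2200": 3.0,            # reported as close to 10 feet
    "high regional peak, 2200": 4.3,       # reported as up to 14 feet
}

for label, meters in projections_m.items():
    feet = meters * M_TO_FT
    print(f"{label}: {meters} m = {feet:.1f} ft")
```

Each converted value rounds to the figure given in the text (0.1 m is about 3.9 inches, 4.3 m is about 14.1 feet), so the article's paired units are internally consistent.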

Although these sea-level rise numbers seem alarming, the world’s current emissions and recent projections suggest this very high emissions scenario is unlikely. This exercise, however, highlights the serious consequences of high emissions and underscores the importance of reducing emissions.

The takeaway

These impacts have implications for climate justice, particularly for island nations that have done little to contribute to climate change yet already experience the devastating impacts of sea-level rise.

Many island nations are already losing land to sea-level rise, and they have been leading global efforts to minimize temperature rise. Protecting these countries and other coastal areas will require reducing greenhouse gas emissions faster than nations are committing to do today.

The Conversation

Shaina Sadai has received funding from the National Science Foundation and the Hitz Family Foundation.

Ambarish Karmalkar receives funding from National Science Foundation.

ref. Sea level doesn’t rise at the same rate everywhere – we mapped where Antarctica’s ice melt would have the biggest impact – https://theconversation.com/sea-level-doesnt-rise-at-the-same-rate-everywhere-we-mapped-where-antarcticas-ice-melt-would-have-the-biggest-impact-269788

Treating love for work like a virtue can backfire on employees and teams

Source: The Conversation – USA (3) – By Mijeong Kwon, Assistant Professor of Management, Rice University

Loving your work is one thing; insisting that colleagues love it is another. Natalie McComas/Moment via Getty Images

It’s popular advice for new graduates: “Find a job you love, and you’ll never work a day in your life.” Love for one’s work, Americans are often told, is the surest route to success.

As a management professor, I can attest that there is solid research supporting this advice. In psychology, this idea is described as “intrinsic motivation” – working because you find the work itself satisfying. People who are intrinsically motivated tend to experience genuine enjoyment and curiosity in what they do, relishing opportunities to learn or master challenges for their own sake. Research has long shown that intrinsic motivation enhances performance, persistence and creativity at work.

Yet my and my co-authors’ recent research suggests that this seemingly innocent idea of loving your work can take on a moral edge. Increasingly, people seem to judge both themselves and others according to whether they are intrinsically motivated. What used to be a personal preference has, for many, become a moral imperative: You should love your work, and it is somehow wrong if you don’t.

Moralizing motivation

When a neutral preference becomes charged with moral meaning, social scientists call it “moralization.” For example, someone might initially choose vegetarianism for their own health reasons but come to view it as the right thing to do – and judge others accordingly.

The moralization of intrinsic motivation follows a similar logic. People work for many reasons: passion, duty, family, security or social status. But once intrinsic motivation becomes moralized, loving what you do is seen as not only enjoyable but virtuous. Working for money, prestige or family obligation starts to look less admirable, even suspect.

In a 2023 study, fellow business researchers Julia Lee Cunningham, Jon M. Jachimowicz and I surveyed over 1,200 employees, asking whether they thought working for personal enjoyment was virtuous.

People who did, we found, tended to believe everyone else should be intrinsically motivated, too. They were also more likely to see other motives, such as working for pay or recognition, as morally inferior. They tended to agree, for example, that “you are morally obligated to love the work itself more than you love the rewards and perks.”

These employees had internalized the idea that you work either for love or money – even though most people, in reality, do both.

Costs for you

At first glance, treating love for work as a virtue seems to offer nothing but benefits. If a job’s mission or day-to-day tasks are personally meaningful, you may persist through challenges, because quitting could feel like betraying an ideal.

But this virtue can also backfire. When intrinsic motivation becomes a moral duty rather than a joy, you may feel guilty for not constantly loving your work. Emotions that are normal in any job, such as boredom, fatigue or disengagement, can prompt feelings of moral failure and self-blame. Over time, this pressure can contribute to burnout if you stay in unsustainable roles out of guilt.

By idealizing your “dream job” when you’re applying, you may overlook security, stability and other important life needs – risking financial strain and underusing your talents. This unrealistic standard could also lead you to leave a job too soon when reality disappoints or initial passion fades.

Costs for a company

Moralizing intrinsic motivation doesn’t stop at the self; it also reshapes how we judge others. People who moralize intrinsic motivation often expect it from everyone else.

In a study of nearly 800 employees across 185 teams, we found that employees who moralized intrinsic motivation were more generous toward teammates they perceived as loving their work. However, they were less willing to help out colleagues they considered less passionate. In other words, moralizing intrinsic motivation can make employees “discerning saints” – good to some, but selectively so.

A man and woman seated at an office table high-five each other in a room whose glass walls are covered with print-outs and sticky notes.
Seeing intrinsic motivation as a virtue affects how people view colleagues, too.
Moyo Studio/E+ via Getty Images

This dynamic can create problems for work teams. Leaders who strongly moralize intrinsic motivation may adopt leadership styles aimed at igniting passion in their teams – emphasizing workers’ autonomy, for example.

While inspiring on the surface, this approach can alienate employees who work for more pragmatic reasons. Over time, I would argue, this can breed tension and conflict, as some team members are celebrated as “true believers” and others are quietly marginalized. Expressing love for one’s work becomes a kind of commodity – one more way to get ahead.

Embracing many motives

People all around the world experience intrinsic motivation. But if that feeling is universal, its moralization is not.

My current research with management researcher Laura Sonday suggests that moralizing intrinsic motivation is more pronounced in some cultures than in others. Where work is viewed as a means of service, duty or balance, rather than a source of personal fulfillment, loving one’s job may be appreciated but not treated as a moral expectation.

I would urge office leaders to recognize the double-edged nature of moralizing intrinsic motivation. Expressing genuine love for work can inspire others, but enforcing it as a moral norm can silence or shame those with different values or priorities. Leaders should be careful not to equate enthusiasm with virtue, or assume that passion always signals integrity or competence.

For employees, it may be worth reflecting on how we talk about our own motivation. Loving one’s work is wonderful, but it’s also perfectly human to value stability, recognition or family needs. In a culture where “do what you love” has become a moral commandment, remembering that it’s not the be-all, end-all reason to work may be the most moral stance of all.

The Conversation

Mijeong Kwon does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

ref. Treating love for work like a virtue can backfire on employees and teams – https://theconversation.com/treating-love-for-work-like-a-virtue-can-backfire-on-employees-and-teams-266983

A quarter of early child care educators in Colorado reported mistreatment from co-workers

Source: The Conversation – USA – By Virginia McCarthy, Assistant Professor, Department of Surgery, University of Colorado Anschutz Medical Campus

Preschool teachers lead a class in Adams County, Colo. Kathryn Scott/The Denver Post via Getty Images

Early childhood educators and staff nurture and teach children under the age of 5. At its best, this type of early care sets kids up for long-term success.

But educators who are experiencing poor mental health are less able to cultivate positive relationships with the children in their care, which negatively affects the children’s development.

“We work in a field that has a high demand for kids to be safe and enjoy learning,” one educator told us. “We have … little people that depend on us, parents depend on us, and we need to make sure that we are there for the kids when they need us.”

Our research team – led by a clinical associate professor and a research assistant professor in public health – set out to learn how child care workers were coping with all of this responsibility.

What we learned was concerning and needs to be understood by parents and policymakers alike.

Studying 42 Head Start centers

Our peer-reviewed study examined the mental health of 332 early child care educators and other staff at 42 Head Start centers in the Denver metropolitan area and southeast Colorado.

We found that roughly 25% of early child care staff in Colorado self-reported discrimination and condescending or demeaning treatment from a colleague in the past year, with 15% experiencing more than one kind.

We measured discrimination tied to age, race, ethnicity and gender. We also measured types of demeaning treatment, which included bullying, harassment and condescending behavior. And we looked at physical violence.

Higher levels of workplace mistreatment were related to greater numbers of poor mental health days. The child care staff we surveyed reported an average of seven poor mental health days in the month prior to completing the survey.

Mistreatment in early childhood education

The early child care workforce also reports higher rates of depression than the national workforce.

High stress of educators and staff even pushes some workers out of the profession.

Working conditions matter, too, with early child care workers reporting substantial physical and psychological workplace challenges, such as lifting and carrying children, as well as managing a wide range of ages and capabilities among children in the classroom.

Our survey also revealed that 1 in 4 early child care staff experienced condescending or demeaning treatment by colleagues or superiors in the past 12 months. This was the most common type of workplace mistreatment.

In early child care, teamwork and collegiality are integral and are linked to educator well-being. Mistreatment between colleagues can strain relationships, contribute to burnout and reduce the likelihood of educators stepping into leadership roles.

Books are in focus in the foreground with titles like 'The Best Mouse Cookie' and 'If you take a mouse to school.' In the blurred background is an adult woman standing and teaching to a bunch of young children sitting on the ground.
Books in a Frederick, Colo., preschool class.
Lewis Geyer/Digital First Media/Boulder Daily Camera via Getty Images

Our study found that 1 in 10 early child care staff reported discrimination at work based on race or ethnicity. Experiences of discrimination have an impact beyond mental health and also affect physical health, job attitudes and engagement in the workplace.

Younger workers are struggling

Discrimination was three times as likely to be reported by younger workers, ages 18-29, as by older workers. Discrimination between age groups affects trust and can reduce employee engagement.

Mistreatment of early child care workers can take several forms that happen at the same time. For example, age discrimination can occur in either direction. Younger staff may be viewed by older colleagues as less experienced, less committed or less capable. Older colleagues, in turn, may be perceived as less creative or less willing to adapt to new strategies and practices.

Yet overall, younger workers seemed to be struggling more. Workers under 35 reported an average of eight to nine poor mental health days in a 30-day window; older workers reported an average of 5.6.

Improving staff well-being

Our study indicates a need for both societal and organizational change to prevent mistreatment of early child care staff, which can improve worker well-being and lead to better care for young children.

At a societal level, it is important to acknowledge the integral role of the early child care workforce and compensate these workers at a level commensurate with their importance. In 2023, the average U.S. preschool teacher earned $37,120 a year, compared with $63,680 for elementary school teachers.

Adequate pay and appreciation can reduce turnover. Rates of turnover are four times higher among early child care educators than elementary school teachers.

“Turnover has a lot to do with pay, unfortunately, and we don’t get paid a whole lot of money,” one educator said. “And … I don’t think I’ve always felt valued in the position I’m in.”

At an organizational level, leaders can implement health-centered policies and offer managerial training on how to build supportive teams. Total Worker Health interventions may also help to guide needed policy changes with input from staff. These interventions are holistic programs that focus on both the safety and well-being of workers and include elements such as environmental and social supports. They are shown to improve worker well-being.

One initiative compared wellness intervention models across six Early Head Start and Head Start networks nationally to address staff well-being comprehensively. Direct outcomes of the programs included workplace and organizational culture improvements, as well as higher staff well-being.

We designed the WELL Program, which has successfully been implemented at five Colorado-based Head Start networks. The program includes training to promote better sleep and mindfulness.

“WELL help(ed) people keep going every day and deal with their stress in a healthy way so it didn’t come out in the classroom, or come out against kiddos that are tough,” one participant said.

Our study also suggests there may be generational differences in workplace communication and varied understandings of what it means to be mistreated. Additional research on these differences may help us to address causes of mistreatment and find solutions.

The Conversation

Charlotte Farewell receives funding from the Administration for Children and Families.

Jini Puma receives funding from the Administration for Children and Families.

Kyla Hagan-Haynes and Virginia McCarthy do not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and have disclosed no relevant affiliations beyond their academic appointment.

ref. A quarter of early child care educators in Colorado reported mistreatment from co-workers – https://theconversation.com/a-quarter-of-early-child-care-educators-in-colorado-reported-mistreatment-from-co-workers-264666

Automated systems decide which homeless Philadelphians get housing and who stays on the street – often in ways that feel arbitrary to those waiting

Source: The Conversation – USA – By Pelle G. Tracey, Assistant Professor of Information, University of Washington

Philadelphia has thousands of homeless residents living in shelters and on the streets. Jeff Fusco/The Conversation U.S., CC BY-SA

Seeing a person huddled under a makeshift roof of tarps or curled up on a warm grate can evoke powerful emotions and questions.

How did they get here? Why doesn’t someone help them? What can I do about this?

The answers to these questions are complex. However, a significant body of research suggests that there is a highly effective solution for many individuals who experience homelessness. It is called supportive housing.

Supportive housing programs combine a housing subsidy – financial assistance that helps make housing affordable even for those with very low incomes – with wraparound supportive services that help a person remain stably housed. Supportive services often include case management, occupational therapy and mental health and addiction treatment. These programs have helped thousands of Philadelphians end their experiences of homelessness.

As a researcher and former social worker, I have spent much of the past decade working in and studying homeless services in Philadelphia. For my dissertation research, I conducted hundreds of hours of ethnographic fieldwork at a soup kitchen and outreach center in the city between 2022 and 2024. I interviewed 75 homeless services workers, volunteers and people who were experiencing or had experienced homelessness. I also analyzed hundreds of pages of policy documents.

I have found that while the city has succeeded in centralizing services to support unhoused people, there remain major bureaucratic challenges exacerbated by insufficient funding and a shortage of supportive housing. These challenges impact both people seeking supportive housing and front-line workers trying to help them.

Khalil’s story

Consider the case of Khalil, a 48-year-old from West Philly who became homeless during the pandemic. (As for all the interviewees’ names used in this article, Khalil is a pseudonym I’m using to protect his privacy.) Khalil told me that he lost his job as an IT technician at Verizon, where he had worked for nine years. Sleeping outside and unable to afford life-sustaining kidney medication, he said, his physical and mental health spiraled.

A supportive housing program changed that, providing him with a stable and affordable place to live, while social workers helped him enroll in Medicaid and connect with a community health clinic. This support, Khalil explained, allowed him to “transition back into residential living and back into employment and back into being a working member of society.”

Despite the efficacy of supportive housing, cities do not receive sufficient federal funding to provide this service to all residents who are eligible. As a result, the need for these housing programs vastly outstrips the supply.

So how do officials in Philadelphia decide who will continue to sleep on the street or in a shelter, and who can move into a supportive housing facility with a warm bed and access to valuable wraparound services?

How the city determines who gets housing

Like other localities, Philadelphia uses a Coordinated Entry System. CES is a form of automated bureaucracy that combines several different algorithms and administrative processes with the goal of helping officials and social service workers allocate resources fairly and efficiently.

CES is intended to help workers identify which people experiencing homelessness are in greatest need of aid. These systems work by combining a central pool of resources like housing programs and a central list of people seeking help. Unhoused people are scored using a vulnerability assessment tool, and those who score highest are matched to an opening in a supportive housing program.

Because most of these systems are premised on targeting resources to the most vulnerable people, defining and gauging vulnerability becomes fraught with tension. After all, vulnerability is inherently subjective, and there is no universally agreed-upon best way to measure it.

These systems will soon come under even greater pressure as the U.S. Department of Housing and Urban Development prepares to slash funding for supportive housing programs. As many as 170,000 people nationwide who were previously homeless will be at risk of returning to the streets once these funding changes are implemented.

CES has benefits and drawbacks

Coordinated entry has made real progress on several long-standing challenges for Philadelphia’s homeless services system. Chief among these is centralization.

Most resources available for people experiencing homelessness are administered by nonprofit social services organizations. Prior to CES, a person seeking assistance would separately apply to various nonprofits and put their name on multiple waiting lists.

CES centralizes resources into a common pool, accessed through the vulnerability assessment process. As one administrator with the city’s Office of Homeless Services told me, this arrangement is “immensely more supportive and fair” than the scattered process that came before. For example, individual nonprofit providers are less able to earmark resources for clients they already work with.

However, there are downsides to Philadelphia’s approach to CES.

Vulnerability assessments, like those used in Philadelphia, have been criticized for failing to capture a full picture of a person’s plight. Assessments involve asking unhoused people a series of yes or no questions about their housing, health and financial history, and generate a vulnerability score based on the responses. A person who has a relatively mild experience with several different risk factors can end up with a much higher score than a person with an extremely serious experience with just a few.
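The additive scoring dynamic described above can be illustrated with a minimal sketch. The risk-factor names and the one-point-per-"yes" weighting here are hypothetical, not the actual instrument Philadelphia uses; the point is only to show how breadth of risk factors can outscore severity:

```python
# Hypothetical additive vulnerability score: one point per "yes" answer,
# with no weighting for how serious each experience was. Factor names
# are illustrative, not the real Philadelphia assessment.

def vulnerability_score(answers):
    """Sum one point for each 'yes' answer, regardless of severity."""
    return sum(1 for yes in answers.values() if yes)

# Person A: relatively mild experiences across five separate risk factors.
person_a = {"shelter_stay": True, "er_visit": True, "income_loss": True,
            "chronic_condition": True, "legal_issue": True}

# Person B: a single, extremely serious risk factor.
person_b = {"shelter_stay": False, "er_visit": False, "income_loss": False,
            "chronic_condition": True, "legal_issue": False}

print(vulnerability_score(person_a), vulnerability_score(person_b))  # 5 1
```

Under a purely additive rubric like this, Person A ranks well above Person B even if Person B's one condition is far more severe, which is the criticism the assessments have drawn.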

And similar to other automated assessments, such as those used in the criminal legal system, they have the potential to introduce racial bias into allocation outcomes.

Furthermore, the way CES works is, by design, hidden from the people it impacts most. The ambiguity is intended to prevent people from gaming the system, but it also creates confusion for those living in shelters and on the street. Some seeking aid may hide evidence of their vulnerability, such as addiction, out of fear it will disqualify them from housing. Others may amplify their vulnerability in an effort to improve their odds of receiving help.

The result is a perception among people experiencing homelessness that the system is unfair.

As Andre, a 60-year-old who had been sleeping in shelters off and on for nearly a decade, told me, a person who “goes in there and tells the absolute truth, they’re put on the back burner.”

A person sleeps on a bench in the Philadelphia International Airport. AP Photo/Matt Rourke

‘You’ve got to have a record of being homeless’

Leon, a 25-year-old from North Philadelphia, told me as we chatted over coffee that in order to be prioritized through CES, “You’ve got to have a record of being homeless.”

But generating such a paper trail can be difficult. A city database tracks shelter stays that can serve as proof of homelessness, but not all shelters participate. And for those sleeping outside, like Leon, proof depends on regular interactions with outreach workers, which requires being in the right place at the right time.

If an unhoused person cannot prove the length of their time on the street, or provide documentation of a mental health diagnosis, they may be deprioritized through CES, even if they are highly vulnerable.

For all its advantages, CES in Philly is not designed to take into account the input of unhoused people themselves. In the words of Richie, a 32-year-old who was seeking housing for himself and his pregnant wife, “There is no voice for homeless people … because homeless people don’t have a voice.”

Despite these challenges, the city has lowered barriers to participating in CES. For example, the city has launched a pilot program involving mobile assessors who can complete assessments in different locations beyond city shelters, such as at soup kitchens, to meet unhoused people where they are.

3 ways to improve the system

Here are three concrete ways the city could reduce the bureaucratic hurdles to supportive housing.

First, the city could expand pathways to supportive housing through a model called multiprinciple allocation. This approach combines different methods for determining who gets housing. Some subsidies could be allocated through new vulnerability assessments that are better vetted for bias, while others could be distributed based on length of homelessness or through a lottery system. This could bolster fairness by ensuring that people whose vulnerability is not picked up through the assessment tool could still have a shot at aid.
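A rough sketch of how a multiprinciple model could divide a fixed number of openings across the three methods just mentioned. The field names, the even three-way split, and the selection logic are all illustrative assumptions, not a description of any real allocation system:

```python
import random

def multiprinciple_allocation(applicants, slots, seed=0):
    """Divide available housing slots across three allocation principles:
    vulnerability score, length of homelessness, and a lottery.
    Each applicant is a dict with 'name', 'score' and 'months' keys.
    Purely illustrative of the multiprinciple model, not a real system."""
    rng = random.Random(seed)  # fixed seed keeps the lottery reproducible here
    shares = [slots // 3 + (1 if i < slots % 3 else 0) for i in range(3)]
    selected, pool = [], list(applicants)

    # 1. Highest vulnerability score first.
    pool.sort(key=lambda a: a["score"], reverse=True)
    selected += pool[:shares[0]]
    pool = pool[shares[0]:]

    # 2. Longest time homeless next.
    pool.sort(key=lambda a: a["months"], reverse=True)
    selected += pool[:shares[1]]
    pool = pool[shares[1]:]

    # 3. Lottery for the remainder, so everyone else retains a chance.
    rng.shuffle(pool)
    selected += pool[:shares[2]]
    return [a["name"] for a in selected]
```

With, say, four applicants and three slots, one slot goes to the highest scorer, one to the longest-homeless among the rest, and one by lottery, so a person invisible to the assessment tool can still be housed.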

Second, the city could provide opportunities for unhoused people and front-line workers to attest to vulnerability and experiences of homelessness in their own words – allowing someone to say, “I am struggling with housing for reasons that the assessment did not cover.”

And third, Philadelphia could reduce the degree of automation in the CES matching process. As things stand, people with high scores are mechanically matched to open programs, even if that program is a poor fit for the individual person. Giving staff and unhoused people more agency in making housing matches could produce better outcomes.

No amount of tinkering with CES can address the fundamental resource constraints that shape the fight against homelessness in Philadelphia. Simply put, Philadelphia lacks sufficient funding for housing the most vulnerable. But thoughtful changes to CES could make the response to homelessness more effective, compassionate and fair.

Read more of our stories about Philadelphia, or sign up for our Philadelphia newsletter on Substack.

The Conversation

Pelle G. Tracey has received funding from the Google Award for Inclusion Research.

ref. Automated systems decide which homeless Philadelphians get housing and who stays on the street – often in ways that feel arbitrary to those waiting – https://theconversation.com/automated-systems-decide-which-homeless-philadelphians-get-housing-and-who-stays-on-the-street-often-in-ways-that-feel-arbitrary-to-those-waiting-266563

Thousands of genomes reveal the wild wolf genes in most dogs’ DNA

Source: The Conversation – USA – By Audrey T. Lin, Research Associate in Anthropology, Smithsonian Institution

Modern wolves and dogs both descend from an ancient wolf population that lived alongside woolly mammoths and cave bears. Iza Lyson/500px Prime via Getty Images

Dogs were the first species that people domesticated, and they have been a constant part of human life for millennia. Domesticated species are the plants and animals that have evolved to live alongside humans, providing nearly all of our food and numerous other benefits. Dogs provide protection, hunting assistance, companionship, transportation and even wool for weaving blankets.

Dogs evolved from gray wolves, but scientists debate exactly where, when and how many times dogs were domesticated. Ancient DNA evidence suggests that domestication happened twice, in eastern and western Eurasia, before the groups eventually mixed. That blended population was the ancestor of all dogs living today.

Molecular clock analysis of the DNA from hundreds of modern and ancient dogs suggests they were domesticated between around 20,000 and 22,000 years ago, when large ice sheets covered much of Eurasia and North America. The first dog identified in the archaeological record is a 14,000-year-old pup found in Bonn-Oberkassel, Germany, but it can be difficult to tell based on bones whether an animal was an early domestic dog or a wild wolf.

Despite the shared history of dogs and wolves, scientists have long thought these two species rarely mated and gave birth to hybrid offspring. As an evolutionary biologist and a molecular anthropologist who study domestic plants and animals, we wanted to take a new look at whether dog-wolf hybridization has really been all that uncommon.

Little interbreeding in the wild

Dogs are not exactly descended from modern wolves. Rather, dogs and wolves living today both derive from a shared ancient wolf population that lived alongside woolly mammoths and cave bears.

In most domesticated species, there are clear, documented patterns of gene flow between the animals that live alongside humans and their wild counterparts. Where wild and domesticated animals’ habitats overlap, they can breed with each other to produce hybrid offspring. In these cases, the genes from wild animals are folded into the genetic variation of the domesticated population.

For example, pigs were domesticated in the Near East over 10,000 years ago. But when early farmers brought them to Europe, they hybridized so frequently with local wild boar that almost all of their Near Eastern DNA was replaced. Similar patterns can be seen in the endangered wild Anatolian and Cypriot mouflon that researchers have found to have high proportions of domestic sheep DNA in their genomes. It’s more common than not to find evidence of wild and domesticated animals interbreeding through time and sharing genetic material.

That wolves and dogs wouldn’t show that typical pattern is surprising, since they live in overlapping ranges and can freely interbreed.

Dog and wolf behavior are completely different, though, with wolves generally organized around a family pack structure and dogs reliant on humans. When hybridization does occur, it tends to be when human activities – such as habitat encroachment and hunting – disrupt pack dynamics, leading female wolves to strike out on their own and breed with male dogs. People intentionally bred a few “wolf dog” hybrid types in the 20th century, but these are considered the exception.

Luna Belle, a resident of the Wolf Sanctuary of Pennsylvania, which is home to both wolves and wolf dogs. Audrey Lin

Tiny but detectable wolf ancestry

To investigate how much gene flow there really has been between dogs and wolves after domestication, we analyzed 2,693 previously published genomes, making use of massive publicly available datasets.

These included 146 ancient dogs and wolves covering about 100,000 years. We also looked at 1,872 modern dogs, including golden retrievers, chihuahuas, malamutes, basenjis and other well-known breeds, plus more unusual breeds from around the world such as the Caucasian ovcharka and Swedish vallhund.

Finally, we included genomes from about 300 “village dogs.” These are not pets but free-living animals that depend on their close association with human environments.

We traced the evolutionary histories of all of these canids by looking at maternal lineages via their mitochondrial genomes and paternal lineages via their Y chromosomes. We used highly sensitive computational methods to dive into the dogs’ and wolves’ nuclear genomes – that is, the genetic material contained in their cells’ nuclei.

We found the presence of wild wolf genes in most dog genomes and the presence of dog genes in about half of wild wolf genomes. The sign of the wolf was small but it was there, in the form of tiny, almost imperceptible chunks of continuous wolf DNA in dogs’ chromosomes. About two-thirds of breed dogs in our sample had wolf genes from crossbreeding that took place roughly 800 generations ago, on average.
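A back-of-envelope calculation suggests why such old crossbreeding leaves only tiny chunks. Under a standard population-genetics approximation (my assumption here, not a method the study states), recombination breaks an introgressed segment down so that its expected length after g generations is roughly 100/g centimorgans (cM):

```python
# Expected length of an introgressed DNA tract after g generations,
# using the rough approximation of ~100/g centimorgans. Illustrative
# arithmetic only, not the study's ancestry-inference method.

def expected_tract_cm(generations):
    """Approximate expected tract length in centimorgans."""
    return 100.0 / generations

print(round(expected_tract_cm(800), 3))  # 0.125
```

For crossbreeding roughly 800 generations old, that predicts segments on the order of an eighth of a centimorgan, which fits the "tiny, almost imperceptible chunks" the analysis detected.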

While our results showed that larger, working dogs – such as sled dogs and large guardian dogs that protect livestock – generally have more wolf ancestry, the patterns aren’t universal. Some massive breeds such as the St. Bernard completely lack wolf DNA, but the tiny Chihuahua retains detectable wolf ancestry at 0.2% of its genome. Terriers and scent hounds typically fall at the low end of the spectrum for wolf genes.

A street – or free-ranging – dog in Tbilisi, Georgia. Alexkom000/Wikimedia Commons, CC BY

We were surprised that every single village dog we tested had pieces of wolf DNA in their genomes. Why would this be the case? Village dogs are free-living animals that make up about half the world’s dogs. Their lives can be tough, with short life expectancy and high infant mortality. Village dogs are also associated with pathogenic diseases, including rabies and canine distemper, making them a public health concern.

More often than predicted by chance, the stretches of wolf DNA we found in village dog genomes contained genes related to olfactory receptors. We imagine that olfactory abilities influenced by wolf genes may have helped these free-living dogs survive in harsh, volatile environments.

The intertwining of dogs and wolves

Because dogs evolved from wolves, all of dogs’ DNA is originally wolf DNA. So when we’re talking about the small pieces of wolf DNA in dog genomes, we’re not referring to that original wolf gene pool that’s been kicking around over the past 20,000 years, but rather evidence for dogs and wolves continuing to interbreed much later in time.

A wolf-dog hybrid with one of each kind of parent would carry 50% dog and 50% wolf DNA. If that hybrid then lived and mated with dogs, its offspring would be 25% wolf, and so on, until we see only small snippets of wolf DNA present.

The situation is similar to one in human genomes: Neanderthals and humans share a common ancestor around half a million years ago. However, Neanderthals and our species, Homo sapiens, also overlapped and interbred in Eurasia as recently as a few thousand generations ago, shortly before Neanderthals disappeared. Scientists can spot the small pieces of Neanderthal DNA in most living humans in the same way we can see wolf genes within most dogs.

Even tiny chihuahuas contain a little wolf within their doggy DNA. Westend61 via Getty Images

Our study updates the previously held belief that hybridization between dogs and wolves is rare; interactions between these two species do leave visible genetic traces. Hybridization with free-roaming dogs is considered a threat to conservation efforts for endangered wolves, including Iberian, Italian and Himalayan wolves. However, there also is evidence that dog-wolf mixing might confer genetic advantages to wolves as they adapt to environments that are increasingly shaped by humans.

Though dogs evolved as human companions, wolves have served as their genetic lifeline. When dogs encountered evolutionary challenges such as how to survive harsh climates, scavenge for food in the streets or guard livestock, it appears they’ve been able to tap into wolf ancestry as part of their evolutionary survival kit.

The Conversation

The authors do not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and have disclosed no relevant affiliations beyond their academic appointment.

ref. Thousands of genomes reveal the wild wolf genes in most dogs’ DNA – https://theconversation.com/thousands-of-genomes-reveal-the-wild-wolf-genes-in-most-dogs-dna-261897