Here’s why Canada’s parents and grandparents reunification program is problematic

Source: The Conversation – Canada – By Megan Gaucher, Associate Professor, Department of Law and Legal Studies, Carleton University

Immigration, Refugees and Citizenship Canada’s recent announcement that it’s accepting 10,000 sponsorship applications under the Parent and Grandparents Program (PGP) comes with an important caveat.

Due to a persistent backlog, invitations will only be sent to the 17,860 potential sponsors who submitted an interest-to-sponsor application back in 2020.

While good news for some, it means yet another cycle of uncertainty for thousands of families who have waited years for the PGP to finally reopen.

Migrant families seek permanent reunification for reasons beyond a desire to live in the same country as their parents and grandparents. Those reasons include a need for child-care support and a desire to care for their older family members as they age.

As international conventions dictate, families have a right to be together.

From permanent to temporary

Grandparents have been part of Canada’s formal “family class” pathway since 1976, but current policy favours spouses and dependent children. This makes reunification for extended family members difficult.

Grandparent admissions through the PGP have comprised around 25 per cent of total family class admissions for the past 10 years.

Unlike other family class categories, there is a predetermined cap on accepted PGP applications. The PGP has also undergone a series of program freezes to deal with an application backlog, the most recent announced in January 2025. The government’s latest update included no commitment to receive new interest-to-sponsor declarations.

As an alternative to the PGP, the government recommends the super visa, a multi-entry visa valid for up to 10 years. However, the super visa requires grandparents to reapply and satisfy medical admissibility requirements every five years.

The super visa also places responsibility for grandparents’ financial support and health care entirely on the sponsoring children, sometimes with devastating consequences.

Most importantly, the super visa does not guarantee permanent residence upon expiration. Permanent grandparent reunification remains a lottery draw, at the mercy of sponsorship intake caps.

Celebrating, denigrating migrant grandparents

Our preliminary research on grandparent sponsorship explores how elected officials consider the place of migrant grandparents in Canadian society. We’ve so far found they regard permanent family class migration as “good for business” as it attracts economic migrants. At the same time, elected officials believe that certain dependants monopolize health and social safety nets.

Grandparents, in particular, are treated by governments as human liabilities who must be admitted “responsibly.”

Admitting grandparents to Canada is tied to their perceived ability to support their sponsors by performing unpaid domestic labour. Our research has found elected officials celebrate sponsored grandparents for the substantial unpaid care work they provide like meal preparation, child care and cleaning.

In a recent survey on grandparent sponsorship, sponsors describe the unpaid work conducted by grandparents as essential to their participation in the Canadian workforce.

Grandparents can be key to helping younger family members become active in the Canadian workforce.
(Kateryna Hliznitsova/Unsplash)

Migrant grandparents are also positioned as providers of cultural care for their grandchildren. Our research draws attention to how elected officials often invoke memories of their own migrant grandparents passing along languages, practices and values that shaped their unique cultural identities.

Despite the benefits migrant grandparents provide, sponsored grandparents are consistently suspected of taking advantage of Canada’s health care and social welfare systems. This is why the super visa is promoted as an alternative pathway.

Dependent on sponsors

Grandparents who come to Canada through the super visa are financially reliant on their sponsors. Even though the government recognizes that the number of sponsored grandparents applying for old age security is relatively small, treating migrant grandparents as economic burdens allows governments to justify caps and application pauses on PGP sponsorship.

Contrary to governments’ framing of the super visa as aligning with migrant families’ demands for temporary care, our research shows that grandparents often resort to humanitarian and compassionate applications to obtain permanent residence once their super visa has expired. In these cases, their ability to perform care work is further scrutinized.

In terms of grandparent sponsorship, care is largely understood as temporary and one-directional — in other words, migrant grandparents are welcomed when they provide care, but are seen as liabilities when they need care themselves.






Prioritizing the needs of migrant families

How do we reconcile government claims that family reunification is a “fundamental pillar of Canadian society” with the reality that permanent grandparent reunification remains difficult to obtain?

Intake announcements like the most recent one in July allow governments to celebrate permanent grandparent migration. At the same time, the inconsistency of the PGP and solutions like the super visa keep migrant grandparents in a state of legal, political and economic precarity.

With the Liberal government announcing cuts to family class admissions over the next three years, the impact of these changes on grandparent reunification warrants attention.

Rather than temporary reforms and routes, the government needs to consider structural changes to Canada’s family class pathway that focus on the needs and interests of families seeking permanent reunification.

The Conversation

Megan Gaucher receives funding from the Social Sciences and Humanities Research Council.

Asma Atique receives funding from Mitacs and the College of Immigration and Citizenship Consultants. She is affiliated with CERC Migration and Integration and volunteers for South Asian Women and Immigrants’ Services.

Ethel Tungohan receives funding from the Social Sciences and Humanities Research Council.

Harshita Yalamarty receives funding from the Social Sciences and Humanities Research Council.

ref. Here’s why Canada’s parents and grandparents reunification program is problematic – https://theconversation.com/heres-why-canadas-parents-and-grandparents-reunification-program-is-problematic-262263

Bruce Springsteen’s ‘Born to Run’ still speaks to a nation vacillating between hope and despair

Source: The Conversation – USA (2) – By Louis P. Masur, Distinguished Professor of American Studies and History, Rutgers University

Bruce Springsteen performs in Atlanta on Aug. 22, 1975, during the ‘Born to Run’ tour. Tom Hill/WireImage via Getty Images

I was 18 when Bruce Springsteen’s third album, “Born to Run,” was released 50 years ago, and it couldn’t have come at a better time.

I’d just finished my freshman year in college, and I was lost. My high school girlfriend had broken up with me by letter. I had no idea what I wanted to do with my life. I was stuck back in my parents’ apartment in the Bronx.

So when I dropped the record onto my Panasonic turntable and Springsteen sang, “So you’re scared and you’re thinking/That maybe we ain’t that young anymore” on the opening track, “Thunder Road,” I felt as if he were speaking directly to me.

But no song moved me more than the album’s title track, “Born to Run.” How I longed for that sort of love – and how I also felt strangled by the “runaway American dream.” The song was about getting out, but also about searching for a companion. I, too, was a “scared and lonely rider” who craved arriving at a special place. Decades later, I combined the personal and the professional and wrote a book about the making and meaning of the album.

All eyes on the Boss

The album was shaped by the times, particularly the malaise of the post-Vietnam and post-Watergate American landscape. There was an energy crisis, and it wasn’t only oil that was in short supply.

The excitement of the 1960s had passed, and rock ’n’ roll itself was in the doldrums. Elvis had become a Las Vegas lounge act; the Beatles had broken up; Bob Dylan had been a recluse since his motorcycle accident in 1966. The No. 1 hit in 1975 was “Love Will Keep Us Together,” by the Captain and Tennille. Obituaries for rock music appeared regularly.

Springsteen went into the studio feeling the pressure to produce. His first two albums had received good reviews but sold poorly. After seeing a show in Cambridge, Massachusetts, in 1974, writer Jon Landau proclaimed Springsteen “the future of rock ’n’ roll.” Springsteen wore the label uneasily, though he had more than enough ambition to try to fulfill the prophecy: He later called “Born to Run” “my shot at the title, a 24-year-old kid aiming at the greatest rock ’n’ roll record ever.”

But in the studio, he struggled. It took him six months to record the title song. He kept rewriting the lyrics and experimenting with different sounds. He was composing epics: “Tenth Avenue Freeze-Out,” “Backstreets,” “Jungleland.” And he was trying to tie it all together thematically as his characters searched for love and connection and endured disappointment and heartbreak.

When Springsteen was finally done with the album, he hated it. He even threw a test pressing into a pool. But Landau, who had come on to co-produce, convinced him to release it.

Poetry for the masses

Despite Springsteen’s apprehension, the response to “Born to Run” was remarkable. Hundreds of thousands of copies flew off the shelves.

Springsteen appeared on the covers of Newsweek and Time, where he was hailed as “Rock’s New Sensation.” Writing in Rolling Stone, critic Greil Marcus called the record “a magnificent album that pays off on every bet ever placed on him.”

There was backlash from some corners: critics who resented all the hype Springsteen had received and who thought the music bombastic. But most agreed with John Rockwell of The New York Times, who praised the album’s songs as “poetry that attains universality. … You owe it to yourself to buy this record.”

An operatic drama

The album pulsates between hope and despair. Side 1 carries listeners from the elation of “Thunder Road” to the heartbreak of “Backstreets,” and Side 2 repeats the trajectory, from the exhilaration of “Born to Run” to the anguish of “Jungleland.”

I felt I knew the characters in these songs – Mary and Wendy, Terry and Eddie – and I identified with the narrator’s struggles and dreams. They all wrestled with feeling stuck. They longed for something bigger and more exciting. But what was the price to pay for taking the leap – whether for love or the open road?

These lyrical, operatic songs about freedom and fate, triumph and tragedy, still resonate, even though today’s music is more likely to emphasize beats, samples and software than extended guitar and saxophone solos. Springsteen continues to tour, and fans young and old fill arenas and stadiums to hear him because rock ’n’ roll still has something to say, still makes you shout, still makes you feel alive.

“It’s embarrassing to want so much, and to expect so much from music,” Springsteen said in 2005, “except sometimes it happens – the Sun Sessions, Highway 61, Sgt. Pepper’s, the Band, Robert Johnson, Exile on Main Street, Born to Run – whoops, I meant to leave that one out.”

In fall 1975, I played “Born to Run” over and over in my dorm room. I’d stare at Eric Meola’s cover photograph of a smiling Springsteen in leather jacket and torn T-shirt, his guitar pointing out and upward as he gazes toward his companion.

Who wouldn’t want to join Springsteen and his legendary saxophonist, Clarence Clemons, on their journey?

That October, I went on a first date with a girl. We’ve been married 44 years, and the stirring declaration from “Born to Run” has proven true time and again: “love is wild, love is real.”

Saxophonist Clarence Clemons, Bruce Springsteen and guitarist Steven Van Zandt perform in the U.K. during the European leg of the ‘Born to Run’ tour.
Andrew Putler/Redferns via Getty Images

The Conversation

Louis P. Masur does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

ref. Bruce Springsteen’s ‘Born to Run’ still speaks to a nation vacillating between hope and despair – https://theconversation.com/bruce-springsteens-born-to-run-still-speaks-to-a-nation-vacillating-between-hope-and-despair-263168

‘These people do it naturally’: President Trump’s views on immigrant farmworkers reflect a long history of how farming has been idealized and practiced in America

Source: The Conversation – USA – By Doug Sackman, Professor of History, University of Puget Sound

Farmworkers harvest celery on March 9, 2024, in Yuma, Ariz. John Moore/Getty Images

The Trump administration’s mass deportation campaign has not spared the U.S. agricultural industry, with agents from Immigration and Customs Enforcement frequently raiding farms across the country in search of undocumented workers.

Now, farmers are facing a crisis the administration has helped create: not enough people to pick crops.

On a recent call to CNBC, President Donald Trump said, “We can’t let our farmers not have anybody.” To assure farmers that he had their back despite the immigration raids, he sought to distinguish immigrants he called “criminals” and “murderers” from nonthreatening farm laborers who have been picking crops for years.

To do so, Trump used an old stereotype for farmworkers: “These people do it naturally, naturally.” Trump recounted asking a farmer: “What happens if they get a bad back? He said, ‘They don’t get a bad back, sir, because if they get a bad back, they die.’”

“In many ways, they’re very, very special people,” said Trump, referring to undocumented farmworkers.

Trump is labeling some of the people his administration has targeted for deportation as naturals.

As a historian of American agriculture and labor, I think the Trump administration’s contradictions on farmworkers are part of a long history of idealizing farming in America. It’s a history in which race, nature, exploitation and the very identity of America itself have all been involved.

From Jefferson to Sunkist

Thomas Jefferson, most famous for writing the Declaration of Independence, also declared, “Those who labour in the earth are the chosen people of God.”

Jefferson thought America’s true calling was to be an agrarian nation, for virtuous and independent farmers would also be perfect citizens. But Jefferson didn’t actually get his own hands dirty. He told John Quincy Adams that he “knew nothing” about farming.

In the musical “Hamilton,” the Founding Father Alexander Hamilton crystallizes the critiques against what came to be called “Jeffersonian agrarianism,” a view that praises agricultural life and the virtues of farmers but fails to acknowledge that it was not the planters who did the backbreaking work: “‘We plant seeds in the South. We create.’ Yeah, keep ranting: We know who’s really doing the planting.”

The image of America built up by white farmers contrasted with a reality that “those who labour in the earth” were often enslaved people. As the cotton empire expanded, so did slavery.

Apologists for this system of inequality argued that the “natural station” of Black people was to be enslaved. Black people were portrayed as natural manual laborers – and by extension, the institution of slavery itself was defended as natural, rather than an abrogation of the “natural rights” promised to all men in the Declaration of Independence.

American agricultural leaders in the early 20th century, as I document in my book “Orange Empire,” adapted these forms of “naturalization” – a concept developed by cultural theorists to describe the process through which man-made things such as racial hierarchies are made to appear natural.

Mexican migrant workers harvest crops on a California farm in 1964.
AP Photo

In this naturalizing mode, the Los Angeles Chamber of Commerce argued in 1929 that “much of California’s agricultural labor requirements consist of those tasks to which the oriental and Mexican due to their crouching and bending habits are fully adapted, while the white is physically unable to adapt himself to them.”

As I describe in my book, the president of the citrus growers cooperative Sunkist insisted in 1944 that Mexicans “are naturally adapted to agricultural work, particularly in the handling of fruits and vegetables.”

Through this naturalization, racism appeared to be made in nature. Everything in farming – all of the food grown in what author Carey McWilliams called “factories in the field” in his 1939 exposé – was carefully constructed by farmers, their lobbyists and their advertisers to appear natural. That includes the racism and labor exploitation at the heart of it.

While naturalizing workers as evolutionarily adapted to stoop labor, this system all but denied undocumented farmworkers legal access to the other kind of naturalization: becoming full citizens.

So when anti-immigrant ideology sparks ICE raids and deportations, the nation’s farms end up losing the labor they have long relied on.

Whose homeland?

On X, the U.S. Department of Homeland Security has been presenting itself as if it’s on a mission to secure a white homeland. It has posted videos of white people enjoying America’s natural wonders to the tune of Woody Guthrie’s “This Land is Your Land” and paintings that propagandize manifest destiny, the idea that the U.S. is destined to extend its dominion across North America.

Homeland Security recently posted John Gast’s 1872 painting “American Progress” as a “Heritage to be proud of.” It depicts a luminous white goddess flying west over the American landscape, with white farmers plowing the soil beneath, while petrified Native Americans, shrouded in darkness, are being chased from their homelands.

As I and others have pointed out, Homeland Security is using coded messages to affirm white supremacists’ vision of turning America into a white homeland.

On the ground in America today, nonwhite immigrants are fleeing from immigration agents, as if the Gast painting is coming to life. The United Farm Workers union, referring to “videos of agents chasing farm workers thru the field,” says that “workers are terrorized.” One worker said they are “being hunted like animals.”

‘Grounds for dreaming’

Trump told CNBC that he does not believe that “inner city” people can come to the rescue of farmers, whose source of labor has been decimated.

As Politico reports, Trump is now floating the idea of expanding an existing visa program for temporary agricultural workers and creating a new program that requires them to leave the U.S. before reentering legally. If so, he would essentially be reinventing the Bracero Program – the U.S. guest worker program with Mexico created at the behest of California growers during World War II that lasted until the 1960s.

Mexican farmworkers in 1951 register to work in the U.S. through the Bracero Program.
PhotoQuest/Getty Images

Ian Chandler is an Oregon farmer whose cherries are rotting on the trees because he’s lost the farmworkers who normally pick them. He recently told CNN that these people “are part of our community, just like my arm is connected to my body, they are part of us. So it’s not just a matter of like cutting them off … if we lose them we lose part of who we are as well.”

The Spanish word bracero roughly translates to someone who works with their arms, but the earlier guest worker program didn’t have the same inclusive meaning Chandler intends. Instead, it racialized Mexicans as natural farmworkers, as mere brawn extracted from human beings who were otherwise excluded from the community.

As historian S. Deborah Kang notes, “Sumner Welles, former under secretary of state to President Franklin Delano Roosevelt, excoriated the ‘poisoning discriminations’ faced by bracero workers and equated their experiences with the ‘Juan Crow’ racism.”

Over the course of American history, many Americans have held out hope that the U.S. would create a farming nation that lives up to the original promise of an organic democracy, built from the ground up – the democracy Jefferson mythologized, but one in which all Americans are included.

As historians Camille Guerin-Gonzales and Lori Flores have shown, farmworkers, whatever their official status, have worked hard to find “grounds for dreaming” in America.

Making that American dream a reality involves seeing farmworkers for who they are, I believe: vital members of the body politic who reconnect all Americans to nature through the foods they eat.

The Conversation

Doug Sackman does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

ref. ‘These people do it naturally’: President Trump’s views on immigrant farmworkers reflect a long history of how farming has been idealized and practiced in America – https://theconversation.com/these-people-do-it-naturally-president-trumps-views-on-immigrant-farmworkers-reflect-a-long-history-of-how-farming-has-been-idealized-and-practiced-in-america-262858

New research suggests that studying philosophy makes people better thinkers

Source: The Conversation – USA (2) – By Michael Vazquez, Teaching Assistant Professor of Philosophy, University of North Carolina at Chapel Hill

Students take a philosophy test in Strasbourg, France, on June 18, 2024. Frederick Florin/AFP via Getty Images

Philosophy majors rank higher than all other majors on verbal and logical reasoning, according to our new study published in the Journal of the American Philosophical Association. They also tend to display more intellectual virtues such as curiosity and open-mindedness.

Philosophers have long claimed that studying philosophy sharpens one’s mind. What sets philosophy apart from other fields is that it is not so much a body of knowledge as an activity – a form of inquiry. Doing philosophy involves trying to answer fundamental questions about humanity and the world we live in and subjecting proposed answers to critical scrutiny: constructing logical arguments, drawing subtle distinctions and following ideas to their ultimate – often surprising – conclusions.

It makes sense, then, that studying philosophy might make people better thinkers. But as philosophers ourselves, we wondered whether there is strong evidence for that claim.

Students who major in philosophy perform very well on tests such as the Graduate Record Examination and Law School Admission Test. Studies, including our own, have found that people who have studied philosophy are, on average, more reflective and more open-minded than those who haven’t. Yet this doesn’t necessarily show that studying philosophy makes people better thinkers. Philosophy may just attract good thinkers.

Our latest study aimed to address that problem by comparing students who majored in philosophy and those who didn’t at the end of their senior year, while adjusting for differences present at the start of their freshman year. For example, we examined students’ performance on the GRE, which they take toward the end of college, while controlling for scores on the SAT, which they take before college.

We did the same when analyzing survey data collected by the Higher Education Research Institute at the start and end of college. These surveys asked students to, for example, rate their abilities to engage with new ideas or have their own ideas challenged, and how often they explored topics raised in class on their own or evaluated the reliability of information.

All told, we looked at test and survey data from over 600,000 students. Our analysis found that philosophy majors scored higher than students in all other majors on standardized tests of verbal and logical reasoning, as well as on self-reports of good habits of mind, even after accounting for freshman-year differences. This suggests that their intellectual abilities and traits are due, in part, to what they learned in college.
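For readers curious what this kind of adjustment looks like in practice, here is a minimal sketch of one standard approach – a regression that compares an end-of-college outcome across majors while holding a pre-college baseline constant. The data, column names and coefficients below are simulated for illustration only; this is not the study’s actual data or code.

```python
# Toy illustration of baseline adjustment: regress a senior-year outcome
# (GRE) on a major indicator while controlling for a pre-college baseline
# (SAT). All data here are simulated, not from the study.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 1_000
sat = rng.normal(1200, 150, n)           # pre-college baseline score
philosophy = rng.integers(0, 2, n)       # 1 = philosophy major (hypothetical)
# Simulated GRE scores that depend on both baseline ability and major
gre = 150 + 0.01 * sat + 1.5 * philosophy + rng.normal(0, 2, n)

df = pd.DataFrame({"gre": gre, "sat": sat, "philosophy": philosophy})

# The coefficient on `philosophy` estimates the major's association with
# senior-year GRE among students who started with similar SAT scores.
model = smf.ols("gre ~ philosophy + sat", data=df).fit()
print(model.params["philosophy"])
```

Because the baseline is held constant, preexisting differences in ability are not credited to the major itself, though unmeasured differences could still bias such an estimate – one reason the authors frame their results as suggestive rather than conclusive.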

Why it matters

Public trust in higher education has hit record lows in recent years, according to polling by the Lumina Foundation and Gallup. Meanwhile, the rapid advance of generative AI has threatened the perceived value of a traditional college degree, as many previously vaunted white-collar skills are at risk of being automated.

Yet now more than ever, students must learn to think clearly and critically. AI promises efficiency, but its algorithms are only as good as the people who steer them and scrutinize their output.

The stakes are more than personal. Without citizens who can reason through complex issues and discern good information from bad, democracy and civic life are at risk.

What still isn’t known

While our results point to real growth in students’ intellectual abilities and dispositions, they do not capture everything philosophers mean by “intellectual virtue.” Intellectual virtue is not just a matter of possessing certain abilities but of using those abilities well: at the right times, for the right reasons, and in the right ways.

Our measures do not tell us whether philosophy majors go on to apply their newfound abilities in the service of truth and justice or, conversely, for personal gain and glory. Settling that question would require gathering a different kind of evidence.

The Research Brief is a short take on interesting academic work.

The Conversation

The research described in this article was supported by a grant from the American Philosophical Association.

ref. New research suggests that studying philosophy makes people better thinkers – https://theconversation.com/new-research-suggests-that-studying-philosophy-makes-people-better-thinkers-262681

Why America still needs public schools

Source: The Conversation – USA (2) – By Sidney Shapiro, Professor of Law, Wake Forest University

While the White House’s fight with elite universities such as Columbia and Harvard has recently dominated the headlines, the feud overshadows the broader and more far-reaching assault on K-12 public education by the Trump administration and many states.

The Trump administration has gutted the Department of Education, imperiling efforts to protect students’ civil rights, and proposed billions in public education cuts for fiscal year 2026. Meanwhile, the administration is diverting billions of taxpayer funds into K-12 private schools. These moves build upon similar efforts by conservative states to rein in public education going back decades.

But the consequences of withdrawing from public education could be dire for the U.S. In our 2024 book, “How Government Built America,” we explore the history of public education, from Horace Mann’s “common school movement” in the early 19th century to the GI Bill in the 20th that helped millions of veterans go to college and become homeowners after World War II.

We found that public education has been essential not only for creating an educated workforce but also for inculcating the United States’ fundamental values of liberty, equality, fairness and the common good.

In the public good

Opponents of public education often refer to public schools as “government schools,” a pejorative that seems intended to associate public education with “big government” – seemingly at odds with the small government preference of many Americans.

But, as we have previously explored, government has always been a significant partner with the private market system in achieving the country’s fundamental political values. Public education has been an important part of that partnership.

Education is what economists call a public good, which means it benefits not only students but the country as well.

Mann, an education reformer often dubbed the father of the American public school system, argued that universal, publicly funded, nonsectarian public schools would help sustain American political institutions, expand the economy and fend off social disorder.

Horace Mann was a pioneer of free public schools and Massachusetts’ first secretary of education.
traveler1116/iStock via Getty Images

In researching Mann’s common schools and other educational history for our book, two lessons stood out to us.

One is that the U.S. investment in public education over the past 150 years has created a well-educated workforce that has fueled innovation and unparalleled prosperity.

As our book documents, for example, in the late 19th and early 20th centuries the states expanded public education to include high school to meet the increasing demand for a more educated citizenry as a result of the Industrial Revolution. And the GI Bill made it possible for returning veterans to earn college degrees or train for vocations, support young families and buy homes, farms or businesses, and it encouraged them to become more engaged citizens, making “U.S. democracy more vibrant in the middle of the twentieth century.”

The other, equally significant lesson is that the democratic and republican principles that propelled Mann’s vision of the common school have colored many Americans’ assumptions about public schooling ever since. Mann’s goal was a “virtuous republican citizenry” – that is, a citizenry educated in “good citizenship, democratic participation and societal well-being.”

Mann believed there was nothing more important than “the proper training of the rising generation,” calling it the country’s “highest earthly duty.”

Attacking public education

Today, Mann’s vision and all that’s been accomplished by public education is under threat.

Trump’s second term has supercharged efforts by conservatives over the past 75 years to control what is taught in the public schools and to replace public education with private schools.

Most notably, Trump has begun dismantling the Department of Education to devolve more policymaking to the state level. The department is responsible for, among other things, distributing federal funds to public schools, protecting students’ civil rights and supporting high-quality educational research. It has also been responsible for managing over a trillion dollars in student loans – a function that the administration is moving to the Small Business Administration, which has no experience in loan management.

The president’s March 2025 executive order has slashed the department’s staff in half, with especially deep cuts to the Office for Civil Rights, which, as noted, protects students from illegal discrimination.

Trump’s efforts to slash education funding have so far hit roadblocks with Congress and the public. The administration is aiming to cut education funding by US$12 billion for fiscal year 2026, which Congress is currently negotiating.

And contradicting its stance on ceding more control to states and local communities, the administration has also been mandating what can’t and must be taught in public schools. For example, it’s threatened funding for school districts that recognize transgender identities or teach about structural racism, white privilege and similar concepts. On the other hand, the White House is pushing the use of “patriotic” education that depicts the founding of the U.S. as “unifying, inspiring and ennobling.”

The Trump administration has been increasingly mandating what teachers can and cannot teach in their classrooms.
adamkaz/E+ via Getty Images

Promoting private education

As Trump and states have cut funding and resources to public education, they’ve been shifting more money to K-12 private schools.

Most recently, the budget bill passed by Congress in July 2025 gives taxpayers a tax credit for donations to organizations that fund private school scholarships. The credit, which unlike a deduction counts directly against how much tax someone owes, is $1,700 for individuals and double for married couples. The total cost could run into the billions, since it’s unclear how many taxpayers will take advantage.

Meanwhile, 33 states direct public money toward private schools by providing vouchers, tax credits or another form of financial assistance to parents. Altogether, states allocated $8.2 billion to support private school education in 2024.

Government funding of private schools diverts money away from public education and makes it more difficult for public schools to provide the quality of education that would most benefit students and the public at large. In Arizona, for example, many public schools are closing their doors permanently as a result of the state’s support for charter schools, homeschooling and private school vouchers.

That’s because public schools are funded based on how many students they have. As more students switch to private schools, there’s less money to cover teacher salaries and fixed costs such as building maintenance. Ultimately, that means fewer resources to educate the students who remain in the public school system.

Living up to aspirations

We believe the harm to the country of promoting private schools while rolling back support for public education is about more than dollars and cents.

It would mean abandoning the principle of universal, nonsectarian education for America’s children. And in so doing, Mann’s “virtuous citizenry” will be much harder to build and maintain.

America’s private market system, in which individuals are free to contract with each other with minimal government interference, has been important to building prosperity and opportunity in the U.S., as our book documents. But, as we also establish, relying on private markets to educate America’s youth makes it harder to create equal opportunity for children to learn and be economically successful, leaving the country less prosperous and more divided.

The Conversation

Sidney Shapiro is affiliated with the Center for Progressive Reform.

Joseph P. Tomain does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

ref. Why America still needs public schools – https://theconversation.com/why-america-still-needs-public-schools-260368

Hulk Hogan’s daughter can’t write herself out of the wrestler’s will – but she can refuse to take his money

Source: The Conversation – USA (2) – By Reid Kress Weisbord, Distinguished Professor of Law and Judge Norma Shapiro Scholar, Rutgers University – Newark

The outspoken wrestler attends a news conference in 2014. Dimitrios Kambouris/Getty Images

When professional wrestler and former reality TV star Hulk Hogan died on July 24, 2025, he left behind a grieving widow, two ex-wives, two children, two grandchildren he reportedly never met and a US$25 million fortune. He was 71 years old and died after having a heart attack.

News quickly broke that his daughter, entertainer Brooke Hogan, was estranged from her father, that she would “get nothing” from him and that she had arranged to have herself “taken out” of his will by asking Hogan’s financial manager to remove her name.

After her father, legally known as Terry Bollea, died, Brooke Hogan posted on Instagram. She said that they had shared a “quiet, sacred bond,” and it felt like part of her “spirit left with him.” But she also claimed that she had been “verbally and mentally abused since childhood.” She did not attend his funeral.

As law professors who research and write about trusts and estates, we teach courses about the transfer of property during people’s lifetimes and after they’ve died. We believe that the questions arising over who will inherit Hogan’s wealth offer important insights into how family estrangement can affect estate planning.

Making an unusual request

Journalists initially speculated about why Brooke “wrote herself out” of her father’s will, and earlier this month she explained: She was “scared” of the fighting that would come after her dad died. She said she was worried that there might be conflicts with her mother and her father’s third wife, Sky Daily.

But, to be clear, while anyone can ask to be left out of a will, they really don’t have any control over whether that ultimately happens.

A will is a document that spells out how money and other property should be distributed after someone dies.

The person who signs the will – technically known as the “testator” – is the only one with the power to change or revoke their own will. They don’t have to tell anyone what the will says.

Even the witnesses who sign it may not know who is left any property. They usually are not informed of the bequests made within the will, and some states don’t even require that witnesses know the document being signed is a will.

That means only Hulk Hogan could decide what his will said about who would get what once he died. Yes, Brooke Hogan could have asked him to exclude her from his will, but the final decision was up to him.

Refusing an inheritance

A will’s provisions only become final when someone dies.

Until then, the person can change their will multiple times. At death, the will can no longer be changed, but anyone who is named to receive property can refuse their inheritance. Refusing gifted or inherited property is known as a “disclaimer.”

Typically, when someone disclaims property, it goes to the next people in line – usually the disclaiming heir’s children. If there are no children to take the disclaimed share, the assets may go to siblings or other relatives.

You might wonder why anyone would turn down an inheritance in the first place. One common situation arises when an heir is deep in debt. In these cases, disclaiming an inheritance can allow them to keep the money in the family.

This arrangement is legal in many states as long as the heir is not already in bankruptcy proceedings.

A hypothetical example

To understand how this might work, suppose an heir, whom we’re calling “Pat,” is left $4 million in his mother’s will. The timing is terrible for him because he’s deeply in debt, owing $5 million to creditors after the company he launched went belly up.

If Pat accepts that fortune, his creditors would be able to seize it. But he has a daughter, whom we’re calling “Marcy.” By refusing his inheritance, that $4 million can pass directly to Marcy. This arrangement is complicated but could leave Pat’s family better off because Marcy is free to spend her grandmother’s millions to pay off her college loans, buy a house and pay for her father’s rent – all without any risk of those assets being taken by his creditors.

Refusing an inheritance can also reduce the estate tax for the person who does so. The estate tax applies when people transfer lots of wealth at death. Under the tax reforms passed in July 2025, the first $15 million is exempt from the estate tax for individuals as of 2026, and twice that much for married couples. The exemption threshold is adjusted yearly for inflation.

The federal estate tax rate, which applies only to anything beyond the $15 million mark, is 40%, although many rich people take steps to reduce its impact through a handful of financial planning techniques. Many states have their own estate and inheritance taxes too.

Turning back to Pat’s mother, suppose that her estate was worth $100 million. If Pat accepts the inheritance, a 40% estate tax would apply. And if Pat leaves more than $15 million in 2026 dollars to his heirs without taking any steps to shield those assets, another 40% estate tax would be levied when he leaves his fortune to Marcy. But if Pat disclaims, then the government would only collect the estate tax once because his mom’s assets would skip him and go straight to Marcy.
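To make the arithmetic concrete, here is a minimal sketch of the two scenarios, using the simplified figures above – a $100 million estate, a $15 million exemption and a flat 40% rate on the excess. Real estate-tax calculations involve far more variables, so treat this purely as an illustration.

```python
# Simplified illustration of why disclaiming can avoid a second round of
# estate tax. Figures are hypothetical: a $100M estate, a $15M exemption
# and a flat 40% rate on the amount above the exemption.
EXEMPTION = 15_000_000
RATE = 0.40

def estate_tax(estate: float) -> float:
    """Tax owed on the portion of an estate above the exemption."""
    return max(estate - EXEMPTION, 0) * RATE

estate = 100_000_000

# Scenario 1: Pat accepts, then later leaves what's left to Marcy.
after_first_tax = estate - estate_tax(estate)                      # Mom -> Pat
after_second_tax = after_first_tax - estate_tax(after_first_tax)   # Pat -> Marcy

# Scenario 2: Pat disclaims, so the estate is taxed only once.
after_single_tax = estate - estate_tax(estate)                     # Mom -> Marcy

print(f"Taxed twice, Marcy ends up with ${after_second_tax:,.0f}")
print(f"Taxed once,  Marcy ends up with ${after_single_tax:,.0f}")
```

On these simplified assumptions, disclaiming preserves roughly $20 million that would otherwise go to a second round of tax.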

Because Brooke Hogan asked to be disinherited before her father died, her request wasn’t a typical disclaimer. If she was not, in fact, included in Hulk Hogan’s will, as she requested, then there would be nothing for her to disclaim.

If Brooke Hogan is named in her father’s will and disclaims now that Hulk Hogan is dead, the most likely outcome is that her children, twins who were born in January 2025, would get their mother’s inheritance.

A few years after a reality TV show about the foibles of Hulk Hogan’s family began to air, he and his first wife got divorced.

Being estranged from close relatives

Estate disputes can become very contentious when members of a family are estranged, meaning that their relationships have soured or even broken off completely.

Although Brooke Hogan was reportedly concerned about avoiding litigation, given her estrangement from her father, it is usually the testator who takes action during the estate planning process to prevent disputes after they die.

That’s one reason why most wills – nearly 70% of them according to one study – include a “no contest” clause.

These clauses typically say something like “anyone who contests my will shall be disinherited from my estate.” Estate planners who recommend this technique believe that the penalty discourages unhappy heirs from filing lawsuits, which usually incur high attorney’s fees.

Fighting over money after a relative dies

What makes Brooke Hogan’s case unusual is that she asked to be disinherited to avoid a court battle over her father’s estate.

In many estranged families, the situation is the opposite: Heirs sue because they are disappointed by their share of the estate when they are either disinherited or given less than expected.

For Brooke Hogan, who says that she asked to be left out of her father’s will to avoid involving herself in any new family conflicts, the concern is understandable. Estate litigation can take an emotional toll by dragging grieving relatives into courtroom battles that are lengthy, expensive and make family rifts even worse.

The Conversation

The authors do not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and have disclosed no relevant affiliations beyond their academic appointment.

ref. Hulk Hogan’s daughter can’t write herself out of the wrestler’s will – but she can refuse to take his money – https://theconversation.com/hulk-hogans-daughter-cant-write-herself-out-of-the-wrestlers-will-but-she-can-refuse-to-take-his-money-262560

State Department layoffs could hurt US companies’ ability to compete globally – an economist explains why

Source: The Conversation – USA (2) – By Carey Durkin Treado, Associate Teaching Professor of Economics, University of Pittsburgh

When more than 1,300 people at the U.S. State Department lost their jobs in a mass firing this summer, most headlines focused on what it meant for American diplomacy. But the layoffs are about more than embassies and foreign policy – they could also make it harder for U.S. companies to compete in global markets.

The July layoffs – part of a sweeping Trump administration reorganization effort, with more cuts still expected – eliminated the State Department’s Business and Human Rights (BHR) team, which helps American businesses avoid committing human rights abuses and violating international laws.

As an economist who studies international trade, I know that BHR is an area of growing importance for both global governance and U.S. competitiveness. In addition to being an academic, I have worked at several U.S. trade agencies and the World Bank, and in 2019-20 I served as a Franklin Fellow with the State Department’s Bureau of Democracy, Human Rights and Labor. In that role, I worked closely with the BHR team and saw how critical their expertise was in helping U.S. companies navigate shifting global human rights risks and regulations.

Losing that support puts American businesses at risk of falling behind market trends and expectations.

The rise of business and human rights policy

Global norms governing business and human rights have been evolving for more than 75 years, starting with the 1948 Universal Declaration of Human Rights. While that landmark document was geared toward governments, in 2011 the United Nations Guiding Principles on Business and Human Rights and the OECD Guidelines for Multinational Enterprises represented explicit guidance from member countries – including the United States – that companies, not just governments, are responsible for respecting human rights.

This guidance means that businesses must avoid causing or contributing to human rights abuses through their operations or supply chain relationships. Potential supply chain concerns include both “upstream impacts,” such as purchasing from suppliers that use forced labor, and “downstream impacts,” such as selling products to oppressive governments.

These sorts of risks are more common than you might think. Nearly 28 million people are in forced labor globally, making products from cotton to car parts, according to the International Labour Organization. Downstream concerns have focused recently on the sale of AI and surveillance tools to authoritarian governments such as Iran. Avoiding these abuses not only reduces business risk but also helps weaken incentives for such practices.

For decades, the State Department has taken the lead within the federal government in the task of promoting U.S. human rights policies globally. Historically, its three main responsibilities in this area have included reporting on human rights conditions at the country level, providing foreign assistance to promote human rights, and engaging in diplomatic efforts to improve human rights conditions globally.

The portfolio of the BHR team fell mainly within this third area of responsibility and included providing expertise as international policies related to business and human rights continued to expand.

The rise of human rights due diligence laws

Over the past 10 years, some of the world’s largest economies have begun to enact laws that require businesses to conduct risk analyses and publicly report on their human rights impacts. These laws – known as human rights due diligence, or HRDD, laws – have been passed or proposed in the European Union, France, the Netherlands, Germany, the United Kingdom, Australia, South Korea and Thailand.

Of particular importance is the EU Corporate Sustainability Due Diligence Directive, which was adopted by the EU in July 2024 and will begin to go into effect in 2028. Its broad scope will reshape compliance for global companies across markets, industries and supply chains.






Although it is too soon to measure the full impact, many companies, industry groups and associations have endorsed human rights due diligence laws, publishing statements of support that frame the laws as leveling the playing field for responsible business activity. A 2025 survey of 1,300 German corporate decision-makers found that most believed their country’s HRDD law gave them an edge over European competitors – and 44% said it gave them an advantage over U.S. and Chinese companies as well.

The US falls behind on sustainability

U.S.-based multinational companies are subject to the HRDD laws in the countries in which they do business. Starting in 2028, these companies will need to comply with new human rights laws if they want to participate in the EU market. Although some industry groups support these laws, others, such as the U.S. Chamber of Commerce, have expressed concerns about the implementation timeline and some specific requirements.

Prior to the reorganization, the State Department worked closely with multilateral and international organizations, as well as other governments, to establish clear policy frameworks for business and human rights. By eliminating the Office of Multilateral and Global Affairs, which housed the BHR team, the reorganization of the State Department has effectively eliminated this source of expertise and support for U.S. businesses operating in global markets.

In my professional experience, which stretches back to the economic boom period of the 1990s, U.S. competitiveness depends upon a clear understanding of global markets and policies. U.S. businesses must be able to work within the regulatory framework of the countries of their suppliers, partners and customers.

In addition to government regulations, U.S. corporations face pressure from their consumers and investors, who are increasingly interested in supporting corporations that can demonstrate responsible business practices. From fair-trade coffee to environmental, social and governance investment portfolios, markets are increasingly placing value on products and businesses that can demonstrate respect for human rights. Despite political controversy and backlash, market analysts continue to predict steady growth in investor demand for ESG investment opportunities, with ESG assets on track to reach $40 trillion by 2030.

In order to best position U.S. businesses to understand and navigate the emerging role of human rights issues in global markets, the U.S. government needs expertise in these issues. By jettisoning that expertise, I believe the country risks weakening its global business position.

The Conversation

I served as a Franklin Fellow with the U.S. Department of State during the 2019-2020 academic year.

ref. State Department layoffs could hurt US companies’ ability to compete globally – an economist explains why – https://theconversation.com/state-department-layoffs-could-hurt-us-companies-ability-to-compete-globally-an-economist-explains-why-262988

Parenting strategies are shifting as neuroscience brings the developing brain into clearer focus

Source: The Conversation – USA (3) – By Nancy L. Weaver, Professor of Behavioral Science, Saint Louis University

Grocery stores are a common source of tantrums and meltdowns. Cavan Images/Cavan via Getty Images

A friend offhandedly told me recently, “It’s so easy to get my daughter to behave after her birthday – there are so many new toys to take away when she’s bad!”

While there is certainly an appeal to such a powerful parenting hack, the truth is that there’s a pretty big downside to parenting with punishments.

For about the past two decades, scientists have been discovering more and more about the growing brain. This exploration of neurobiology has led to new types of trauma treatments, a deeper understanding of the nervous system and an appreciation of how environmental and genetic factors interact to shape a child’s behavior.

As the science has become increasingly actionable, more evidence-based strategies are spilling into parenting and educational programs. Research offers some useful guideposts for how parents and caregivers can change their adult ways to foster healthy child development.

It turns out that many old-school parenting and educational approaches based on outdated behavioral models are not effective, nor are they best-practice, particularly for the most vulnerable children.

Why old-school methods fall short

I don’t come to this view lightly. I’m a behavioral scientist and a professor of public health with degrees in mathematics and biostatistics. When my children were little, I read all the parenting books and applied a somewhat academic strategy to my job of parenting. I firmly endorsed conventional recommendations from authors and pediatricians: I dutifully sent my children to their rooms to think about their choices and dug in my heels to enforce consequences.

It wasn’t until my children reached middle school and high school ages that I began to see what my approach to discipline was costing us.

Parents and educators have long espoused principles gleaned from experiments by the 20th-century researcher B.F. Skinner, a behavioral psychologist who studied how rewards and punishments could change the behavior of rats, resulting in the classic carrot-and-stick strategies of reward and discipline. Simply put, rats that behaved the way the researchers wanted – by pressing a lever – were given a treat, and rats that did not were given a light shock.

These midcentury, rat-based experiments shaped a parenting approach that caught on in American culture and quickly became dogma. Generations of parents learned to use rewards such as sticker charts, trinkets or toys, or an extra bedtime story to reinforce the behaviors they hoped to see more of, and to use punishments such as timeouts and loss of privileges to reduce unwanted behaviors.

But beginning in the early 2000s, many high-profile authors began to theorize that these strategies were not only ineffective but also potentially harmful.

B.F. Skinner primarily studied rats and pigeons to see how animals learn and modify their behavior in response to different stimuli and consequences.
Bettmann/Getty Images

The neuroscience of child behavior

We all have a built-in nervous system response that prepares us for “fight or flight” when we feel that our safety is threatened. When we sense danger for whatever reason, our heart beats faster, our palms sweat and our focus narrows. In these situations, our prefrontal cortex – the part of the brain responsible for rational decision-making and reasoning – is decommissioned while our body prepares to fend off the threat. It’s not until our threat response subsides that we can begin to think more clearly with our prefrontal cortex. This is particularly true for kids.

Unlike adults who have usually acquired some ability to regulate their nervous system states, a child has both an immature nervous system and an underdeveloped prefrontal cortex. A child may hit his friend with a toy truck because he’s unable to manage the scary feelings of being left out of the kickball game. He likely knows better, but in the face of this threat his survival brain responds with a “fight” response, and reasoning shuts down as his prefrontal cortex takes a while to get “back online.” Because he is not yet able to verbalize his needs, caregivers need to interpret those needs by observing the behavior.

After coregulating with a calm adult – essentially syncing up with their nervous system – a young child is able to return to a calm state and then process any learning. Efforts to change a child’s behavior in a moment of stress, including by punishments and timeouts, miss an opportunity for developing emotional regulation skills and often prolong the distress.

The behaviorist models just don’t work very well for children. The growing understanding of children’s developing brains makes clear that punishing a child for a temper tantrum or for “misbehaving” by grabbing a toy from a classmate makes no more sense than lecturing a man in cardiac arrest about eating less sugar.

Neuroscience-informed parenting is more effective than traditional reprimands and builds trust, connection and emotional regulation.
Halfpoint Images/Moment via Getty Images

Curiosity is the key to connection

Scientists and parenting experts have come a long way toward understanding how brain science can inform child-raising.

While researchers may not all agree on the most effective parenting style, there is general agreement that showing curiosity about kids’ feelings, behaviors, reactions and choices can help to guide parents’ approach during stressful times. Understanding more about why a child didn’t complete their math sheet, or why a toddler threw sand at their cousin, can support real learning.

Attuning with our children by understanding their nervous system responses helps kids feel a sense of safety, which then allows them to absorb feedback. Children who feel this connection and build these skills are much less likely to throw trucks.

For instance, when your child fusses for candy in the checkout line at the grocery store, instead of taking away the afternoon trip to the park, try this:

  • Stay grounded. A deep breath and a pause signal your own nervous system to calm down, which allows you to coregulate with a fussing child.

  • Be available. Staying close gives your child the support they need to weather the difficult emotion. Validating a child’s experience can go a long way toward helping them reset to a more regulated state.

  • Hold a boundary. By not giving in to the candy purchase, you help your child practice how to handle the emotion of anger and disappointment – called “distress tolerance” – with your support.

  • Reflect on the circumstances. After everyone is calmer, you can talk about that experience and also notice the circumstances. Was your child hungry or tired, or perhaps upset about something from their day?

Parenting with the understanding of a child’s developing brain is much more effective in shaping children’s behavior and paves the way for emotional growth for everyone, as well as stronger parent-child relationships, which are enormously protective.

And that definitely feels better than taking away their birthday presents.

The Conversation

Nancy L. Weaver, PhD, MPH, is the Founder and CEO of Support Over Silence, LLC, and a Professor of Public Health at Saint Louis University. She has received funding from the NIH and the CDC, among other agencies.

ref. Parenting strategies are shifting as neuroscience brings the developing brain into clearer focus – https://theconversation.com/parenting-strategies-are-shifting-as-neuroscience-brings-the-developing-brain-into-clearer-focus-254975

No end to the violence as Israel launches its assault on Gaza City

Source: The Conversation – UK – By Julie M. Norman, Senior Associate Fellow on the Middle East at RUSI; Associate Professor in Politics & International Relations, UCL

In Gaza City, Palestinians are fleeing a renewed Israeli assault to take control of the area, following days of air strikes that have killed dozens. Just days earlier in Cairo, Hamas officials announced their acceptance of a ceasefire proposal following negotiations with Qatari and Egyptian mediators – a deal now probably derailed by the assault. And across Israel, hundreds of thousands of Israelis demonstrated against Benjamin Netanyahu’s handling of the war, demanding an end to the fighting and the return of hostages.

It may be tempting to view Hamas’s announcement, combined with the protests, as a potential turning point. But for many in the region, and with Israel beginning a new ground offensive in Gaza, this week’s headlines look all too familiar.

Gaza City has been pummelled repeatedly throughout the 22-month war. Hamas has responded positively to various ceasefire proposals over the past year, only for them to break down in negotiations. And Israelis turned out for massive protests nearly a year ago against the government’s failure to reach a ceasefire-for-hostages deal. Weekly protests have continued since in both Tel Aviv and Jerusalem, to no avail.

Indeed, after spending the past month in the region, I find it hard to envisage an end to the violence any time soon. As one Israeli reservist told me: “Last year at this time, I didn’t imagine there could possibly be another year of war. Now, it’s hard to imagine there not still being a war in another year from now.” So where do things go from here?

Even before Israel’s renewed offensive, a ceasefire deal looked highly unlikely. This is despite the fact that the proposal accepted by Hamas is reportedly “98% similar” to the US-backed phased plan from July. That plan called for a 60-day truce, which would see about half of the hostages released while the two sides negotiate a lasting ceasefire. Hamas has also reportedly eased its demands regarding two of the major sticking points from the summer’s negotiations: the number of Palestinian prisoners serving life sentences to be released as part of the deal (reduced from 200 to 150), and the size of an Israeli buffer zone along the Gaza border (increased from 800 metres to one kilometre).

But the Israeli government has said it is no longer interested in a partial or phased deal, only a comprehensive agreement that would see all the hostages freed. While Netanyahu has not formally ruled out the current offer, various members of his governing coalition have already rejected it.

Israel and Hamas remain far apart regarding what “ending the war” actually means. Hamas has long maintained that an end to the war means the withdrawal of Israeli troops from Gaza and a guarantee that any truce be permanent. Meanwhile, Israel’s security cabinet has approved a five-point plan for ending the war that, along with the return of the hostages, includes disarming Hamas, demilitarising Gaza, and taking security control of the Strip, as well as establishing “an alternative civil administration that is neither Hamas nor the Palestinian Authority”.

Aside from the hostage release, all of these points present major challenges, especially disarming Hamas and “security control”. Given the group’s depleted state, some argue that Hamas might be willing to decommission weapons as part of a negotiated disarmament, demobilisation and reintegration process, similar to the IRA in Northern Ireland or the Farc in Colombia. But this would require disarmament to happen in the context of a broader, long-term political agreement.

This was part of the logic behind a July declaration endorsed by all Arab League states, calling on Hamas to disarm to open up a pathway for a Palestinian state. But, given that the Netanyahu government has rejected any negotiations towards a two-state solution, Hamas’s leadership in Gaza is not likely to disarm if it is seen purely as surrendering.

Israel’s intention to maintain “security control” in Gaza arguably represents an even greater obstacle to reaching a ceasefire. This is not a new position. Netanyahu articulated a plan for security control in February 2024, and has spoken openly of reoccupying Gaza since May 2025.

The government has also discussed plans to annex parts of Gaza, and continues to explore options for “resettling” Gazans to third countries – a move that would amount to forcible transfer under international law. And as the military moves forward this week with plans to retake Gaza City, all signs are pointing to a long-term or permanent Israeli presence inside Gaza.

Israeli opposition, Hamas division

These moves are happening against the backdrop of growing public war-weariness in Israel, where polls show more than 70% of Israelis supporting a negotiated end to the war to free the hostages. Furthermore, many view the plans to retake Gaza City as both endangering the remaining hostages in the short term and creating new security problems for Israel in the long term, as well as keeping thousands of reservists deployed.

In addition to this past week’s protests, a group of more than 600 Israeli security and intelligence officials wrote a letter earlier this month stating that Hamas no longer poses a strategic threat to Israel, and calling for an end to the war. Notably, the letter was sent to the US president, Donald Trump, whom most officials I spoke with agreed is the only person with the leverage to nudge Netanyahu towards a ceasefire.

Identifying external leverage for Hamas is equally difficult. There have long been internal rifts within Hamas, especially between the so-called pragmatists and ideologues. These internal divisions have multiplied over the course of the war as the group struggles to maintain a coherent vision amid the Israeli assassinations of most of its leadership and the weakening of its regional backers, Iran and Hezbollah.

As such, even when Qatari and Egyptian mediators manage to extract concessions from Hamas negotiators, they are often rebuffed by leaders and operatives in Gaza, where the group views mere survival as a form of victory. Indeed, even though Hamas’s military capabilities have been largely depleted, the group retains the capacity to sustain a long campaign of guerrilla warfare.

As both Netanyahu and Hamas prolong the war for their own survival, they appear to be locked in a mutually destructive cycle. But it’s Gaza’s civilians and the Israeli hostages who continue to bear the consequences.

The Conversation

Julie M. Norman does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

ref. No end to the violence as Israel launches its assault on Gaza City – https://theconversation.com/no-end-to-the-violence-as-israel-launches-its-assault-on-gaza-city-263463

How the racist study of skulls gripped Victorian Britain’s scientists

Source: The Conversation – UK – By Elise Smith, Associate Professor in the History of Medicine, University of Warwick

Illustration of a skull, viewed from the left side, showing the principal craniometric points. From Gerrish’s Text-book of Anatomy (1902). Frederick Henry Gerrish (1845-1920), public domain, via Wikimedia Commons

The recent publication of the University of Edinburgh’s Review of Race and History has drawn attention to its “skull room”: a collection of 1,500 human craniums procured for study in the 19th century.

Craniometry, the study of skull measurements, was widely taught in medical schools across Britain, Europe, and the United States in the 19th and early 20th centuries.

Today, the harmful and racist foundations of craniometry have been discredited. It’s long been proven that the size and shape of the head have no bearing on mental and behavioural traits in either individuals or groups.

In the 19th and early 20th century, however, thousands of skulls were amassed to enable research and instruction in scientific racism. Edinburgh’s skull room is by no means unique.

Unlike phrenology, a popular theory which linked personality traits to bumps on the head, craniometry enjoyed widespread scientific support in the 19th century because it revolved around data collection and statistics.

Craniometrists measured skulls and averaged the results for different population groups. This data was used to classify people into races based on the size and shape of the head. Craniometrical evidence was used to explain why some peoples were supposedly more civilised and evolved than others.

The vast accumulation of data drawn from skulls appealed to Victorian scientists who believed in the objectivity of numbers. It also helped to validate racial prejudice by suggesting that differences among peoples were innate and biologically determined.

Medical history

The study of skulls was central to the development of 19th-century anthropology. But before anthropology was taught at British universities, markers of supposed racial difference were studied by anatomists skilled in identifying minute differences in skeletons. The study of skulls entered the university curriculum through medical schools, and particularly through anatomy departments.

For example, when Alexander Macalister was appointed as professor of anatomy at Cambridge in 1884, some of his first lectures were on “The Race Types of the Human Skull.”

Macalister’s annual report for 1892 in the Cambridge University Reporter describes how he had increased Cambridge’s cranial holdings from 55 to 1,402 specimens. In 1899, he reported the donation of more than 1,000 ancient Egyptian craniums from the archaeologist Flinders Petrie. Much of Macalister’s skull collection remains housed in the university’s Duckworth Laboratory, which was established in 1945.

As the prestige of craniometrical research increased, institutions had to compete for cranial collections as they went on the market. Statistical accuracy depended on vast series of craniums being measured to produce representative “types”. This created an increased demand for human remains.

In 1880, the Royal College of Surgeons purchased 1,539 skulls from the private collection of Joseph Barnard Davis. This was added to their existing cache of 1,018 craniums to create Britain’s largest craniological collection. This collection was largely destroyed in 1941 when the college building was bombed during world war two. The remaining skulls are no longer held by the Royal College of Surgeons.

Oxford’s University Museum of Natural History included rows of crania in their anatomical displays in the 19th century, as did the University of Manchester’s medical school (the medical school is no longer on the same site). This investment in skulls ensured that racial researchers had enough material to study and use in their teaching.

Catalogues kept by universities in the 19th and early 20th centuries reveal not only the size of their skull collections, but also the origin of individual specimens.

Historical trauma

Some medical schools, such as Edinburgh’s, repurposed skulls procured by phrenological societies earlier in the century to enhance their holdings. Others, including Oxford’s, made use of skulls unearthed by archaeologists to conduct racial research into the country’s past. This research attempted to trace the movements of Celts, Normans, Saxons, and Scandinavians across the British Isles.

Yet because craniologists wanted to capture the full extent of racial variation, skulls from abroad were especially prized. Medical graduates of British universities posted to the colonies sent foreign bones to their old professors.

In research for my forthcoming book on skull collections, I’ve found that Cambridge’s cranial register includes a skull sent from a former student stationed in India. He had plucked it from a cremation site in Bombay despite the outrage of gathered mourners. Brazen grave-robbing and colonial violence were central to the international network that furnished British universities’ skull rooms.

The racist ideology that spurred the collection of skulls 150 years ago has been completely discredited. However, some anthropologists believe these bones may still shed light on human origins, relations and migrations.

Yet ethical considerations now shape institutional policies towards human remains just as much as scientific ones. The Pitt Rivers Museum in Oxford took its infamous “shrunken heads” off display in 2020.

Increasingly, universities and museums have confronted the historic injustices and inter-generational trauma perpetuated by their retention of human remains. Since the 1970s, Indigenous groups from around the world have launched campaigns to repatriate their ancestors’ bones. Research institutions have become increasingly responsive to these requests.

In London, the Museum of the Royal College of Surgeons no longer displays the skeleton of Charles Byrne, the so-called “Irish Giant”. Before he died in 1783, Byrne had explicitly refused consent for his remains to be dissected and mounted.

The skulls in British universities are a testament to a vast theft of human remains from almost every territory on earth. Yet they have the potential to become powerful symbols of reconciliation if their discriminatory histories are acknowledged and remedied through their return.

A spokesperson for the Duckworth Laboratory, University of Cambridge, said:

“We, like many institutions in the UK, are dealing with the legacies and past unethical practice in assembling the collections in our care. The Duckworth Collection and the Department of Archaeology are dedicated to fostering an open dialogue and building robust relationships with traditional communities and other stakeholders. This commitment is seen as an integral part of a continuous, reciprocal exchange of knowledge, perspectives, and cultural values. The aim is not only to address past inequities but also to enrich contemporary academic and cultural understanding through a respectful and equal partnership. In this vein, the Duckworth Collection is actively expanding its work with archival documentation and improving our records and database. In essence, the Duckworth Laboratory’s approach to repatriation and community engagement is marked by a commitment to openness, inclusivity, and a recognition of the need for an ongoing dialogue.”

The Conversation

Elise Smith does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

ref. How the racist study of skulls gripped Victorian Britain’s scientists – https://theconversation.com/how-the-racist-study-of-skulls-gripped-victorian-britains-scientists-262280