Air traffic controller shortages in Newark and other airports partly reflect long, intense training − but university-based training programs are becoming part of the solution

Source: The Conversation – USA (2) – By Melanie Dickman, Lecturer in Aviation Studies, The Ohio State University

Air traffic controllers observe a plane taking off from San Francisco International Airport in 2017. AP Photo/Jeff Chiu

Air traffic controllers have been in the news a lot lately.

A spate of airplane crashes and near misses has highlighted the ongoing shortage of air traffic workers, leading more Americans to question the safety of air travel.

The shortage, as well as aging computer systems, has also led to massive flight disruptions at airports across the country, particularly at Newark Liberty International Airport. The staffing shortage is also likely at the center of an investigation of a deadly crash between a commercial plane and an Army helicopter over Washington, D.C., in January 2025.

One reason for the air traffic controller shortage relates to the demands of the job: The training to become a controller is extremely intense, and the Federal Aviation Administration wants only highly qualified personnel filling those seats. That standard has made it difficult for what has long been the sole training center in the U.S., located in Oklahoma City, to turn out enough qualified graduates each year.

As scholars who study and teach tomorrow’s aviation professionals, we are working to be part of the solution. Our program at Ohio State University is applying to join over two dozen other schools in an effort to train air traffic controllers and help alleviate the shortage.

Air traffic controller school

Air traffic control training today – overseen by the Federal Aviation Administration – remains as intense as it’s ever been.

In fact, about 30% of students fail to make it from their first day of training at the FAA Academy in Oklahoma City to the status of a certified professional air traffic controller. The academy currently trains the majority of the air traffic controllers in the U.S.

Before someone is accepted into the training program, they must meet several qualifications. That includes being a U.S. citizen under the age of 31 and speaking English clearly enough to be understood over the radio. The age cutoff is so low because controllers currently face a mandatory retirement age of 56 – with some exceptions – and the FAA wants them to work at least 25 years in the job.

They must also pass a medical exam and security investigation. And they must pass the air traffic controller specialists skills assessment battery, which measures an applicant’s spatial awareness and decision-making abilities.

Candidates, additionally, must have three years of general work experience, or a combination of postsecondary education and work experience totaling at least three years.

This alone is no easy feat. Fewer than 10% of applicants meet those initial requirements and are accepted into training.

An air traffic controller monitors a runway in the tower at John F. Kennedy International Airport in New York.
AP Photo/Seth Wenig

Intense training

Once applicants meet the initial qualifications, they begin a strenuous training process.

This begins with several weeks of classroom instruction and several months of simulator training. There are several types of simulators, and a student is assigned to a simulator based on the type of facility for which they will be hired – which depends on a trainee’s preference and where controllers are needed.

There are two main types of air traffic facilities: control towers and radar facilities. Anyone who has flown on a plane has likely seen a control tower near the runways, with 360 degrees of tall glass windows to monitor the skies nearby. Controllers there mainly look outside to direct aircraft but also use radar to monitor the airspace and assist aircraft in taking off and landing safely.

Radar facilities, on the other hand, monitor aircraft solely through the use of information depicted on a screen. This includes aircraft flying just outside the vicinity of a major airport or when they’re at higher altitudes and crisscrossing the skies above the U.S. The controllers ensure they don’t fly too close to one another as they follow their flight paths between airports.

If the candidates make it through the first stage, which takes about six months and requires extensive testing to meet standards, they will be sent to their respective facilities.

Once there, they again go to the classroom, learning the details of the airspace they will be working in. There are more assessments and chances to “wash out” and have to leave the program.

Finally, the candidates are paired with an experienced controller who conducts on-the-job training to control real aircraft. This process may take an additional year or more, depending on the complexity of the airspace and the volume of aircraft traffic at the site.

Two control towers watch over Newark Liberty International Airport, where a shortage of air traffic controllers has led to blackouts and other problems lately.
AP Photo/Seth Wenig

Increasing the employment pipeline

But no matter how good the training is, if there aren’t enough graduates, that’s a problem for managing the increasingly crowded skies.

The FAA is currently facing a deficit of about 3,000 controllers, and in May 2025 it unveiled a plan to increase hiring and boost retention. In addition, Congress is mulling spending billions of dollars to update the FAA’s aging systems and hire more air traffic controllers.

Other plans include paying retention bonuses and allowing more controllers to work beyond the age of 56. That retirement age was put in place in the 1970s on the assumption that cognition for most people begins to decline around then, although research shows that age alone is not necessarily a predictor of cognitive abilities.

But we believe that aviation programs and universities can play an important role in fixing the shortage by providing FAA Academy-level training.

Currently, 32 universities, including the Florida Institute of Technology and Arizona State University, partner with the FAA in its collegiate training initiative to provide basic air traffic control training, which gives graduates automatic entry into the FAA Academy and allows them to skip five weeks of coursework.

The institution where we work, Ohio State University, is currently working on becoming the 33rd this summer and plans to offer an undergraduate major in aviation with specialization in air traffic control.

This helps, but an enhanced version of this program, announced in October 2024, allows graduates of a select few of those universities to skip the FAA Academy altogether and go straight to a control tower or radar facility once they’ve passed all the extensive tests. These schools must match or exceed the rigor of the FAA Academy’s own training.

At the end of the program, students are required to pass an evaluation by an FAA-approved evaluator to ensure that the student graduating from the program meets the same standards as all FAA Academy graduates and is prepared to go to their assigned facility for further training. So far, five schools, including the University of North Dakota, have joined this program and are currently training air traffic controllers. We intend to join this group in the near future.

Allowing colleges and universities to start the training process while students are still in school should accelerate the pace at which new controllers enter the workforce, alleviate the shortage and make the skies over the U.S. as safe as they can be.

The Conversation

Melanie Dickman is a member at large of the Air Traffic Controllers Association.

Brian Strzempkowski does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

ref. Air traffic controller shortages in Newark and other airports partly reflect long, intense training − but university-based training programs are becoming part of the solution – https://theconversation.com/air-traffic-controller-shortages-in-newark-and-other-airports-partly-reflect-long-intense-training-but-university-based-training-programs-are-becoming-part-of-the-solution-249715

Texas’ annual reading test adjusted its difficulty every year, masking whether students are improving

Source: The Conversation – USA (2) – By Jeanne Sinclair, Assistant Professor, Faculty of Education, Memorial University of Newfoundland

Millions of Americans take high-stakes exams every year. Caiaimage/Chris Ryan/iStock via Getty Images

Texas children’s performance on an annual reading test was basically flat from 2012 to 2021, even as the state spent billions of additional dollars on K-12 education.

I recently did a peer-reviewed deep dive into the test design documentation to figure out why the reported results weren’t showing improvement. I found the flat scores were at least in part by design. According to policies buried in the documentation, the agency administering the tests adjusted their difficulty level every year. As a result, roughly the same share of students failed the test over that decade regardless of how objectively better they performed relative to previous years.

From 2008 to 2014, I was a bilingual teacher in Texas. Most of my students’ families hailed from Mexico and Central America and were learning English as a new language. I loved seeing my students’ progress.

Yet, no matter how much they learned, many failed the end-of-year tests in reading, writing and math. My hunch was that these tests were unfair, but I could not explain why. This, among other things, prompted me to pursue a Ph.D. in education to better understand large-scale educational assessment.

Ten years later, in 2024, I completed a detailed exploration of Texas’ exam, currently known as the State of Texas Assessments of Academic Readiness, or STAAR. I found an unexpected trend: The share of students who correctly answered each test question was extraordinarily steady across years. Where we would expect to see fluctuation from year to year, performance instead appears artificially flat.

The STAAR’s technical documents suggest that the test is designed much like a norm-referenced test – that is, assessing students relative to their peers, rather than whether they meet a fixed standard. In other words, a norm-referenced test cannot tell us if students meet key, fixed criteria or grade-level standards set by the state.

In addition, norm-referenced tests are designed so that a certain share of students always fail, because success is gauged by one’s position on the “bell curve” in relation to other students. Following this logic, STAAR developers use practices like omitting easier questions and adjusting scores to cancel out gains due to better teaching.
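
To see how this plays out, consider a toy simulation – my own illustration, not the STAAR’s actual scaling procedure. When the passing cutoff is pegged to a percentile of each year’s scores, the share of students who fail stays fixed even if every student genuinely improves:

```python
import numpy as np

rng = np.random.default_rng(0)

def pass_rates(raw_scores, fixed_cutoff=60.0, fail_share=0.25):
    """Pass rate under a fixed criterion vs. a norm-referenced cutoff
    pegged to a percentile of that year's own scores."""
    criterion_pass = np.mean(raw_scores >= fixed_cutoff)
    norm_cutoff = np.quantile(raw_scores, fail_share)  # bottom 25% fail by design
    norm_pass = np.mean(raw_scores >= norm_cutoff)
    return criterion_pass, norm_pass

# Year 2: every student scores 5 points higher than in Year 1.
year1 = rng.normal(65, 10, 10_000)
year2 = year1 + 5

for label, scores in [("Year 1", year1), ("Year 2", year2)]:
    crit, norm = pass_rates(scores)
    print(f"{label}: fixed-cutoff pass rate {crit:.0%}, "
          f"norm-referenced pass rate {norm:.0%}")
```

The fixed-cutoff pass rate jumps when students genuinely improve; the norm-referenced rate stays pinned near 75% by construction.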

Ultimately, the STAAR tests over this time frame – taken by students every year from grade 3 to grade 8 in language arts and math, and less frequently in science and social studies – were not designed to show improvement. Since the test seems designed to keep scores flat, it’s impossible to know for sure if a lack of expected learning gains following big increases in per-student spending was because the extra funds failed to improve teaching and learning, or simply because the test hid the improvements.

Why it matters

Ever since the federal education policy known as No Child Left Behind went into effect in 2002 and tied students’ test performance to rewards and sanctions for schools, achievement testing has been a primary driver of public education in the United States.

Texas’ educational accountability system has been in place since 1980, and it is well known in the state that the stakes and difficulty of Texas’ academic readiness tests increase with each new version, which typically comes out every five to 10 years. What the Texas public may not know is that the tests have been adjusted each and every year – at the expense of really knowing who should “pass” or “fail.”

The test’s design affects not just students but also schools and communities. High-stakes test scores determine school resources, the state’s takeover of school districts and accreditation of teacher education programs. Home values are even driven by local schools’ performance on high-stakes tests.

Students who are marginalized by racism, poverty or language have historically tended to underperform on standardized tests. I believe STAAR’s design makes this problem worse.

On May 28, 2025, the Texas Senate passed a bill that would eliminate the STAAR test and replace it with a different, shorter test or a norm-referenced test. As best as I can tell, this wouldn’t address the problems I uncovered in my research.

What still isn’t known

I plan to investigate whether other states or the federal government use similarly designed tests to evaluate students.

My deep dive into Texas’ test focused on STAAR before its 2022 redevelopment. The latest iteration has changed the test format and question types, but there appears to be little change to the way the test is scored. Without substantive revisions to the scoring calculations “under the hood” of the STAAR test, it is likely Texas will continue to see flat performance.

This article was updated on May 31, 2025, to clarify some language and the type of data used in the chart, replace a link and add a comment from the Texas Education Agency.

The Texas Education Agency responded to a pre-publication request for comment after the piece was published. A spokesman disputed several of the scholar’s research conclusions, including the finding that the test behaved like a norm-referenced one. However, the scholar stands by them.

The Research Brief is a short take on interesting academic work.

The Conversation

Jeanne Sinclair receives funding from the Social Science and Humanities Research Council (SSHRC) of Canada.

ref. Texas’ annual reading test adjusted its difficulty every year, masking whether students are improving – https://theconversation.com/texas-annual-reading-test-adjusted-its-difficulty-every-year-masking-whether-students-are-improving-244159

Our trans health study was terminated by the government – the effects of abrupt NIH grant cuts ripple across science and society

Source: The Conversation – USA (2) – By Jae A. Puckett, Associate Professor of Psychology, Michigan State University

Funding cuts to trans health research are part of the Trump administration’s broader efforts to medically and legally restrict trans rights. AP Photo/Lindsey Wasson

Given the Trump administration’s systematic attempts to medically and legally disenfranchise trans people, and its abrupt termination of grants focused on LGBTQ+ health, we can’t say that the notice of termination we received regarding our federally funded research on transgender and nonbinary people’s health was unexpected.

As researchers who study the experiences of trans and nonbinary people, we have collectively dedicated nearly 50 years of our scientific careers to developing ways to address the health disparities negatively affecting these communities. The National Institutes of Health had placed a call for projects on this topic, and we had successfully applied for their support for our four-year study on resilience in trans communities.

However, our project on trans health became one of the hundreds of grants that have been terminated on ideological grounds. The termination notice stated that the grant no longer fit agency priorities and claimed that this work was not based on scientific research.

Termination notice sent to the authors from the National Institutes of Health.
Jae A. Puckett and Paz Galupo, CC BY-ND

These grant terminations undermine decades of science on gender diversity by dismissing research findings and purging data. During Trump’s current term, the NIH’s Sexual and Gender Minority Research Office was dismantled, references to LGBTQ+ people were removed from health-related websites, and datasets were removed from public access.

The effects of ending research on trans health ripple throughout the scientific community, the communities served by this work and the U.S. economy.

Studying resilience

Research focused on the mental health of trans and nonbinary people has grown substantially in recent years. Over time, this work has expanded beyond understanding the hardships these communities face to also study their resilience and positive life experiences.

Resilience is often understood as an ability to bounce back from challenges. For trans and nonbinary people experiencing gender-based stigma and discrimination, resilience can take several forms. This might look like simply continuing to survive in a transphobic climate, or it might take the form of being a role model for other trans and nonbinary people.

As a result of gender-based stigma and discrimination, trans and nonbinary people experience a range of health disparities, from elevated rates of psychological distress to heightened risk for chronic health conditions and poor physical health. In the face of these challenges and growing anti-trans legislation in the U.S., we believe that studying resilience in these communities can provide insights into how to offset the harms of these stresses.

Studies show anti-trans legislation is harming the mental health of LGBTQ+ youth.

With the support of the NIH, we began our work in earnest in 2022. The project was built on many years of research from our teams preceding the grant. From the beginning, we collaborated with trans and nonbinary community members to ensure our research would be attuned to the needs of the community.

At the time our grant was terminated, we were nearing completion of Year 3 of our four-year project. We had collected data from over 600 trans and nonbinary participants across the U.S. and started to follow their progress over time. We had developed a new way to measure resilience among trans and nonbinary people and were about to publish a second measure specifically tailored to people of color.

The termination of our grant and others like it harms our immediate research team, the communities we worked with and the field more broadly.

Loss of scientific workforce

For many researchers in trans health, the losses from these cuts go beyond employment.

Our project had served as a training opportunity for the students and early career professionals involved in the study, providing them with the research experience and mentorship necessary to advance their careers. But with the termination of our funding, two full-time researchers and at least three students will lose their positions. The three lead scientists have lost parts of their salaries and dedicated research time.

These NIH cuts will likely result in the loss of much of the next generation of trans researchers and the contributions they would have made to science and society. Our team and other labs in similar situations will be less likely to work with graduate students due to a lack of available funding to pay and support them. This changes the landscape for future scientists, as it means there will be fewer opportunities for individuals interested in these areas of research to enter graduate training programs.

The Trump administration has directly penalized universities across the country for ‘ideological overreach.’
Zhu Ziyu/VCG via Getty Images

As universities struggle to address federal funding cuts, junior academics will be less likely to gain tenure, and faculty in grant-funded positions may lose their jobs. Universities may also become hesitant to hire people who work in these areas because their research has essentially been banned from federal funding options.

Loss of community trust

Trans and nonbinary people have often been studied under opportunistic and demeaning circumstances. This includes when researchers collect data for their own gains but return little to the communities they work with, or when they do research that perpetuates theories that pathologize those communities. As a result, many are often reluctant to participate in research.

To overcome this reluctance, we grounded our study on community input. We involved an advisory board composed of local trans and nonbinary community members who helped to inform how we conducted our study and measured our findings.

Our work on resilience has been inspired by feedback we received from previous research participants who said that “[trans people] matter even when not in pain.”

Abruptly terminating projects like these can break down trust between researchers and the populations they study.

Loss of scientific knowledge

Research that focuses on the strengths of trans and nonbinary communities is in its infancy. The termination of our grant has led to the loss of the insights our study would have provided on ways to improve health among trans and nonbinary people and future work that would have built off our findings. Resilience is a process that takes time to unfold, and we had not finished the longitudinal data collection in our study – nor will we have the protected time to publish and share other findings from this work.

Meanwhile, the Department of Health and Human Services released a May 2025 report stating that there is not enough evidence to support gender-affirming care for young people, contradicting decades of scientific research. Scientists, researchers and medical professional organizations have widely criticized the report as misrepresenting study findings, dismissing research showing benefits to gender-affirming care, and promoting misinformation rejected by major medical associations. Instead, the report recommends “exploratory therapy,” which experts have likened to discredited conversion therapy.

Transgender and nonbinary people continue to exist, regardless of legislation.
Kayla Bartkowski/Getty Images

Despite claims that there is insufficient research on gender-affirming care and more data is needed on the health of trans and nonbinary people, the government has chosen to divest from actual scientific research about trans and nonbinary people’s lives.

Loss of taxpayer dollars

The termination of our grant means we are no longer able to achieve the aims of the project, which depended on the collection and analysis of data over time. This wastes the three years of NIH funding already spent on the project.

Scientists and experts who participated in the review of our NIH grant proposal rated our project more highly than 96% of the projects we competed against. Even so, the government made the unscientific choice to override these decisions and terminate our work.

Millions of taxpayer dollars have already been invested in these grants to improve the health of not only trans and nonbinary people, but also American society as a whole. With the termination of these grants, few will get to see the benefits of this investment.

The Conversation

Jae A. Puckett has received funding from the National Institutes of Health.

Paz Galupo has received funding from the National Institutes of Health.

ref. Our trans health study was terminated by the government – the effects of abrupt NIH grant cuts ripple across science and society – https://theconversation.com/our-trans-health-study-was-terminated-by-the-government-the-effects-of-abrupt-nih-grant-cuts-ripple-across-science-and-society-254021

Observers of workplace mistreatment react as strongly as the victims − at times with a surprising amount of victim blaming

Source: The Conversation – USA (2) – By Jason Colquitt, Professor of Management, Mendoza College of Business, University of Notre Dame

Workplace mistreatment harms observers, too. AP Photo/Ross D. Franklin

Picture this: On your way out of the office, you notice a manager berating an employee. You assume the worker made some sort of mistake, but the manager’s behavior seems unprofessional. Later, as you’re preparing dinner, is the scene still weighing on you – or is it out of sight, out of mind?

If you think you’d still be bothered, you’re not alone. It turns out that simply observing mistreatment at work can have a surprisingly strong impact on people, even for those not directly involved. That’s according to new research led by Edwyna Hill, co-authored by Rachel Burgess, Manuela Priesemuth, Jefferson McClain and me, published in the Journal of Applied Psychology.

Using a method called meta-analysis – which takes results from many different studies and combines them to produce an overall set of findings – we reviewed the growing body of research on what management professors like me call “third-party perceptions of mistreatment.” In this context, “third parties” are people who observe mistreatment between a perpetrator and the victim, who are the first and second parties.
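
For readers unfamiliar with the method, here is a minimal sketch of the simplest, fixed-effect form of meta-analysis – inverse-variance weighting – using made-up numbers. Published meta-analyses typically rely on more sophisticated random-effects models and statistical corrections, so treat this only as an illustration of the core idea, not the method used in this study:

```python
import numpy as np

def fixed_effect_meta(effects, std_errors):
    """Pool per-study effect sizes using inverse-variance weights,
    the simplest (fixed-effect) form of meta-analysis."""
    effects = np.asarray(effects, dtype=float)
    weights = 1.0 / np.asarray(std_errors, dtype=float) ** 2  # precise studies count more
    pooled = np.sum(weights * effects) / np.sum(weights)
    pooled_se = np.sqrt(1.0 / np.sum(weights))
    return pooled, pooled_se

# Three hypothetical studies: (effect size, standard error) for each.
pooled, se = fixed_effect_meta([0.30, 0.45, 0.38], [0.10, 0.15, 0.08])
print(f"pooled effect = {pooled:.2f}, "
      f"95% CI = [{pooled - 1.96 * se:.2f}, {pooled + 1.96 * se:.2f}]")
```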

We looked at 158 studies published in 105 journal articles involving thousands of participants. Those studies explored a number of different forms of workplace mistreatment ranging from incivility to abusive supervision and sexual harassment. Some of those studies took place in actual workplaces, while others examined mistreatment in tightly controlled laboratory settings.

The results were striking: We found that observing a co-worker being mistreated on the job has significant effects on the observers’ emotions. In fact, we found that observers of mistreatment may be as affected by what happened as the people actually involved in the event.

These reactions fall along a spectrum – some helpful, others less so. On the encouraging side, we found that observers tend to judge perpetrators and feel empathy for victims. These reactions discourage mistreatment by creating a climate that favors the victim. On the other hand, we found that observers may also enjoy seeing their co-workers suffer – an emotion called “schadenfreude” – or blame the victim. These sorts of reactions damage team dynamics and discourage people from reporting mistreatment.

Why it matters

These findings matter because mistreatment in the workplace is disturbingly common – and even more frequently observed than experienced. One recent study found that 34% of employees have experienced workplace mistreatment firsthand, but 44% have observed it happening to someone else. In other words, nearly half of workers have likely seen a scenario like the one described at the start of this article.

Unfortunately, the human resources playbook on workplace mistreatment rarely takes third parties into account. Some investigation occurs, potentially resulting in some punishment for the perpetrator and some support for the victim. A more effective response to workplace mistreatment would recognize that the harm often extends beyond the victim – and that observers, too, may need support.

What still isn’t known

What’s needed now is a better understanding of the nuances involved in observing mistreatment. Why do some observers react with empathy, while others derive pleasure from the suffering of others? And why might observers feel empathy for the victim but still respond by judging or blaming them? Answering these questions is a crucial next step for researchers and leaders seeking to design more effective workplace policies.

The Research Brief is a short take on interesting academic work.

The Conversation

Jason Colquitt does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

ref. Observers of workplace mistreatment react as strongly as the victims − at times with a surprising amount of victim blaming – https://theconversation.com/observers-of-workplace-mistreatment-react-as-strongly-as-the-victims-at-times-with-a-surprising-amount-of-victim-blaming-255761

For Trump’s ‘no taxes on tips,’ the devil is in the details

Source: The Conversation – USA (2) – By Jay L. Zagorsky, Associate Professor Questrom School of Business, Boston University

President Donald Trump’s promise to eliminate taxes on tips may sound like a windfall for service workers — but the fine print in Congress’ latest tax bill tells a more complex story.

Right now, Republican lawmakers are advancing the “One Big Beautiful Bill Act” – a sprawling, 1,100-page proposal that aims to change everything from tax incentives for electric vehicles to health care. It also includes a proposal to end taxes on tips, which could potentially affect around 4 million American workers. The Senate recently passed its own version – the No Tax on Tips Act.

The idea started getting attention when Trump raised it during a 2024 campaign stop in Las Vegas, a place where tipping is woven into the economy. And the headlines and press releases sound great – especially if you’re a waiter, bartender or anyone else who depends on tips for a living. That may be why Democrats and Republicans alike broadly support the concept. However, like most of life, the devil is in the details.

I’m a business-school economist who has written about tipping, and I’ve looked closely at the language of the proposed laws. So, what exactly has Trump promised, and how does it measure up to what’s in the bills? Let’s start with his pledge.

The promise of money that’s ‘100% yours’

Back in January 2025, Trump said, “If you’re a restaurant worker, a server, a valet, a bellhop, a bartender, one of my caddies … your tips will be 100% yours.” That sounds like a boost in tipped workers’ income.

But when you look at the current situation, it becomes clear that the reality is far more complicated.

First, the new tax break only applies to tips the government knows about – and a lot of that income currently flies under the radar. Tipped workers who get cash tips are supposed to report them to the IRS via Form 4137 if their employer doesn’t report them. If a worker gets a cash tip today and doesn’t report it, they already get 100% of the money. No one really knows what percentage of tips go unreported, but an old IRS estimate pegs it at about 40%.

What’s more, the current tax code defines tips only as payments where the customer determines the tip amount. If a restaurant charges a fixed 18% service charge, or there’s an extra fee for room service, those aren’t tips in the government’s eyes. This means some tipped workers who think service charges are tips will overestimate the new rule’s impact on their finances.

How the new bills would affect tipped workers

The “Big Beautiful Bill” would create a new tax code section under “itemized deductions.” This area of the tax code already includes text that creates health savings accounts and gives students deductions for interest on their college loans.

What’s in the new section?

First, the bill specifies that this tax break applies just to “any cash tip.” The IRS classifies payments by credit card, debit card and even checks as “cash tips.” Unfortunately for workers in Las Vegas, noncash tips, like casino chips, aren’t part of the bill.

While the House bill limits the deduction to people earning less than US$160,000, the Senate bill caps the deduction at the first $25,000 of tips earned. Everything over that is taxed.

Second, the current House bill ends this special tax-free deal on Dec. 31, 2028. That means these special benefits would only last three years, unless Congress extends the law. The Senate bill does not include such a deadline.

Third, the exemption is only available to jobs that typically receive tips. The Treasury secretary is required to define the list of tipped occupations. If an occupation isn’t on the list, the law doesn’t apply.

I wonder how many occupations won’t make the list. For example, some camp counselors get tips at the end of the summer. But it’s unclear whether the Treasury Department will include these workers as a covered group, since counselors make up only a fraction of summer camp staff. Not making the list is a real problem.

And while the new proposal gives workers an income tax break, there’s nothing in either bill about skipping FICA payments on the tipped earnings. Workers are still required to contribute slightly more than 7% in Social Security and Medicare taxes on all tips they report, which won’t benefit them until retirement. This isn’t an oversight — the bill specifically says employees must furnish a valid Social Security number to get the tax benefits.
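
To make the fine print concrete, here is a rough back-of-the-envelope sketch. The $25,000 cap comes from the Senate bill described above; the 12% marginal income tax rate is an assumed figure for illustration, and 7.65% is the standard employee share of Social Security and Medicare taxes:

```python
def tip_tax_sketch(reported_tips, marginal_rate=0.12,
                   deduction_cap=25_000, fica_rate=0.0765):
    """Rough effect of a capped tip deduction on one worker's taxes.
    The cap mirrors the Senate bill; the tax rates are assumptions."""
    deduction = min(reported_tips, deduction_cap)   # tips above the cap stay taxed
    income_tax_saved = deduction * marginal_rate
    fica_still_owed = reported_tips * fica_rate     # neither bill waives FICA
    return income_tax_saved, fica_still_owed

saved, fica = tip_tax_sketch(30_000)
print(f"Income tax saved: ${saved:,.0f}; FICA still owed on tips: ${fica:,.0f}")
# -> Income tax saved: $3,000; FICA still owed on tips: $2,295
```

Under these assumptions, a worker reporting $30,000 in tips saves $3,000 in income tax but still owes about $2,295 in payroll taxes on those same tips.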

There are a few other ways the legislation might benefit workers less than it seems at first glance. Instituting no taxes on tips could mean tipped employees feel more pressure to split their tips with other employees, like busboys, chefs and hosts. After all, these untipped workers also contribute to the customer experience, and often at low wages.

And finally, many Americans are tired of tipping. Knowing that servers don’t have to pay taxes might lead some to cut back on tipping even more.

The specifics of any piece of legislation are subject to change until the moment Congress sends it to the president to be signed. However, as now written, I think the bills aren’t as generous to tipped workers as Trump made them sound on the campaign trail.

The Conversation

Jay L. Zagorsky does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

ref. For Trump’s ‘no taxes on tips,’ the devil is in the details – https://theconversation.com/for-trumps-no-taxes-on-tips-the-devil-is-in-the-details-258276

You’re probably richer than you think because of the safety net – but you’d have more of that hidden wealth if you lived in Norway

Source: The Conversation – USA (2) – By Robert Manduca, Assistant Professor of Sociology, University of Michigan

You may be wealthier than you realize. Deagreez/iStock via Getty Images Plus

How wealthy are you?

Like most people, you probably would do some math before answering this question. You would add up the money in your bank accounts, the value of your investments and any equity in a home you own, then subtract your debts, such as mortgages and car loans.

But many economists believe this approach, known as calculating your net worth, leaves out a big chunk of your wealth: the benefits you’ll get in the future from Social Security, if you live in the United States, or similar government benefits programs that help retirees pay their bills in other countries.

As a sociologist who studies income and wealth inequality, I wanted to figure out just how much government safety net programs are worth to their recipients, and whether they truly can substitute for private savings.

A $40 trillion trove

A team of researchers recently estimated that future Social Security payments amounted to more than US$40 trillion as of 2019 – about $123,000 per person in the U.S. That huge number, which is not adjusted for inflation, was nearly one-third of the $110 trillion of Americans’ collective net worth in that year.

In a recent peer-reviewed study, published in April 2025 in Socio-Economic Review, I found that even this expanded definition of wealth leaves some important things out: unemployment insurance, the child tax credit and other widely available benefits. People who have access to these programs don’t have to dip into their savings as much when unexpected costs come up.

Social Security is by far the largest of these programs. As of 2019, the typical worker nearing retirement had banked about $412,000 in future Social Security benefits, I found – nearly as much as the $472,000 in private retirement savings such workers had. This estimate doesn’t include Social Security benefits to orphans, widows or people with disabilities.

The value of Social Security retirement benefits varies according to workers’ income and work history, ranging from $271,000 for the poorest 10% of recipients to $669,000 for the richest 10%.
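
Where do figures like these come from? At bottom, they are present-value calculations: a stream of future payments is converted into a single lump sum today. The sketch below applies the standard annuity formula under purely illustrative assumptions – a flat $30,000 annual benefit, a 20-year retirement and a 3% discount rate – whereas studies like those cited above use far more detailed actuarial methods:

```python
def benefit_wealth(annual_benefit=30_000, years=20, discount_rate=0.03):
    """Present value of a level stream of future benefit payments,
    the annuity formula underlying 'Social Security wealth' estimates."""
    r = discount_rate
    return annual_benefit * (1 - (1 + r) ** -years) / r

print(f"${benefit_wealth():,.0f}")  # roughly $446,000 under these assumptions
```

That rough figure lands in the same ballpark as the study’s estimates for a typical worker, which is the intuition: steady future benefits add up to house-sized sums of hidden wealth.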

Benefits from smaller safety net programs can also add up. Because some programs differ by state, I analyzed California and Texas, the two largest states. In California, I calculated that the average 45-year-old worker can count on almost $12,000 in unemployment insurance over 26 weeks, while in Texas the same worker would be eligible for more than $15,000 over the same period.

Meanwhile, under current law, many families having a child in 2025 can expect to receive about $29,000 through the federal child tax credit over the course of that kid’s lifetime.

Texas doesn’t mandate paid family leave, but California requires that each parent receive eight weeks of their salary. That’s worth another $13,000 to a family earning $90,000 a year – the median in my study – and more if the parents have higher incomes.

Where there’s even more hidden wealth

These somewhat hidden sources of wealth are worth far more in many other countries, especially Scandinavian ones. Norway provides a useful contrast.

The typical Norwegian worker retires with more than $510,000 in public pension wealth, I calculated. The exact amount they collect will vary depending on what they’ve earned and how long they live, as is the case with Social Security. But, unlike in the U.S., if they get sick, Norwegians are eligible for up to a year of paid sick leave – worth about $57,000 to the median worker.

Norwegians can get unemployment insurance benefits for almost two years, amounting to $70,000 for the average worker, depending on their wages. And the combination of Norway’s child benefit and parental leave is worth between $60,000 and $80,000 from the time each child is born until they turn 18, depending on the parents’ exact income.

In the past few years, researchers have estimated the wealth value of public pensions – though not other government benefits – in several countries, including Australia, Austria, Germany, Poland and Switzerland, among others.

In many nations, this value rivals or exceeds that of all stocks, real estate and other private assets held by their residents combined.

Because so many people are eligible for Social Security or its equivalent public pension programs in other countries, there is also much less inequality in total retirement wealth than in standard measures of net worth.

Wealth vs. income

Wealth is much more unequally distributed than income just about everywhere. In the United States, for example, the richest 5% of the population has 32% of all income, but 70% of all wealth.

Wealth inequality has grown over time, and the Black-white wealth gap in the United States is particularly large. While typical Black families have incomes that are about 56% of what white families earn, they own only 18% as much wealth as the typical white family.

For these reasons, many politicians, scholars and activists have proposed ambitious policies to reduce inequality in private wealth, such as a wealth tax. Another idea gaining in popularity is to start issuing “baby bonds,” which give each newborn a prefunded savings account.

Wealth embedded in government benefits offers a complementary method of addressing wealth inequality. Even today, when Social Security and similar pension programs in other places are counted alongside private savings, inequality in retirement wealth is much lower than in privately held wealth alone.

Less flexible source of wealth

To be sure, the wealth you’re eventually due through Social Security and other government programs isn’t the same as the private assets you might own.

You can’t sell or borrow against your future Social Security benefits to meet an unexpected expense or make a down payment on a home. And if you die before reaching retirement age, you won’t receive any payments from the Social Security system yourself, although your spouse or heirs may be eligible for survivor benefits.

Also, government programs are not set in stone. Eligibility requirements can change, and benefit levels can be cut.

For instance, if the Social Security trust fund is depleted, retirees could see their benefits decline. But private wealth is also never guaranteed to last: Stock values can fluctuate wildly, and inflation erodes the value of any cash you’ve saved over time.

For these reasons, having a combination of private savings and government benefits offers the most promising way for everyone to prepare for their future. This can also help society address wealth inequality.

The Conversation

Robert Manduca has received funding from the Washington Center for Equitable Growth.

ref. You’re probably richer than you think because of the safety net – but you’d have more of that hidden wealth if you lived in Norway – https://theconversation.com/youre-probably-richer-than-you-think-because-of-the-safety-net-but-youd-have-more-of-that-hidden-wealth-if-you-lived-in-norway-255833

Federal R&D funding boosts productivity for the whole economy − making big cuts to such government spending unwise

Source: The Conversation – USA (2) – By Andrew Fieldhouse, Visiting Assistant Professor of Finance, Texas A&M University

Research can make everyone better off. Emilija Manevska/Moment via Getty Images

Large cuts to government-funded research and development can endanger American innovation – and the vital productivity gains it supports.

The Trump administration has already canceled at least US$1.8 billion in research grants previously awarded by the National Institutes of Health, which supports biomedical and health research. Its preliminary budget request for the 2026 fiscal year proposed slashing federal funding for scientific and health research, cutting the NIH budget by another $18 billion – nearly a 40% reduction. The National Science Foundation, which funds much of the basic scientific research conducted at universities, would see its budget slashed by $5 billion – cutting it by more than half.

Research and development spending might strike you as an unnecessary expense for the government. Perhaps you see it as something universities or private companies should instead be paying for themselves. But as research I’ve conducted shows, if the government were to abandon its long-standing practice of investing in R&D, it would significantly slow the pace of U.S. innovation and economic growth.

I’m an economist at Texas A&M University. For the past five years, I’ve been studying the long-term economic benefits of government-funded R&D with Karel Mertens, an economist at the Federal Reserve Bank of Dallas. We have found that government R&D spending on everything from the Apollo space program to the Human Genome Project has fueled innovation. We also found that federal R&D spending has played a significant role in boosting U.S. productivity and spurring economic growth over the past 75 years.

Measuring productivity

Productivity rises when economic growth is caused by technological progress and know-how, rather than workers putting in more hours or employers using more equipment and machinery. Economists believe that higher productivity fuels economic growth and raises living standards over the long run.

U.S. productivity growth fell by half, from an average of roughly 2% a year in the 1950s and 1960s to about 1%, starting in the early 1970s. This deceleration eerily coincides with a big decline in government R&D spending, which peaked at over 1.8% of gross domestic product in the mid-1960s. Government R&D spending has declined since then and has fallen by half – to below 0.9% of GDP – today.

Government R&D spending encompasses all innovative work the government directly pays for, regardless of who does it. Private companies and universities conduct a lot of this work, as do national labs and federal agencies, like the NIH.

Correlation is not causation. But in a Dallas Fed working paper released in November 2024, my co-author and I identified a strong causal link between government R&D spending and U.S. productivity growth. We estimated that government R&D spending consistently accounted for more than 20% of all U.S. productivity growth since World War II. And a decline in that spending after the 1960s can account for nearly one-fourth of the deceleration in productivity since then.

These significant productivity gains came from R&D investments by federal agencies that are not focused on national defense. Examples include the NIH’s support for biomedical research, the Department of Energy’s funding for physics and energy research, and NASA’s spending on aeronautics and space exploration technologies.

Not all productivity growth is driven by government R&D. Economists think public investment in physical infrastructure, such as construction of the interstate highway system starting in the Eisenhower administration, also spurred productivity growth. And U.S. productivity growth briefly accelerated during the information technology boom of the late 1990s and early 2000s, which we do not attribute to government R&D investment.

More R than D

We have found that government R&D investment is more effective than private R&D spending at driving productivity, likely because the private sector tends to spend much more on the development side of R&D, while the public sector tends to emphasize research.

Economists believe the private sector will naturally underinvest in more fundamental research because it is harder to patent and profit from this work. We think our higher estimated returns on nondefense R&D reflect greater productivity benefits from fundamental research, which generates more widely shared knowledge, than from private sector spending on development.

Like the private sector, the Department of Defense spends much more on development – of weapons and military technology – than on fundamental research. We found only inconclusive evidence on the returns on military R&D.

R&D work funded by the Defense Department also tends to initially be classified and kept secret from geopolitical rivals, such as the Manhattan Project that developed the atomic bomb. As a result, gains for the whole economy from that source of innovation could take longer to materialize than the 15-year time frame we have studied.

Research takes not just time but money, and the government is now cutting that funding.
Nitat Termmee/Moment via Getty Images

Role of Congress

The high returns on nondefense R&D that we estimated suggest that Congress has historically underinvested in these areas. For instance, the productivity gains from nondefense R&D are at least 10 times higher than those from government investments in highways, bridges and other kinds of physical infrastructure. The government has also invested far more in physical infrastructure than R&D over the past 75 years. Increasing R&D investment would take advantage of these higher returns and gradually reduce them because of diminishing marginal returns to additional investment.

So why is the government not spending substantially more on R&D?

One argument sometimes heard against federal R&D spending is that it displaces, or “crowds out,” R&D spending the private sector would otherwise undertake. For instance, the administration’s budget request proposed reducing or eliminating NASA space technology programs it deemed “better suited to private sector research and development.”

But my colleague and I have found that government spending on R&D complements private investment. An additional dollar of government nondefense R&D spending causes the private sector to increase its R&D spending by an additional 20 cents. So we expect budget cuts to the NIH, NSF and NASA to actually reduce R&D spending by companies, which is also bad for economic growth.

Federal R&D spending is also often on the chopping block whenever Congress focuses on deficit reduction. In part, that likely reflects the gradual nature of the economic benefits from government-funded R&D, which are at odds with the country’s four-year electoral cycles.

Similarly, the benefits from NIH spending on biomedical research are usually less visible than government spending on Medicare or Medicaid, which are health insurance programs for those 65 years and older and those with low incomes or disabilities. But Medicare and Medicaid help Americans buy prescription drugs and medical devices that were invented with the help of NIH-funded research.

Even if the benefits of government R&D are slow to materialize or are harder to see than those from other government programs, our research suggests that the U.S. economy will be less innovative and productive – and Americans will be worse off for it – if Congress agrees to deep cuts to science and research funding.

The views expressed in the Dallas Fed working paper are the views of the authors only and do not necessarily reflect the views of the Federal Reserve Bank of Dallas or the Federal Reserve System.

The Conversation

Andrew Fieldhouse does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

ref. Federal R&D funding boosts productivity for the whole economy − making big cuts to such government spending unwise – https://theconversation.com/federal-randd-funding-boosts-productivity-for-the-whole-economy-making-big-cuts-to-such-government-spending-unwise-255823

The biggest barrier to AI adoption in the business world isn’t tech – it’s user confidence

Source: The Conversation – USA (2) – By Greg Edwards, Adjunct Lecturer, Missouri University of Science and Technology

Believe in your own decision-making. Feodora Chiosea/Getty Images Plus

The Little Engine That Could wasn’t the most powerful train, but she believed in herself. The story goes that, as she set off to climb a steep mountain, she repeated: “I think I can, I think I can.”

That simple phrase from a children’s story still holds a lesson for today’s business world – especially when it comes to artificial intelligence.

AI is no longer a distant promise out of science fiction. It’s here and already beginning to transform industries. But despite the hundreds of billions of dollars spent on developing AI models and platforms, adoption remains slow for many employees, with a recent Pew Research Center survey finding that 63% of U.S. workers use AI minimally or not at all in their jobs.

The reason? It can often come down to what researchers call technological self-efficacy, or, put simply, a person’s belief in their ability to use technology effectively.

In my research on this topic, I found that many people who avoid using new technology aren’t truly against it – instead, they just don’t feel equipped to use it in their specific jobs. So rather than risk getting it wrong, they choose to keep their distance.

And that’s where many organizations derail. They focus on building the engine, but don’t fully fuel the confidence that workers need to get it moving.

What self-efficacy has to do with AI

Albert Bandura, the psychologist who developed the theory of self-efficacy, noted that skill alone doesn’t determine people’s behavior. What matters more is a person’s belief in their ability to use that skill effectively.

In my study of teachers in 1:1 technology environments – classrooms where each student is equipped with a digital device like a laptop or tablet – this was clear. I found that even teachers with access to powerful digital tools don’t always feel confident using them. And when they lack confidence, they may avoid the technology or use it in limited, superficial ways.

The same holds true in today’s AI-equipped workplace. Leaders may be quick to roll out new tools and want fast results. But employees may hesitate, wondering how it applies to their roles, whether they’ll use it correctly, or if they’ll appear less competent – or even unethical – for relying on it.

Beneath that hesitation may also be the all-too-familiar fear of one day being replaced by technology.

Going back to train analogies, think of John Henry, the 19th-century folk hero. As the story goes, Henry was a railroad worker who was famous for his strength. When a steam-powered machine threatened to replace him, he raced it – and won. But the victory came at a cost: He collapsed and died shortly afterward.

Henry’s story is a lesson in how resisting new technology through sheer willpower can be self-defeating. Rather than leaving some employees feeling like they have to outmuscle or outperform AI, organizations should invest in helping them understand how to work with it – so they don’t feel like they need to work against it.

Relevant and role-specific training

Many organizations do offer training related to using AI. But these programs are often too broad, covering topics like how to log into different programs, what the interfaces look like, or what AI “generally” can do.

In 2025, with the number of AI tools at our disposal, ranging from conversational chatbots and content creation platforms to advanced data analytics and workflow automation programs, that’s not enough.

In my study, participants consistently said they benefited most from training that was “district-specific,” meaning tailored to the devices, software and situations they faced daily with their specific subject areas and grade levels.

Translation for the corporate world? Training needs to be job-specific and user-centered – not one-size-fits-all.

The generational divide

It’s not exactly shocking: Younger workers tend to feel more confident using technology than older ones. Gen Z and millennials are digital natives – they’ve grown up with digital technologies as part of their daily lives.

Gen X and boomers, on the other hand, often had to adapt to using digital technologies mid-career. As a result, they may feel less capable and be more likely to dismiss AI and its possibilities. And if their few forays into AI are frustrating or lead to mistakes, that first impression is likely to stick.

When generative AI tools were first launched commercially, they were more likely to hallucinate and confidently spit out incorrect information. Remember when Google demoed its Bard AI tool in 2023 and its factual error led to its parent company losing US$100 billion in market value? Or when an attorney made headlines for citing fabricated cases courtesy of ChatGPT?

Moments like those likely reinforced skepticism – especially among workers already unsure about AI’s reliability. But the technology has already come a long way in a relatively short period of time.

The solution for workers who are slower to embrace AI isn’t to push them harder, but to coach them in ways that account for their backgrounds.

What effective AI training looks like

Bandura identified four key sources that shape a person’s belief in their ability to succeed:

  1. Mastery experiences, or personal success

  2. Vicarious experiences, or seeing others in similar positions succeed

  3. Verbal persuasion, or positive feedback

  4. Physiological and emotional states, or someone’s mood, energy, anxiety and so forth.

In my research on educators, I saw how these concepts made a difference, and the same approach can apply to AI in the corporate world – or in virtually any environment in which a person needs to build self-efficacy.

In the workplace, this could be accomplished with cohort-based training sessions that include feedback loops – regular communication between leaders and employees about growth, improvement and more – along with content that can be customized to employees’ needs and roles. Organizations can also experiment with engaging formats like PricewaterhouseCoopers’ prompting parties, which provide low-stakes opportunities for employees to build confidence and try new AI programs.

In “Pokémon Go,” it’s possible to level up by stacking lots of small, low-stakes wins and gaining experience points along the way. Workplaces could approach AI training the same way, giving employees frequent, simple opportunities tied to their actual work to steadily build confidence and skill.

The curriculum doesn’t have to be revolutionary. It just needs to follow these principles and not fall victim to death by PowerPoint, or end up being generic training that isn’t applicable to specific roles in the workplace.

As organizations continue to invest heavily in developing and accessing AI technologies, it’s also essential that they invest in the people who will use them. AI might change what the workforce looks like, but there’s still going to be a workforce. And when people are well trained, AI can make both them and the outfits they work for significantly more effective.

The Conversation

Greg Edwards does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

ref. The biggest barrier to AI adoption in the business world isn’t tech – it’s user confidence – https://theconversation.com/the-biggest-barrier-to-ai-adoption-in-the-business-world-isnt-tech-its-user-confidence-257308

Is AI sparking a cognitive revolution that will lead to mediocrity and conformity?

Source: The Conversation – USA (2) – By Wolfgang Messner, Clinical Professor of International Business, University of South Carolina

The Industrial Revolution mechanized production. Today, there’s a similar risk with the automation of thought. kutaytanir/E+ via Getty Images

Artificial intelligence began as a quest to simulate the human brain.

Is it now in the process of transforming the human brain’s role in daily life?

The Industrial Revolution diminished the need for manual labor. As someone who researches the application of AI in international business, I can’t help but wonder whether it is spurring a cognitive revolution, obviating the need for certain cognitive processes as it reshapes how students, workers and artists write, design and decide.

Graphic designers use AI to quickly create a slate of potential logos for their clients. Marketers test how AI-generated customer profiles will respond to ad campaigns. Software engineers deploy AI coding assistants. Students wield AI to draft essays in record time – and teachers use similar tools to provide feedback.

The economic and cultural implications are profound.

What happens to the writer who no longer struggles with the perfect phrase, or the designer who no longer sketches dozens of variations before finding the right one? Will they become increasingly dependent on these cognitive prosthetics, similar to how using GPS diminishes navigation skills? And how can human creativity and critical thinking be preserved in an age of algorithmic abundance?

Echoes of the Industrial Revolution

We’ve been here before.

The Industrial Revolution replaced artisanal craftsmanship with mechanized production, enabling goods to be replicated and manufactured on a mass scale.

Shoes, cars and crops could be produced efficiently and uniformly. But products also became more bland, predictable and stripped of individuality. Craftsmanship retreated to the margins, as a luxury or a form of resistance.

Two female workers wearing blue surrounded by piles of stuffed animals.
Mass production strips goods of their individuality.
Costfoto/NurPhoto via Getty Images

Today, there’s a similar risk with the automation of thought. Generative AI tempts users to conflate speed with quality, productivity with originality.

The danger is not that AI will fail us, but that people will accept the mediocrity of its outputs as the norm. When everything is fast, frictionless and “good enough,” there’s the risk of losing the depth, nuance and intellectual richness that define exceptional human work.

The rise of algorithmic mediocrity

Despite the name, AI doesn’t actually think.

Tools such as ChatGPT, Claude and Gemini process massive volumes of human-created content, often scraped from the internet without context or permission. Their outputs are statistical predictions of what word or pixel is likely to follow based on patterns in data they’ve processed.

They are, in essence, mirrors that reflect collective human creative output back to users – rearranged and recombined, but fundamentally derivative.
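
To make that mechanism concrete, here is a deliberately tiny sketch in Python – a bigram word model, not how production chatbots are actually built – that “writes” by sampling whichever word has most often followed the current word in its training text. The two-sentence corpus and the output length are invented purely for illustration.

    import random
    from collections import Counter, defaultdict

    # A tiny stand-in for the web-scale text that real models are trained on.
    corpus = ("the quick brown fox jumps over the lazy dog and "
              "the quick red fox sleeps under the old oak tree").split()

    # Count how often each word follows each other word (a bigram model).
    following = defaultdict(Counter)
    for prev_word, next_word in zip(corpus, corpus[1:]):
        following[prev_word][next_word] += 1

    def predict_next(word):
        # Sample the next word in proportion to how often it followed `word`.
        counts = following[word]
        if not counts:  # dead end: this word never had a successor
            return random.choice(corpus)
        words, weights = zip(*counts.items())
        return random.choices(words, weights=weights)[0]

    # "Generate" text: every word is a statistical echo of the training data.
    word = "the"
    output = [word]
    for _ in range(8):
        word = predict_next(word)
        output.append(word)
    print(" ".join(output))

The sketch can only ever recombine words it has already seen, in arrangements it has already seen. Scale the counts up to billions of parameters and subword tokens and the output becomes fluent, but it remains, fundamentally, a remix of what came before.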

And this, in many ways, is precisely why they work so well.

Consider the countless emails people write, the slide decks strategy consultants prepare and the advertisements that suffuse social media feeds. Much of this content follows predictable patterns and established formulas. It has all been seen before, in one form or another.

Generative AI excels at producing competent-sounding content – lists, summaries, press releases, advertisements – that bears the signs of human creation without that spark of ingenuity. It thrives in contexts where the demand for originality is low and when “good enough” is, well, good enough.

When AI sparks – and stifles – creativity

Yet, even in a world of formulaic content, AI can be surprisingly helpful.

In one set of experiments, researchers tasked people with completing various creative challenges. They found that those who used generative AI produced ideas that were, on average, more creative, outperforming participants who used web searches or no aids at all. In other words, AI can, in fact, elevate baseline creative performance.

However, further analysis revealed a critical trade-off: Reliance on AI systems for brainstorming significantly reduced the diversity of ideas produced, which is a crucial element for creative breakthroughs. The systems tend to converge toward a predictable middle rather than exploring unconventional possibilities at the edges.

I wasn’t surprised by these findings. My students and I have found that the outputs of generative AI systems are most closely aligned with the values and worldviews of wealthy, English-speaking nations. This inherent bias quite naturally constrains the diversity of ideas these systems can generate.

More troubling still, brief interactions with AI systems can subtly reshape how people approach problems and imagine solutions.

One set of experiments tasked participants with making medical diagnoses with the help of AI. However, the researchers designed the experiment so that AI would give some participants flawed suggestions. Even after those participants stopped using the AI tool, they tended to unconsciously adopt those biases and make errors in their own decisions.

What begins as a convenient shortcut risks becoming a self-reinforcing loop of diminishing originality – not because these tools produce objectively poor content, but because they quietly narrow the bandwidth of human creativity itself.

Navigating the cognitive revolution

True creativity, innovation and research are not just probabilistic recombinations of past data. They require conceptual leaps, cross-disciplinary thinking and real-world experience. These are qualities AI cannot replicate. It cannot invent the future. It can only remix the past.

What AI generates may satisfy a short-term need: a quick summary, a plausible design, a passable script. But it rarely transforms, and genuine originality risks being drowned in a sea of algorithmic sameness.

The challenge, then, isn’t just technological. It’s cultural.

How can the irreplaceable value of human creativity be preserved amid this flood of synthetic content?

The historical parallel with industrialization offers both caution and hope. Mechanization displaced many workers but also gave rise to new forms of labor, education and prosperity. Similarly, while AI systems may automate some cognitive tasks, they may also open up new intellectual frontiers. In doing so, they may take on creative responsibilities, such as inventing novel processes or developing criteria to evaluate their own outputs.

This transformation is only at its early stages. Each new generation of AI models will produce outputs that once seemed like the purview of science fiction. The responsibility lies with professionals, educators and policymakers to shape this cognitive revolution with intention.

Will it lead to intellectual flourishing or dependency? To a renaissance of human creativity or its gradual obsolescence?

The answer, for now, is up in the air.

The Conversation

Wolfgang Messner receives funding from the Center for International Business Education and Research (CIBER) at the University of South Carolina.

ref. Is AI sparking a cognitive revolution that will lead to mediocrity and conformity? – https://theconversation.com/is-ai-sparking-a-cognitive-revolution-that-will-lead-to-mediocrity-and-conformity-256940

The Michelin Guide is Eurocentric and elitist − yet it will soon be an arbiter of culinary excellence in Philly

Source: The Conversation – USA (2) – By Tulasi Srinivas, Professor of Anthropology, Religion and Transnational Studies, Emerson College

Could a Philly cheesesteak joint actually get a Michelin star?

The Michelin Red Guide is coming to Philadelphia, and inspectors are already scouting local restaurants to award the famed Michelin star.

Michelin says the selected restaurants will be announced in a Northeast cities edition celebration later this year. Boston will also be included for the first time.

As an anthropologist of ethics and religion with expertise in food studies, I read the announcement with some curiosity and a lot of questions. I had seen this small red guide revered by chefs and gourmands alike around the globe.

How did the Michelin guide begin reviewing restaurants? And what makes it an authority on cuisine worldwide?

Hardback copies of a red book that says 'Michelin France 2025'
The Michelin Guide has retained its iconic red cover for more than a century.
Matthieu Delaty/Hans Lucas/AFP via Getty Images

From tires to terrines

It all began in 1889 in the small town of Clermont-Ferrand in the Auvergne-Rhône-Alpes region of France. Brothers André and Édouard Michelin founded their world-famous Michelin tire company, fueled by a grand vision for France’s automobile industry – though there were fewer than 3,000 cars at the time in the whole of France.

To encourage travel, they distributed a red-bound guide filled with maps and helpful tips on routes and destinations. Initially free to automobile owners, it soon started to sell for seven francs – roughly US$1.50 at the time. The guide later added lists of restaurants and eateries along with other points of travel interest.

Being French, readers had questions about the quality of the food at these establishments, so the brothers started a rating system of a single star to denote high-quality establishments worthy of their elite customers and their fancy automobiles.

But that wasn’t enough for discerning diners. So the guide created a discriminating hierarchy of one-, two- and three-star establishments: one star for “high-quality cooking worth a stop,” two stars for “excellent cooking worth a detour,” and three stars for “exceptional cuisine worth a special journey.”

An army of anonymous inspectors

How do restaurants get a Michelin star – or three? According to the guide, restaurants have to be consistently extraordinary to garner three stars. To ensure a restaurant’s excellence is consistent, Michelin has to surveil them repeatedly, which it does using a stable of mysterious diners called “inspectors.”

You might be thinking of Inspector Clouseau, the klutzy, misguided detective from the Pink Panther movies played by the inimitable Peter Sellers.

Mais non!

Michelin inspectors are dreaded anonymous restaurant reviewers. They dine at restaurants unannounced and undercover, and inevitably write scathing critiques of everything – ingredients, food, chefs and dishes – in their reports.

In the 2015 Bradley Cooper movie “Burnt,” the restaurant is obsessed with the mystery Michelin inspectors, who dine incognito. Restaurateur Tony, played by Daniel Brühl, instructs the dining room staff on how to spot them:

“No one knows who they are. No one. They come. They eat. They go. But they have habits. One orders the tasting menu, the other orders a la carte. Always. They order a half a bottle of wine. They ask for tap water. They are polite. But attention! They may place a fork on the floor to see if you notice.”

Woman in crisp white chef uniform stands in sleek restaurant
Japan’s Chizuko Kimura, a Michelin-starred chef, at her restaurant Sushi Shunei in Paris.
Julien De Rosa/AFP via Getty Images

Holy grail for chefs

The inherent elitism of the iconic Michelin Guide was central, though left unspoken.

To counteract the guide’s classist bias, Michelin introduced the Bib Gourmand award in 1997 to identify affordable “best value for money” restaurants. Bib Gourmand restaurants are easier on the wallet than Michelin-starred establishments and offer casual dining. The award’s logo is the Bibendum, also known as the inflatable Michelin Man, licking his lips.

In 2020, the guide introduced yet another award: the green star, for eateries committed to sustainable, farm-to-table practices.

Today, the Michelin Guide has become a vaunted yet controversial subjective yardstick by which restaurants are measured.

Getting a Michelin star has become a holy grail for many chefs, a Nobel Prize of cuisine. Chefs speak of earning a star as an honor they have envisaged for a lifetime, and starred chefs often become celebrities in their own right.

The 2022 dark comedy “The Menu” stars Ralph Fiennes as one such celebrity Michelin chef, whose exclusive island restaurant has a lavish modern menu that culminates in a mystery performance. His greatest fear is losing his Michelin star – a cause for lament, mental health crises and, sometimes, murder.

Three stars for Eurocentrism

The Michelin Guide evaluates restaurants on the quality of their ingredients, the mastery of cooking techniques, the harmony of flavors, the chef’s personality expressed in the cuisine, and the consistency of the cooking over the course of numerous visits.

Yet somehow, all these factors, seemingly easily translatable across the world’s cuisines, have led to an intensely parochial guide.

Only in 2007, 118 years after its inception, did the guide recognize Japanese cuisine as worthy of its gaze. Soon after, stars rained down on Tokyo’s many stellar eateries.

On a contemporary map charting where the Michelin Guide is found, huge swathes of the world are missing. There is no Michelin Guide in India, home to one of the world’s greatest and oldest cuisines, or in Africa, with its multiplicity of cultural flavors.

Perhaps a side of racism with the boeuf bourguignon?

Despite a movement to decolonize food by rethinking colonial legacies of power and extractive ways of eating, Michelin has derived its stellar reputation primarily from reviewing metropolitan European cuisine. It has celebrated obscure European gastronomic processes such as “fire cooking” in Stockholm’s famous Ekstedt restaurant, and new chemical processes such as “molecular gastronomy” in Spain’s famed el Bulli eatery.

One could say Michelin is a somewhat conservative enterprise. Rather than leading the way, it has followed consumers’ expanding palates.

In 2024, in a rare break with tradition, Michelin awarded one star to a small family-run taqueria, El Califa De León, in Mexico City. The taqueria is known for its signature tacos de gaonera – thinly sliced rib-eye steak cooked in lard on fresh corn masa tortillas with a squeeze of lime.

Some discerning diners worried that Michelin had gone downhill.

Quelle horreur!

The decision to give a star to a Mexican restaurant that is essentially just a steel counter, fridge and griddle was so unlike Michelin that it resorted to describing El Califa’s tacos as “elemental and pure” – language previously reserved for elite cuisine.

A man in blue uniform and black apron places thin slices of meat on a griddle
The Michelin-starred taqueria El Califa de León in Mexico City is known for its tacos de gaonera.
Apolline Guillerot-Malick/SOPA Images/LightRocket via Getty Images

A big bill

Soon-to-be-reviewed Philadelphia boasts a portfolio of epicurean excellence, with contributions from a global diaspora of culinary creators. Restaurants such as Zahav, Kalaya and Mawn – which serve Israeli, Thai and Cambodian food, respectively – are surely eyeing their prospects for a starry future.

That Boston’s and Philadelphia’s tourism boards likely paid for the pleasure of the guide visiting their cities has been a topic of discussion among food cognoscenti. Reportedly, the Atlanta Tourism Board paid nearly $1 million for Michelin to visit its city. Is Michelin merely a well-regarded shakedown? A few stars in exchange for a million dollars?

After indirectly footing that big bill, what can local diners look forward to in the wake of Michelin awards scattering across the Northeast?

Since Michelin restaurants are notoriously difficult to get into – the award invariably prompts a surge in customers and reservations – the enhanced reputation of the restaurants might translate to price increases for diners.

Starred restaurants will also likely feel tremendous pressure to maintain high food quality and service, and this too can add to cost – particularly in an era of tariffs on foreign ingredients and alcohol.

Diners won’t escape unscathed. Industry officials suggest that Michelin stars add an average of $100 per diner per star. But, on the upside, diners may be able to gawk at local and international celebrities at dinner, since hanging out at Michelin-starred establishments has long been a celebrity preoccupation.

So if you have a favorite hot restaurant in Philadelphia, better make that reservation immediately, before a Michelin star makes it impossible to get in.


The Conversation

Tulasi Srinivas does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

ref. The Michelin Guide is Eurocentric and elitist − yet it will soon be an arbiter of culinary excellence in Philly – https://theconversation.com/the-michelin-guide-is-eurocentric-and-elitist-yet-it-will-soon-be-an-arbiter-of-culinary-excellence-in-philly-256667