Student cheating dominates talk of generative AI in higher ed, but universities and tech companies face ethical issues too

Source: The Conversation – USA (3) – By Jeffrey C. Dixon, Professor of Sociology, College of the Holy Cross

A wider look at ethical questions around generative AI brings in much more than academic integrity. Huaxia Zhou via Getty Images

Debates about generative artificial intelligence on college campuses have largely centered on student cheating. But focusing on cheating overlooks a larger set of ethical concerns that higher education institutions face, from the use of copyrighted material in large language models to student privacy.

As a sociologist who teaches about AI and studies the impact of this technology on work, I am well acquainted with research on the rise of AI and its social consequences. And when one looks at ethical questions from multiple perspectives – those of students, higher education institutions and technology companies – it is clear that the burden of responsible AI use should not fall entirely on students’ shoulders.

I argue that responsibility, more generally, begins with the companies behind this technology and needs to be shouldered by higher education institutions themselves.

To ban or not to ban generative AI

Let’s start where some colleges and universities did: banning generative AI products, such as ChatGPT, partly over student academic integrity concerns.

While there is evidence that students inappropriately use this technology, banning generative AI ignores research indicating it can improve college students’ academic achievement. Studies have also shown generative AI may have other educational benefits, such as for students with disabilities. Furthermore, higher education institutions have a responsibility to make students ready for AI-infused workplaces.

Given generative AI’s benefits and its widespread student use, many colleges and universities today have integrated generative AI into their curricula. Some higher education institutions have even provided students free access to these tools through their school accounts. Yet I believe these strategies involve additional ethical considerations and risks.

As with previous waves of technology, the adoption of generative AI can exacerbate inequalities in education, given that not all students will have access to the same technology. If schools encourage generative AI use without providing students with free access, there will be a divide between students who can pay for a subscription and those who use free tools.

On top of this, students using free tools have few privacy guarantees in the U.S. When they use these tools – even with a prompt as simple as “Hey ChatGPT, can you help me brainstorm a paper idea?” – students are producing potentially valuable data that companies can use to improve their models. By contrast, paid versions can offer more data protections and clearer privacy guidelines.

Higher education institutions can address equity concerns and help protect student data by seeking licenses with vendors that address student privacy. These licenses can provide students with free access to generative AI tools and specify that student data is not to be used to train or improve models. However, they are not panaceas.

Who’s responsible now?

In “Teaching with AI,” José Antonio Bowen and C. Edward Watson argue that higher education institutions need to rethink their approach to academic integrity. I agree with their assessment, but for ethical reasons not covered in their book: Integrating generative AI into the curriculum through vendor agreements involves higher education institutions recognizing tech companies’ transgressions and carefully considering the implications of owning student data.

To begin, I find the practice of penalizing students for “stealing” words from large language models to write papers ethically difficult to reconcile with tech companies’ automated “scraping” of websites, such as Wikipedia and Reddit, without citation. Big tech companies have used copyrighted material – some of it allegedly taken from piracy websites – to train the large language models that power chatbots. Although the two actions – asking a chatbot to write an essay versus training it on copyrighted material – are not exactly the same, they both have a component of ethical responsibility. For technology companies, ethical issues such as this are typically raised only in lawsuits.

For institutions of higher education, I think these issues should be raised prior to signing AI vendor licenses. As a Chronicle of Higher Education article suggests, colleges and universities should vet AI model outputs as they would student papers. If they have not done so prior to signing vendor agreements, I see little basis for them to pursue traditional “academic integrity” violations for alleged student plagiarism. Instead, higher education institutions should consider changes to their academic integrity policies.

Then there is the issue of how student data is handled under AI vendor agreements. One likely source of student concern is whether their school, as a commercial customer and data owner, logs interactions with identifiers and can pursue academic integrity charges and other matters on this basis.

The solution to this is simple: Higher education institutions can prominently display the terms and conditions of such agreements to members of their community. If colleges and universities are unwilling to do so, or if their leaders don’t understand the terms themselves, then maybe institutions need to rethink their AI strategies.

The above data privacy issues take on new meaning given the ways in which generative AI is currently being used, sometimes as “companions” with which people share highly personal information. OpenAI estimates that about 70% of ChatGPT consumer usage is for nonwork purposes. OpenAI’s CEO, Sam Altman, recognizes that people are turning to ChatGPT for “deeply personal decisions that include life advice, coaching and support.”

Although the long-term effects of using chatbots as companions or confidants are unknown, the recent case of a teenager who died by suicide after interacting with ChatGPT is a tragic reminder of generative AI’s risks and the importance of ensuring people’s personal security along with their privacy.

Formulating explicit statements that generative AI should be used only for academic purposes could help mitigate the risks related to students forming potentially damaging emotional attachments with chatbots. So, too, could reminders about campus mental health and other resources. Training students and faculty on all these matters and more can aid in promoting personally responsible AI use.

But colleges and universities cannot skirt their own responsibilities. At some point, higher education institutions may see that such responsibility is too heavy a cross to bear and that their risk-mitigation strategies are essentially Band-Aids for a systemic problem.

The Conversation

Jeffrey C. Dixon is a faculty representative on the College of the Holy Cross Institutional Review of Artificial Intelligence Task Force.

ref. Student cheating dominates talk of generative AI in higher ed, but universities and tech companies face ethical issues too – https://theconversation.com/student-cheating-dominates-talk-of-generative-ai-in-higher-ed-but-universities-and-tech-companies-face-ethical-issues-too-268167

Why the chemtrail conspiracy theory lingers and grows – and why Tucker Carlson is talking about it

Source: The Conversation – USA – By Calum Lister Matheson, Associate Professor of Communication, University of Pittsburgh

Contrails have a simple explanation, but not everyone wants to believe it. AP Photo/Carolyn Kaster

Everyone has looked up at the clouds and seen faces, animals, objects. Human brains are hardwired for this kind of whimsy. But some people – perhaps a surprising number – look to the sky and see government plots and wicked deeds written there. Conspiracy theorists say that contrails – long streaks of condensation left by aircraft – are actually chemtrails, clouds of chemical or biological agents dumped on the unsuspecting public for nefarious purposes. Different motives are ascribed, from weather control to mass poisoning.

The chemtrails theory has circulated since 1996, when conspiracy theorists misinterpreted a U.S. Air Force research paper about weather modification, a valid topic of research. Social media and conservative news outlets have since magnified the conspiracy theory. One recent study notes that X, formerly Twitter, is a particularly active node of this “broad online community of conspiracy.”

I’m a communications researcher who studies conspiracy theories. The thoroughly debunked chemtrails theory provides a textbook example of how conspiracy theories work.

Boosted into the stratosphere

Conservative pundit Tucker Carlson, whose podcast averages over a million viewers per episode, recently interviewed Dane Wigington, a longtime opponent of what he calls “geoengineering.” While the interview has been extensively discredited and mocked in other media coverage, it is only one example of the spike in chemtrail belief.

Although chemtrail belief spans the political spectrum, it is particularly evident in Republican circles. U.S. Secretary of Health and Human Services Robert F. Kennedy Jr. has professed his support for the theory. U.S. Rep. Marjorie Taylor Greene of Georgia has written legislation to ban chemical weather control, and many state legislatures have done the same.

Online influencers with millions of followers have promoted what was once a fringe theory to a large audience. It finds a ready audience among climate change deniers and anti-deep state agitators who fear government mind control.

Heads I win, tails you lose

Although research on weather modification is real, the overwhelming majority of qualified experts deny that the chemtrail theory has any solid basis in fact. For example, geoengineering researcher David Keith’s lab posted a blunt statement on its website. A wealth of other resources exist online, and many of their conclusions are posted at contrailscience.com.

But even without a deep dive into the science, the chemtrail theory has glaring logical problems. Two of them are falsifiability and parsimony.

The philosopher Karl Popper argued that unless a conjecture can, at least in principle, be proved false, it lies outside the realm of science.

According to psychologist Rob Brotherton, conspiracy theories have a classic “heads I win, tails you lose” structure. Conspiracy theorists say that chemtrails are part of a nefarious government plot, but its existence has been covered up by the same villains. If there were any evidence that weather modification was actually happening, that would support the theory, but any evidence denying chemtrails also supports the theory – specifically, the part that alleges a cover-up.

People who subscribe to the conspiracy theory consider anyone who confirms it to be a brave whistleblower and anyone who denies it to be foolish, evil or paid off. Therefore, no amount of information could even hypothetically disprove it for true believers. This makes the theory nonfalsifiable, meaning it’s impossible to disprove. Good theories, by contrast, must be constructed in such a way that, if they were false, evidence could show it.

Nonfalsifiable theories are inherently suspect because they exist in a closed loop of self-confirmation. In practice, theories are not usually declared “false” based on a single test but are taken more or less seriously based on the preponderance of good evidence and scientific consensus. This approach is important because conspiracy theories and disinformation often claim to falsify mainstream theories, or at least exploit a poor understanding of what certainty means in scientific methods.

Like most conspiracy theories, the chemtrail story tends not to meet the criterion of parsimony, also known as Occam’s razor, which suggests that the more suppositions a theory requires to be true, the less likely it actually is. While not perfect, this concept can be an important way to think about probability when it comes to conspiracy theories. Is it more likely that the government is covering up a massive weather program, a mind-control program or both – involving thousands or millions of silent, complicit agents, from the local weather reporter to the Joint Chiefs of Staff – or that we’re seeing ice crystals from plane engines?

Of course, calling something a “conspiracy theory” does not automatically invalidate it. After all, real conspiracies do exist. But it’s important to remember scientist and science communicator Carl Sagan’s adage that “extraordinary claims require extraordinary evidence.” In the case of chemtrails, the evidence just isn’t there.

Scientists explain how humans are susceptible to believing conspiracy theories.

Psychology of conspiracy theory belief

If the evidence against it is so powerful and the logic is so weak, why do people believe the chemtrail conspiracy theory? As I have argued in my new book, “Post-Weird: Fragmentation, Community, and the Decline of the Mainstream,” conspiracy theorists create bonds with each other through shared practices of interpreting the world, seeing every detail and scrap of evidence as unshakable signs of a larger, hidden meaning.

Uncertainty, ambiguity and chaos can be overwhelming. Conspiracy theories are symptoms, ad hoc attempts to deal with the anxiety caused by feelings of powerlessness in a chaotic and complicated world where awful things like tornadoes, hurricanes and wildfires can happen seemingly at random for reasons that even well-informed people struggle to understand. When people feel overwhelmed and helpless, they create fantasies that give an illusion of mastery and control.

Although there are liberal chemtrail believers, aversion to uncertainty might explain why the theory has become so popular with Carlson’s audience: Researchers have long argued that authoritarian, right-wing beliefs have a similar underlying structure.

On some level, chemtrail theorists would rather be targets of an evil conspiracy than face the limits of their knowledge and power, even though conspiracy beliefs are not completely satisfying. Sigmund Freud described a fort-da (“gone-here”) game played by his grandson where he threw away a toy and dragged it back on a string, something Freud interpreted as a simulation of control when the child had none. Conspiracy theories may serve a similar purpose, allowing their believers to feel that the world isn’t really random and that they, the ones who see through the charade, really have some control over it. The grander the conspiracy, the more brilliant and heroic the conspiracy theorists must be.

Conspiracies are dramatic and exciting, with clear lines of good and evil, whereas real life is boring and sometimes scary. The chemtrail theory is ultimately prideful. It’s a way for theorists to feel powerful and smart when they face things beyond their comprehension and control. Conspiracy theories come and go, but responding to them in the long term means finding better ways to embrace uncertainty, ambiguity and our own limits alongside a new embrace of the tools we do have: logic, evidence and even humility.

The Conversation

Calum Lister Matheson does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

ref. Why the chemtrail conspiracy theory lingers and grows – and why Tucker Carlson is talking about it – https://theconversation.com/why-the-chemtrail-conspiracy-theory-lingers-and-grows-and-why-tucker-carlson-is-talking-about-it-269770

Blue Origin’s New Glenn rocket landed its booster on a barge at sea – an achievement that will broaden the commercial spaceflight market

Source: The Conversation – USA – By Wendy Whitman Cobb, Professor of Strategy and Security Studies, Air University

Blue Origin’s New Glenn rocket lifted off for its second orbital flight on Nov. 13, 2025. AP Photo/John Raoux

Blue Origin’s New Glenn rocket successfully made its way to orbit for the second time on Nov. 13, 2025. Although the second launch is never as flashy as the first, this mission is still significant in several ways.

For one, it launched a pair of NASA spacecraft named ESCAPADE, which are headed to Mars orbit to study that planet’s magnetic environment and atmosphere. The twin spacecraft will first travel to a Lagrange point, a place where the gravitational pulls of two large bodies, such as the Sun and Earth, balance. The ESCAPADE spacecraft will remain there until Mars is in better alignment to travel to.

And two, importantly for Blue Origin, New Glenn’s first stage booster successfully returned to Earth and landed on a barge at sea. This landing allows the booster to be reused, substantially reducing the cost to get to space.

Blue Origin launched its New Glenn rocket and landed the booster on a barge at sea on Nov. 13, 2025.

As a space policy expert, I see this launch as a positive development for the commercial space industry. Even though SpaceX has pioneered this form of launch and reuse, New Glenn’s capabilities are just as important.

New Glenn in context

Although Blue Origin would seem to be following in SpaceX’s footsteps with New Glenn, there are significant differences between the two companies and their rockets.

For most launches today, the rocket consists of several parts. The first stage helps propel the rocket and its spacecraft toward space and then drops away when its fuel is used up. A second stage then takes over, propelling the payload all the way to orbit.

While both New Glenn and Falcon Heavy, SpaceX’s most powerful rocket currently available, are partially reusable, New Glenn is taller, and its wider payload fairing can carry larger payloads to orbit.

Blue Origin plans to use New Glenn for a variety of missions for customers such as NASA, Amazon and others. These will include missions to Earth’s orbit and eventually to the Moon to support Blue Origin’s own lunar and space exploration goals, as well as NASA’s.

NASA’s Artemis program, which endeavors to return humans to the Moon, is where New Glenn may become important. In the past several months, several space policy leaders, as well as NASA officials, have expressed concern that Artemis is progressing too slowly. If Artemis stagnates, China may have the opportunity to leap ahead and beat NASA and its partners to the lunar south pole.

These concerns stem from problems with two rockets that could potentially bring Americans back to the Moon: NASA’s Space Launch System and SpaceX’s Starship. The Space Launch System, which will launch astronauts on the Orion crew vehicle, has been criticized as too complex and costly. SpaceX’s Starship is important because NASA plans to use it to land humans on the Moon during the Artemis III mission. But its development has been much slower than anticipated.

In response, Blue Origin has detailed some of its lunar exploration plans. They will begin with the launch of its uncrewed lunar lander, Blue Moon, early next year. The company is also developing a crewed version of Blue Moon that it will use on the Artemis V mission, the planned third lunar landing of humans.

Blue Origin officials have said they are in discussions with NASA over how they might help accelerate the Artemis program.

New Glenn’s significance

New Glenn’s booster landing makes this most recent launch quite significant for the company. While it took SpaceX several tries to land its first booster, Blue Origin has achieved this feat on only the second try. Landing the boosters – and, more importantly, reusing them – has been key to reducing the cost to get to space for SpaceX, as well as others such as Rocket Lab.

That two commercial space companies now have orbital rockets that can be partially reused shows that SpaceX’s success was no fluke.

With this accomplishment, Blue Origin has been able to build on its previous experience and success with its suborbital rocket, New Shepard. Launching from Blue Origin facilities in Texas since 2015, New Shepard has taken people and cargo to the edge of space, before returning to its launch site under its own power.

A short, wide rocket lifts off from a launchpad.
Blue Origin’s suborbital rocket, New Shepard.
Joe Raedle/Getty Images

New Glenn is also significant for the larger commercial space industry and U.S. space capabilities. It represents real competition for SpaceX, especially its Starship rocket. It also provides more launch options for NASA, the U.S. government and other commercial customers, reducing reliance on SpaceX or any other launch company.

In the meantime, Blue Origin is looking to build on the success of New Glenn’s launch and its booster landing. New Glenn will next launch Blue Origin’s Blue Moon uncrewed lander in early 2026.

This second successful New Glenn launch will also contribute to the rocket’s certification for national security space launches. This accomplishment will allow the company to compete for contracts to launch sensitive reconnaissance and defense satellites for the U.S. government.

Blue Origin will also need to increase its number of launches and reduce the time between them to compete with SpaceX. SpaceX is on pace for between 165 and 170 launches in 2025 alone. While Blue Origin may not be able to achieve that remarkable cadence, to truly build on New Glenn’s success it will need to show it can scale up its launch operations.

The Conversation

Wendy Whitman Cobb is affiliated with the US School of Advanced Air and Space Studies. Her views are her own and do not necessarily reflect the views of the Department of Defense or any of its components. Mention of trade names, commercial products, or organizations does not imply endorsement by the U.S. Government, and the appearance of external hyperlinks does not constitute DoD endorsement of the linked websites, or the information, products or services therein.

ref. Blue Origin’s New Glenn rocket landed its booster on a barge at sea – an achievement that will broaden the commercial spaceflight market – https://theconversation.com/blue-origins-new-glenn-rocket-landed-its-booster-on-a-barge-at-sea-an-achievement-that-will-broaden-the-commercial-spaceflight-market-269786

Why two tiny mountain peaks became one of the internet’s most famous images

Source: The Conversation – USA (2) – By Christopher Schaberg, Director of Public Scholarship, Washington University in St. Louis

The icon has various iterations, but all convey the same meaning: an image should be here. Christopher Schaberg, CC BY-SA

It’s happened to you countless times: You’re waiting for a website to load, only to see a box with a little mountain range where an image should be. It’s the placeholder icon for a “missing image.”

But have you ever wondered why this scene came to be universally adopted?

As a scholar of environmental humanities, I pay attention to how symbols of wilderness appear in everyday life.

The little mountain icon – sometimes with a sun or cloud in the background, other times crossed out or broken – has become the standard symbol, across digital platforms, to signal something missing or something to come. It appears in all sorts of contexts, and the more you look for this icon, the more you’ll see it.

You click on it in Microsoft Word or PowerPoint when you want to add a picture. You can purchase an ironic poster of the icon to put on your wall. The other morning, I even noticed a version of it in my Subaru’s infotainment display as a stand-in for a radio station logo.

So why this particular image of the mountain peaks? And where did it come from?

Arriving at the same solution

The placeholder icon can be thought of as a form of semiotic convergence, or when a symbol ends up meaning the same thing in a variety of contexts. For example, the magnifying glass is widely understood as “search,” while the image of a leaf means “eco-friendly.”

It’s also related to something called “convergent design evolution,” or when organisms or cultures – even if they have little or no contact – settle on a similar shape or solution for something.

In evolutionary biology, you can see convergent design evolution in bats, birds and insects, which all have wings but developed them in their own ways. Stilt houses emerged in various cultures across the globe as a way to build durable homes along shorelines and riverbanks. More recently, engineers in different parts of the world designed similar airplane fuselages independent of one another.

For whatever reason, the little mountain just worked across platforms to evoke open-ended meanings: Early web developers needed a simple, shorthand way to indicate that something else should or could be there.

Depending on context, a little mountain might invite a user to insert a picture in a document; it might mean that an image is trying to load, or is being uploaded; or it could mean an image is missing or broken.

Down the rabbit hole on a mountain

But of the millions of possibilities, why a mountain?

In 1994, visual designer Marsh Chamberlain created a graphic featuring three colorful shapes as a stand-in for a missing image or broken link for the web browser Netscape Navigator. The shapes appeared on a piece of paper with a ripped corner. Though the paper with the rip will sometimes now appear with the mountain, it isn’t clear when the square, circle and triangle became a mountain.

A generic camera dial featuring various modes, with the 'landscape mode' – represented by two little mountain peaks – highlighted.
Two little mountain peaks are used to signal ‘landscape mode’ on many SLR cameras.
Althepal/Wikimedia Commons, CC BY

Users on Stack Exchange, a forum for developers, suggest that the mountain peak icon may trace back to the “landscape mode” icon on the dials of Japanese SLR cameras. It’s the feature that sets the aperture to maximize the depth of field so that both the foreground and background are in focus.

The landscape scene mode – visible on many digital cameras in the 1990s – was generically represented by two mountain peaks, with the idea that the camera user would intuitively know to use this setting outdoors.

Another insight emerged from the Stack Exchange discussion: The icon bears a resemblance to the Windows XP wallpaper called “Bliss.” If you had a PC in the years after 2001, you probably recall the rolling green hills with blue sky and wispy clouds.

The stock photo was taken by National Geographic photographer Charles O’Rear. It was then purchased by Bill Gates’ digital licensing company Corbis in 1998. The empty hillside in this picture became iconic through its adoption by Windows XP as its default desktop wallpaper image.

A colorful stock photo of green rolling hills, a blue sky and clouds.
If you used a PC at the turn of the 21st century, you probably encountered ‘Bliss.’
Wikimedia Commons

Mountain riddles

“Bliss” became widely understood as the most generic of generic stock photos, in the same way the placeholder icon became universally understood to mean “missing image.” And I don’t think it’s a coincidence that they both feature mountains or hills and a sky.

Mountains and skies are mysterious and full of possibilities, even if they remain beyond grasp.

Consider Japanese artist Hokusai’s “36 Views of Mount Fuji,” a series of woodblock prints from the 1830s – the most famous of which is probably “The Great Wave off Kanagawa,” where a tiny Mount Fuji can be seen in the background. Each print features the iconic mountain from a different perspective and is full of little details; all possess an ambiance of mystery.

A woodblock print of a large rowboat manned by people on rolling waves with a large mountain in the background.
‘Tago Bay near Ejiri on the Tokaido,’ from Hokusai’s series ‘36 Views of Mount Fuji.’
Heritage Art/Heritage Images via Getty Images

I wouldn’t be surprised if the landscape icon on those Japanese camera dials emerged as a minimalist reference to Mount Fuji, Japan’s highest mountain. From some perspectives, Mount Fuji rises behind a smaller incline. And the Japanese photography company Fujifilm even took its name from that mountain.

The enticing aesthetics of mountains also reminded me of the environmental writer Gary Snyder’s 1965 translation of Han Shan’s “Cold Mountain Poems.” Han Shan – his name literally means “Cold Mountain” – was a Chinese Buddhist poet who lived in the late eighth century. “Shan” translates as “mountain” and is represented by the Chinese character 山, which also resembles a mountain.

Han Shan’s poems, which are little riddles themselves, revel in the bewildering aspects of mountains:

Cold Mountain is a house
Without beams or walls.
The six doors left and right are open
The hall is a blue sky.
The rooms are all vacant and vague.
The east wall beats on the west wall
At the center nothing.

The mystery is the point

I think mountains serve as a universal representation of something unseen and longed for – whether it’s in a poem or on a sluggish internet browser – because people can see a mountain and wonder what might be there.

The placeholder icon does what mountains have done for millennia, serving as what the environmental philosopher Margret Grebowicz describes as an object of desire. To Grebowicz, mountains exist as places to behold, explore and sometimes conquer.

The placeholder icon’s inherent ambiguity is baked into its form: Mountains are often regarded as distant, foreboding places. At the same time, the little peaks appear in all sorts of mundane computing circumstances. The icon could even be a curious sign of how humans can’t help but be “nature-positive,” even when on computers or phones.

This small icon holds so much, and yet it can also paradoxically mean that there is nothing to see at all.

Viewing it this way, an example of semiotic convergence becomes a tiny allegory for digital life writ large: a wilderness of possibilities, with so much just out of reach.

The Conversation

Christopher Schaberg does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

ref. Why two tiny mountain peaks became one of the internet’s most famous images – https://theconversation.com/why-two-tiny-mountain-peaks-became-one-of-the-internets-most-famous-images-268169

Don’t let food poisoning crash your Thanksgiving dinner

Source: The Conversation – USA (3) – By Lisa Cuchara, Professor of Biomedical Sciences, Quinnipiac University

Undercooked turkey is a leading cause of foodborne illness on Thanksgiving. AlexRaths/iStock via Getty Images Plus

Thanksgiving is a time for family, friends and feasting. However, amid the joy of gathering and indulging in delicious food, it is essential to keep food safety in mind. Foodborne illnesses can quickly put a damper on your celebrations.

As an immunologist and infectious disease specialist, I study how germs spread – and how to prevent them from doing so. In my courses, I teach my students how to reduce microbial risks, including those tied to activities such as hosting a big Thanksgiving gathering, without becoming germophobes.

Foodborne illnesses sicken 48 million Americans – 1 in 6 people – each year. Holiday meals such as Thanksgiving dinner pose special risks because these spreads often involve large quantities, long prep times, buffet-style serving and mingling guests. Such conditions create many opportunities for germs to spread.

This, in turn, invites a slew of microbial guests such as Salmonella and Clostridium perfringens. Most people recover from infections with foodborne bacteria, but each year around 3,000 Americans die from the illnesses they cause. More routinely, these bugs can cause nausea, vomiting, stomach cramps and diarrhea within hours to a couple of days after being consumed – which are no fun at a holiday celebration.

Foods most likely to cause holiday illness

Most foodborne illnesses come from raw or undercooked food and foods left in the so-called danger zone of cooking temperature – 40 degrees to 140 degrees Fahrenheit – in which bacteria multiply rapidly. Large-batch cooking without proper reheating or storage as well as cross contamination of foods during preparation can also cause disease.

A turkey on a counter being stuffed by two sets of hands.
Put that bird right in the oven as soon as you’ve stuffed it to keep bacteria from multiplying inside.
kajakiki/E+ via Getty Images

Not all dishes pose the same risk. Turkey can harbor Salmonella, Campylobacter and Clostridium perfringens. Undercooked turkey remains a leading cause of Thanksgiving-related illness. Raw turkey drippings can also easily spread bacteria onto hands, utensils and counters. And don’t forget the stuffing inside the bird. While the turkey may reach a safe internal temperature, the stuffing often does not, making it a higher-risk dish.

Leftovers stored too long, reheated improperly or cooled slowly also bring hazards. If large pieces of roasted turkey aren’t divided and cooled quickly, any Clostridium perfringens they contain might have time to produce toxins. This increases the risk of getting sick from snacking on leftovers – even reheated leftovers, since these toxins are not killed by heat.

Indeed, outbreaks involving this bacterium spike each November and December, often tied to turkey and roast beef leftovers.

Don’t wash the turkey!

Washing anything makes it cleaner and safer, right? Not necessarily.

Many people think washing their turkey will remove bacteria. However, it’s pretty much impossible to wash bacteria off a raw bird, and attempting to do so actually increases cross contamination and your risk of foodborne illness.

Since 2005, federal food safety agencies have advised against washing turkey or chicken. Despite this, a 2020 survey found that 78% of people still reported rinsing their turkey before cooking – often because older recipes or family habits encourage it.

When you rinse raw poultry, water can splash harmful bacteria around your kitchen, contaminating counter tops, utensils and nearby foods. If you do choose to wash turkey, it’s critical to immediately clean and disinfect the sink and surrounding area. A 2019 USDA study found that 60% of people who washed their poultry had bacteria in their sink afterward – and 14% had bacteria in the sink even after cleaning it.

Family enjoying Thanksgiving meal
A few food prep precautions can help keep the holiday free of gastrointestinal distress.
Drazen Zigic/iStock via Getty Images Plus

Food prep tips for a safe and healthy Thanksgiving

Wash your hands regularly. Before cooking and after touching raw meat, poultry or eggs, wash your hands thoroughly with soap and water for at least 20 seconds. Improper handwashing by people handling food is a major source of bacterial contamination with Staphylococcus aureus. This bacterium’s toxins are hard to break down, even after cooking or reheating.

Thaw turkey safely. The safest way to thaw a turkey is in the refrigerator. Allow 24 hours per 4-5 pounds. There’s also a faster method, which involves submerging the turkey in cold water and changing the water every 30 minutes – but it’s not as safe because it requires constant attention to ensure the water temperature stays below 40 F in order to prevent swift bacteria growth.

Stuff your turkey immediately before cooking it. Stuffing the turkey the night before is risky because it allows bacteria in the stuffing to multiply overnight. The toxins produced by those bacteria do not break down upon cooking, and the interior of the stuffing may not get hot enough to kill those bacteria. The USDA specifically warns against prestuffing. So cook stuffing separately, if possible, or if you prefer it inside the bird, stuff immediately before roasting, making sure it reaches 165 F.

Cook food to the right temperature. A thermometer is your best friend – use it to ensure turkey and stuffing both reach 165 F. Check casseroles and other dishes too. It’s best not to rely on an internal pop-up thermometer, since pop-up thermometers can be inaccurate, imprecise and prone to malfunction.

Avoid cross contamination. Use separate cutting boards for raw meat, vegetables and bread. Change utensils and plates after handling raw meat before using them for cooked foods.

Keep food at safe temperatures. Serve hot foods immediately, keeping them above 140 F, and keep cold dishes below 40 F to stay out of the microbial danger zone.

Be cautious with buffet-style serving. Limit food’s time on the table to two hours or less – any longer, and bacteria present can double every 20 minutes. Provide dedicated serving utensils, and avoid letting guests serve with utensils they have eaten from.

Be mindful of expiration dates. Don’t forget to check dates on food items to make sure that what you are serving isn’t expired or left from last Thanksgiving.

Educate guests on food safety. Remind guests to wash their hands before preparing or serving food, and politely discourage double-dipping or tasting directly from communal dishes.

Thanksgiving should be a time of gratitude, not gastrointestinal distress. By following these simple food safety tips, you can help ensure a safe and healthy holiday.

The Conversation

Lisa Cuchara does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

ref. Don’t let food poisoning crash your Thanksgiving dinner – https://theconversation.com/dont-let-food-poisoning-crash-your-thanksgiving-dinner-269320

Making progress is more than making policy – what Mamdani can learn from de Blasio about the politics of urban progress

Source: The Conversation – USA – By Nicole West Bassoff, Postdoctoral Research Fellow in Public Policy, University of Virginia

New York City Mayor-elect Zohran Mamdani speaks in San Juan, Puerto Rico, on Nov. 8, 2025. AP Photo/Alejandro Granadillo

After a decisive election win, Zohran Mamdani will become mayor of New York on Jan. 1, 2026. His impressive grassroots campaign made big promises targeted at working-class New Yorkers: universal child care, rent freezes and faster, free buses.

Nevertheless, questions remain about whether Mamdani’s policies are economically and practically feasible.

Critics, from President Donald Trump to establishment Democrats, condemned his platform as radical and unrealistic. And The New York Times warns that Mamdani risks becoming the latest “big-city civic leader promising bold, progressive change” to “mostly deliver disappointment.” Among past offenders, it lists former New York Mayor Bill de Blasio.

But the comparison to de Blasio reveals a paradox.

As a candidate for mayor in 2013, after the Occupy Wall Street movement against economic inequality, de Blasio campaigned on the core progressive tenet of tackling inequality through social welfare and the redistribution of wealth.

De Blasio’s promises – strikingly similar to Mamdani’s – included universal pre-K, rent freezes and a US$15 minimum wage. De Blasio delivered on all three.

So what was the “disappointment” the Times so confidently cites?

New Yorkers today remember de Blasio not for his policies but for his persistent unpopularity.

Over two terms, de Blasio alienated many New Yorkers and became a pariah among Democratic politicians. A committed progressive, he is perceived to have lost touch with the movements and communities that he hoped to lead.

Maybe the question is not whether Mamdani’s policies are realistic, but what it actually takes to win over citizens with a progressive vision. De Blasio himself cautions that it takes more than policy. He recently said that he “often mistook good policy for good politics, a classic progressive error.”

As a scholar of public policy, I think that policy achievements are neither self-evident nor self-sustaining. In my research on urban governance, I have found that it takes continuous political work to maintain local belief in urban progress and its leaders.

Based on an analysis of de Blasio’s two terms, I have identified three key respects in which his politics fell short.

Keep up the ground game

Many accounts of de Blasio’s unpopularity emphasize his personal flaws. Open and humorous in person, he was described by critics – and even some supporters – as stubborn, didactic and self-righteous. His designs on higher offices – first governor, then president – repeatedly backfired.

But for someone elected with the support of progressives, de Blasio’s bigger problem was losing touch with local progressive politics. He missed the rise of the anti-corporate left in Queens in 2018, led by Rep. Alexandria Ocasio-Cortez – so much so that his team miscalculated and agreed to place an Amazon headquarters near her district.

And while de Blasio successfully ended his predecessor Mike Bloomberg’s racially discriminatory stop-and-frisk policing – feuding with the New York Police Department in the process – he later alienated progressives, including his own staff, with his tepid response to the Black Lives Matter protests in 2020.

A man in a coat points his finger at someone.
Many New Yorkers remember former Mayor Bill de Blasio for his unpopularity.
AP Photo/Seth Wenig

The contours of progressive politics can shift under one’s feet. But as a veteran of street-level politics, Mamdani has the skills to respond to, and keep shaping, the city’s progressive movement. A dynamic “ground game” – on the model of his walk of the length of Manhattan – will likely remain as important in governing as it was in campaigning.

Protect local autonomy

In New York, hostility between the city’s mayor and the governor is a time-honored tradition. De Blasio and former Gov. Andrew Cuomo famously took hostility to the extreme.

Early in de Blasio’s first term, while seeking state funding for universal pre-K, de Blasio angered Cuomo by insisting on funding it through a tax on the city’s wealthy. Lacking necessary state approval, de Blasio eventually accepted a different state funding source. Universal pre-K became de Blasio’s cornerstone achievement, but the lasting feud with Cuomo remained a problem, even compromising the city’s plans to address the COVID-19 pandemic.

Critics also thought de Blasio could have been tougher on Big Tech. Letting a Google-backed consortium run the city’s free Wi-Fi program without meaningful oversight left the city with a privacy scandal and serious financial deficits.

In trying to attract Amazon’s headquarters, de Blasio’s administration offended New Yorkers’ sensibilities by allowing the company to bypass local development review processes. Though famously byzantine, these processes were created to ensure local control over development decisions. One could not simply bulldoze them aside.

In another case, and to his credit, de Blasio was quick to see the need to regulate Uber’s explosive growth, but it took years to overcome the company’s aggressive opposition campaign.

Though some progressives wish mayors ruled the world, U.S. cities have traditionally depended on states, the federal government and private companies for capital and resources. As I and others have shown, and de Blasio’s experiences attest, these outside players can undermine the progressive ideal of a city that seeks to redistribute economic benefit.

Mayoral powers are limited, but Mamdani can use his popularity to protect New York City’s capacity for self-government from outside interference, while cooperating strategically with the state when necessary. Gov. Kathy Hochul’s endorsement of Mamdani, driven by a shared interest in universal child care, was a start. United, they stand a better chance of defending local – city and state – autonomy against threats from President Donald Trump.

Meanwhile, there is little evidence that it pays for cities to court private businesses with expensive incentives – a common but contested city practice. Instead, following mayors elsewhere, Mamdani might pressure tech companies to end union-busting practices and thereby ensure local workers’ right to organize.

Several people gather to watch a screen.
Supporters for Democratic mayoral candidate Zohran Mamdani watch returns during election night, Nov. 4, 2025, in New York.
AP Photo/Yuki Iwamura

Lead with the social compact

Though de Blasio delivered many progressive policies, he was unable to keep alive his campaign promise to end New York’s “tale of two cities” – the stark divide between extreme wealth and poverty.

A major, self-admitted failure was on homelessness, especially among single adults. Homelessness among this group grew despite increased spending on homeless services, creating the impression that de Blasio was insufficiently concerned with the welfare of his city’s most beleaguered residents.

Such inconsistencies loomed large in the public discussion. Over time, de Blasio’s administration could no longer convince the public that its energies were being channeled toward a coherent vision of progress.

I believe that urban governance is about clarifying the rights and responsibilities that urban residents can expect to have, what I think of as the social compact between the city and its subjects. De Blasio’s growing unpopularity weakened his ability to show that his policy achievements amounted to upholding a tacit progressive promise to guarantee basic economic rights for all.

Former New York Gov. Mario Cuomo, father of losing mayoral candidate Andrew Cuomo, often said: “You campaign in poetry. You govern in prose.” While campaigning, Mamdani offered a poetic vision for a new social compact in New York.

“City government’s job,” he has said, “is to make sure each New Yorker has a dignified life, not determine which New Yorkers are worthy of that dignity.”

Many commentators insist that Mamdani must now abandon poetry and deliver the policy. But that is only partly right.

New Yorkers will disagree about the details, but the election results suggest that they want to believe in the promise of a dignified life for all. Mamdani’s ability to lead New York City – and a wider post-Trump progressive movement – will be a matter of setting an example in rearticulating and reaffirming what that promise means, to him and to his city.

The Conversation

Nicole West Bassoff does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

ref. Making progress is more than making policy – what Mamdani can learn from de Blasio about the politics of urban progress – https://theconversation.com/making-progress-is-more-than-making-policy-what-mamdani-can-learn-from-de-blasio-about-the-politics-of-urban-progress-269062

Trump’s proposed cuts to work study threaten to upend a widely supported program that helps students offset college costs

Source: The Conversation – USA (2) – By Samantha Hicks, Assistant Vice President of Financial Aid and Scholarships, Coastal Carolina University

Work-study students often still have unmet financial needs, even after their 15- to 20-hour-per-week jobs fill in some of the gaps. champpix/iStock/Getty Images Plus

Work study works, doesn’t it?

Federal work study is a government program that gives colleges and universities approximately US$1 billion in subsidies each year to help pay students who work part-time jobs on and off campus. This program supports nearly 700,000 college students per year and is often an essential way students pay their expenses and remain in school.

The program has generally garnered broad bipartisan support since its creation in 1964.

Now, the Trump administration is proposing to cut $980 million from work-study programs. The government appropriated $1.2 billion to work study from October 2023 through September 2024.

The government typically subsidizes as much as 75% of a student’s work-study earnings, though that amount can vary. Colleges and universities make up the rest.

With no federal budget passed for fiscal year 2026 – meaning Oct. 1, 2025, through September 2026 – the future of work-study funding remains uncertain.

In May 2025, Russell Vought, director of the White House’s Office of Management and Budget, called work study a “poorly targeted program” that is a “handout to woke universities.”

As college enrollment experts with over 40 years of combined financial aid and admissions experience, we have seen how work study creates opportunities for both students and universities. We have also seen the need to change some parts of work study in order to maintain the program’s value in a shifting higher education landscape.

Work study’s roots

Congress established the Federal Work-Study Program in 1964 as part of the Economic Opportunity Act, which created programs to help poor Americans by providing more education and job-training opportunities.

Work study was one way to help colleges and universities create part-time jobs for poor students to work their way through college.

Today, part-time and full-time undergraduate students who have applied for federal financial aid and have unmet financial needs can apply for work-study jobs. Students in these positions typically work as research assistants, campus tour guides, tutors and more.

Students earn at least federal minimum wage – currently $7.25 an hour – in these part-time jobs, which typically take up 10 to 15 hours per week.

In 2022, the National Center for Education Statistics reported that 40% of full-time and 74% of part-time undergraduate students were employed, counting both work-study and non-work-study jobs.

A person leans against a calculator that has a black graduation cap on top in a graphic image.
The federal government typically allocates more than $1 billion for the Federal Work-Study Program, covering about 75% of student workers’ wages.
Nuthawut Somsuk/iStock/Getty Images Plus

How work study helps students

Financial aid plays a critical role in a student’s ability to enroll in college, stay in school and graduate.

Cost and lack of financial aid are the most significant barriers to higher education enrollment, according to 2024 findings by the National Association of Student Financial Aid Administrators.

When students drop out of college because of cost, the consequences are significant both for the students and for the institutions they leave behind.

One other key factor in student retention is the sense of belonging. Research shows that students who feel connected to their campus communities are more likely to succeed in staying in school. We have found that work study also helps foster a student’s sense of belonging.

Work-study programs can also help students stay in school by offering them valuable career experience, often aligned with their academic interests.

Points of contention

Financial aid and enrollment professionals agree that work study helps students who need financial aid.

Still, some researchers have criticized the program for not meeting its intended purpose. For example, some nonpartisan research groups and think tanks have noted that the average amount a student earns from work study each year – approximately $2,300 – only covers a fraction of rising tuition costs.

Another issue is which students get to do work study. The government gives work-study money directly to institutions, not students. As universities and colleges have broad flexibility over the program, research has suggested that in some cases, lower-income students are actually less likely than higher-income students to receive a work-study job.

Other researchers criticize the lack of evidence showing work study is effective at helping students stay in school, graduate or pay their daily costs.

A final criticism is that full-time students who hold jobs often struggle to juggle work, school and other important parts of their lives.

Areas for possible change

Many students who are eligible for work study don’t know that they are eligible – or don’t know how to get campus jobs. There is no standard practice for how institutions award work study to students.

At some schools, the number of work-study jobs may be limited. If a student does not get a job, the school can reallocate the federal money to a different student.

Another option is for schools to carry over any unused money to students in the next academic year – though that doesn’t mean the same students will automatically get the money.

We think that schools can clear up this confusion about who receives federal work-study opportunities.

We also think that schools should explore how they are ensuring that eligible students receive work-study jobs.

Universities and colleges could also benefit from more proactively promoting work-study opportunities. For example, the University of Miami’s First Hires program educates students about work study, provides personalized outreach and supports career readiness through resume development and interview preparation.

Finally, colleges and universities could evaluate how work-study jobs align with students’ academic and career goals.

By creating clerical and professional roles within academic departments, schools can offer students relevant work experience that makes it easier for them to find work after graduation.

In an era of heightened scrutiny on student outcomes, reduced public funding and growing skepticism about the value of a four-year degree, we believe that universities could benefit from reimagining their financial aid strategies – especially work study.

The Conversation

Samantha Hicks is affiliated with the South Carolina Association of Financial Aid Administrators as current member and President-Elect and the Southern Association of Student Financial Aid Administrators as a current member and volunteer.

Amanda Craddock does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

ref. Trump’s proposed cuts to work study threaten to upend a widely supported program that helps students offset college costs – https://theconversation.com/trumps-proposed-cuts-to-work-study-threaten-to-upend-a-widely-supported-program-that-helps-students-offset-college-costs-266211

How a Colorado law school dug into its history to celebrate its unsung Black graduates

Source: The Conversation – USA (2) – By Rebecca Ciota, Assistant Teaching Professor, Law School, University of Colorado Boulder

The first known Black law student at the University of Colorado is pictured in a class photo from 1899. Courtesy of the University of Colorado Law School.

Class portraits line the hallways of the University of Colorado Law School, the faces of former students gazing down at the building’s current inhabitants. In a dimly lit recess in the library hangs the 1899 class portrait. Its year is incorrectly labeled as 1898, and the students are left unnamed.

In the photo, 20 men stand. Only one of them is Black. I can tell you that he was Franklin LaVeale Anderson, a successful Boulder, Colorado, businessman and landowner who entered the law school in 1896 as the university’s first known Black student.

But until recently, hardly anyone working or studying at the university knew his story.

It’s not much different for subsequent Black law students, whose names and accomplishments also remain largely unknown to the Colorado Law community. Many of the earliest Black students’ portraits are in the back of the basement. Their accomplishments were known by their families and their communities, but their former law school had done little to record these individuals who are part of its history.

Then, in 2024, inspired by an article published by Boston College Law School, Colorado Law decided to explore its own Black history.

I’m a librarian with more than nine years of professional experience in academic settings – and I never shy away from challenging research questions. I agreed to take on the project.

Searching the archives

Like Anderson, most Black alumni from before 1968 were unknown to current law school staff, who were hired long after those students graduated.

Due to student privacy regulations as well as a lack of demographic data prior to 1970, there is no easy way of identifying all students of a certain ethnic or racial background. So my research project began with those old class portraits hanging throughout the school.

I spent several hours squinting in dark corners and climbing onto study tables to find photographs and record class years. It was an imperfect science: Some of the class photos were missing, and not all students were photographed. In the end, I identified more than 210 Black students who had attended Colorado Law from 1899 to 2024.

A large brick building with blue skies and clouds in the background.
Portraits of University of Colorado Law School students and classes line the hallways of the school.
Courtesy of the University of Colorado Law School.

Among them was Franklin LaVeale Anderson. A quick internet search yielded his name.

I turned next to the university’s archives, where I pored over yearbooks and boxes of law school papers. There, I read the memo to the Board of Regents recommending that the students from the class of 1899 receive the Bachelor of Laws degree.

Anderson’s name did not appear on the list. I never did discover why he did not receive his degree.

My visit to the archives brought about invaluable partnerships: One of the archivists, David Hayes, provided me with his unpublished research on marginalized groups at the university. This provided me with important context for what was happening at the university while each individual attended.

A black-and-white portrait of a Black man with biographical text underneath.
Franklin Henry Bryant earned his law degree from the University of Colorado Law School in 1907. He was the law school’s first known Black graduate and became the third Black attorney to pass the Colorado Bar Exam. Bryant went on to establish a firm in Denver.
Courtesy of University of Colorado Law School

I was also put in touch with the interim director of the University of Colorado Heritage Center, Mona Lambrecht, who has identified historical Black students who were not pictured in the class photos, including Colorado Law’s first known Black graduate, Franklin Henry Bryant, a member of the class of 1907. The Heritage Center has also done research on Black students from 1896 through the 1920s.

At this point of the research process, I had many names, biographical information and some context about the presence of marginalized groups at the university. I still needed more biographical information, so I began searching beyond the university.

The search continued

First, I collected genealogical information – birth and death dates, marriage certificates and places of residence – from FamilySearch, a subsidiary of the Church of Jesus Christ of Latter-day Saints. I chose FamilySearch because it provides resources similar to the better-known Ancestry.com, but for free.

The genealogical information began to tell a story.

I learned that Anderson, the university’s first known Black student, was born a free person in 1859 in Missouri, where slavery was still legal. He took up barbering there, then moved to Minnesota at the age of 26 and married his first wife. The couple moved to Boulder in 1892, where they purchased multiple lots in town. Anderson continued his work as a barber. Around 1900, he spent a few years in Fort Morgan and then moved to Sheridan, Wyoming, before settling in Los Angeles. He died there in 1918.

Next, I searched historical newspapers, primarily the Colorado Historic Newspapers collection, in hopes of finding more details about Colorado Law’s Black students. The student newspaper, The Silver & Gold, revealed that Anderson joined his classmates for a party at law professor William A. Murfree’s home.

Recounting history

An oval black-and-white photo of a Black man wearing a suit and tie.
Clarence Edward Blair earned his law degree from the University of Colorado Law School in 1956 and passed Colorado’s bar exam the same year.
Courtesy of the University of Colorado Law School.

Using the information I had gathered, in February 2025, I produced biographies of six of Colorado Law’s Black students who attended between the law school’s beginnings in 1892 and the start of affirmative action at the university in 1968.

I continued my research toward the present and published “Uncovering What Was Always There: Black History at Colorado Law” in October.

I hope that this research restores Anderson and other historical Black students to Colorado Law’s history. Perhaps, in the years to come, more staff and students at Colorado Law will know the name of the Black student in the 1899 class portrait. They will know he was the first known Black student at the university, and that he was a successful businessman and landowner.

The Conversation

Rebecca Ciota does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

ref. How a Colorado law school dug into its history to celebrate its unsung Black graduates – https://theconversation.com/how-a-colorado-law-school-dug-into-its-history-to-celebrate-its-unsung-black-graduates-268629

Why two tiny mountain peaks became one of the internet’s most famous images

Source: The Conversation – USA (2) – By Christopher Schaberg, Director of Public Scholarship, Washington University in St. Louis

The icon has various iterations, but all convey the same meaning: an image should be here. Christopher Schaberg, CC BY-SA

It’s happened to you countless times: You’re waiting for a website to load, only to see a box with a little mountain range where an image should be. It’s the placeholder icon for a “missing image.”

But have you ever wondered why this scene came to be universally adopted?

As a scholar of environmental humanities, I pay attention to how symbols of wilderness appear in everyday life.

The little mountain icon – sometimes with a sun or cloud in the background, other times crossed out or broken – has become the standard symbol, across digital platforms, to signal something missing or something to come. It appears in all sorts of contexts, and the more you look for this icon, the more you’ll see it.

You click on it in Microsoft Word or PowerPoint when you want to add a picture. You can purchase an ironic poster of the icon to put on your wall. The other morning, I even noticed a version of it in my Subaru’s infotainment display as a stand-in for a radio station logo.

So why this particular image of the mountain peaks? And where did it come from?

Arriving at the same solution

The placeholder icon can be thought of as a form of semiotic convergence, or when a symbol ends up meaning the same thing in a variety of contexts. For example, the magnifying glass is widely understood as “search,” while the image of a leaf means “eco-friendly.”

It’s also related to something called “convergent design evolution,” or when organisms or cultures – even if they have little or no contact – settle on a similar shape or solution for something.

In evolutionary biology, you can see convergent design evolution in bats, birds and insects, which all have wings but developed them independently. Stilt houses emerged in various cultures across the globe as a way to build durable homes along shorelines and riverbanks. More recently, engineers in different parts of the world designed similar airplane fuselages independent of one another.

For whatever reason, the little mountain just worked across platforms to evoke open-ended meanings: Early web developers needed a simple shorthand way to signal that something else should or could be there.

Depending on context, a little mountain might invite a user to insert a picture in a document; it might mean that an image is trying to load, or is being uploaded; or it could mean an image is missing or broken.

Down the rabbit hole on a mountain

But of the millions of possibilities, why a mountain?

In 1994, visual designer Marsh Chamberlain created a graphic featuring three colorful shapes as a stand-in for a missing image or broken link in the web browser Netscape Navigator. The shapes appeared on a piece of paper with a ripped corner. The torn-paper motif still sometimes accompanies the mountain today, but it isn’t clear when the square, circle and triangle gave way to the mountain.

A generic camera dial featuring various modes, with the 'landscape mode' – represented by two little mountain peaks – highlighted.
Two little mountain peaks are used to signal ‘landscape mode’ on many SLR cameras.
Althepal/Wikimedia Commons, CC BY

Users on Stack Exchange, a forum for developers, suggest that the mountain peak icon may trace back to the “landscape mode” icon on the dials of Japanese SLR cameras. It’s the feature that sets the aperture to maximize the depth of field so that both the foreground and background are in focus.

The landscape scene mode – visible on many digital cameras in the 1990s – was generically represented by two mountain peaks, with the idea that the camera user would intuitively know to use this setting outdoors.

Another insight emerged from the Stack Exchange discussion: The icon bears a resemblance to the Windows XP wallpaper called “Bliss.” If you had a PC in the years after 2001, you probably recall the rolling green hills with blue sky and wispy clouds.

The stock photo was taken by National Geographic photographer Charles O’Rear. It was then purchased by Bill Gates’ digital licensing company Corbis in 1998. The empty hillside in this picture became iconic through its adoption by Windows XP as its default desktop wallpaper image.

A colorful stock photo of green rolling hills, a blue sky and clouds.
If you used a PC at the turn of the 21st century, you probably encountered ‘Bliss.’
Wikimedia Commons

Mountain riddles

“Bliss” became widely understood as the most generic of generic stock photos, in the same way the placeholder icon became universally understood to mean “missing image.” And I don’t think it’s a coincidence that they both feature mountains or hills and a sky.

Mountains and skies are mysterious and full of possibilities, even if they remain beyond grasp.

Consider Japanese artist Hokusai’s “36 Views of Mount Fuji,” a series of woodblock prints from the 1830s – the most famous of which is probably “The Great Wave off Kanagawa,” where a tiny Mount Fuji can be seen in the background. Each print features the iconic mountain from a different perspective and is full of little details; all possess an ambiance of mystery.

A painting of a large rowboat manned by people on rolling waves with a large mountain in the background.
‘Tago Bay near Ejiri on the Tokaido,’ from Hokusai’s series ‘36 Views of Mount Fuji.’
Heritage Art/Heritage Images via Getty Images

I wouldn’t be surprised if the landscape icon on those Japanese camera dials emerged as a minimalist reference to Mount Fuji, Japan’s highest mountain. From some perspectives, Mount Fuji rises behind a smaller incline. And the Japanese photography company Fujifilm even took its name from that mountain.

The enticing aesthetics of mountains also reminded me of the environmental writer Gary Snyder’s 1965 translation of Han Shan’s “Cold Mountain Poems.” Han Shan – his name literally means “Cold Mountain” – was a Chinese Buddhist poet who lived in the late eighth century. “Shan” translates as “mountain” and is represented by the Chinese character 山, which also resembles a mountain.

Han Shan’s poems, which are little riddles themselves, revel in the bewildering aspects of mountains:

Cold Mountain is a house
Without beams or walls.
The six doors left and right are open
The hall is a blue sky.
The rooms are all vacant and vague.
The east wall beats on the west wall
At the center nothing.

The mystery is the point

I think mountains serve as a universal representation of something unseen and longed for – whether it’s in a poem or on a sluggish internet browser – because people can see a mountain and wonder what might be there.

The placeholder icon does what mountains have done for millennia, serving as what the environmental philosopher Margret Grebowicz describes as an object of desire. To Grebowicz, mountains exist as places to behold, explore and sometimes conquer.

The placeholder icon’s inherent ambiguity is baked into its form: Mountains are often regarded as distant, foreboding places. At the same time, the little peaks appear in all sorts of mundane computing circumstances. The icon could even be a curious sign of how humans can’t help but be “nature-positive,” even when on computers or phones.

This small icon holds so much, and yet it can also paradoxically mean that there is nothing to see at all.

Viewing it this way, an example of semiotic convergence becomes a tiny allegory for digital life writ large: a wilderness of possibilities, with so much just out of reach.

The Conversation

Christopher Schaberg does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

ref. Why two tiny mountain peaks became one of the internet’s most famous images – https://theconversation.com/why-two-tiny-mountain-peaks-became-one-the-internets-most-famous-images-268169

Supply-chain delays, rising equipment prices threaten electricity grid

Source: The Conversation – USA (2) – By Morgan Bazilian, Professor of Public Policy and Director, Payne Institute, Colorado School of Mines

High-voltage power lines run through an electrical substation in Florida. Joe Raedle/Getty Images

Two new data centers in Silicon Valley have been built but can’t begin processing information: The equipment that would supply them with electricity isn’t available.

It’s just one example of a crisis facing the U.S. power grid that can’t be solved simply by building more power lines, approving new power generation, or changing out grid software. The equipment needed to keep the grid running – transformers that regulate voltage, circuit breakers that protect against faults, high-voltage cables that carry power across regions, and steel poles that hold the network together – is hard to make, and materials are limited. Supply-chain bottlenecks are taking years to clear, delaying projects, inflating costs and threatening reliability.

Meanwhile, U.S. electricity demand is surging from several sources – electrification of home and business appliances and equipment, increased domestic manufacturing and growth in AI data centers. Without the right equipment, these efforts may take years longer and cost vast sums more than planners expect.

Not enough transformers to replace aging units

Transformers are key to the electricity grid: They regulate voltage as power travels across the wires, increasing voltage for more efficient long-distance transmission, and decreasing it for medium-distance travel and again for delivery to buildings.

The National Renewable Energy Laboratory estimates that the U.S. has about 60 million to 80 million distribution transformers in service. More than half of them are over 33 years old – approaching or exceeding their expected lifespans.

Replacing them has become costly and time-consuming, with utilities reporting that transformers cost four to six times what they cost before 2022, in addition to the multiyear wait times.

To meet rising electricity demand, the country will need many more of them – perhaps twice as many as already exist.

A person drives a forklift near a group of large metal canisters.
Even smaller transformers like these are in high demand and short supply.
AP Photo/Mel Evans

The North American Electric Reliability Corporation says the lead time, the wait between placing an order and the product being delivered, hit roughly 120 weeks – more than two years – in 2024, with large power transformers taking as long as 210 weeks – up to four years. Even smaller transformers used to reduce voltage for distribution to homes and businesses are back-ordered as much as two years. Those delays have slowed both maintenance and new construction across much of the grid.

Transformer production depends heavily on a handful of materials and suppliers. The cores of most U.S. transformers use grain-oriented electrical steel, a special type of steel with particular magnetic properties, which is made domestically only by Cleveland-Cliffs at plants in Pennsylvania and Ohio. Imports have long filled the gap: Roughly 80% of large transformers have historically been imported from Mexico, China and Thailand. But global demand has also surged, tightening access to steel, as well as copper, a soft metal that conducts electricity well and is crucial in wiring.

In partial recognition of these shortages, in April 2024, the U.S. Department of Energy delayed the enforcement of new energy-efficiency rules for transformers, to avoid making the situation worse.

Further slowing progress, these items cannot be mass-produced. They must be designed, tested and certified individually.

Even when units are built, getting them to where they are needed can be a feat. Large power transformers often weigh between 100 tons and 400 tons and require specialized transport – sometimes needing one of only about 10 suitable super-heavy-load railcars in the country. Those logistics alone can add months to a replacement project, according to the Department of Energy.

A massive railcar carries a large metal box.
Enormous railcars like this one in Germany are often needed to transport high-voltage transformers from where they are manufactured to where they are used.
Raimond Spekking via Wikimedia Commons, CC BY-SA

Other key equipment

Transformers are not the only grid machinery facing delays. A Duke University Nicholas Institute study, citing data from research and consulting firm Wood Mackenzie, shows that high-voltage circuit-breaker lead times reached about 151 weeks – nearly three years – by late 2023, roughly double pre-pandemic norms.

Facing similar delays are a range of equipment types, such as transmission cables that can handle high voltages, switchgear – a technical category that includes switches, circuit breakers and fuses – and insulators to keep electricity from going where it would be dangerous.

For transmission projects, equipment delays can derail timelines. High-voltage direct-current cables now take more than 24 months to procure, and offshore wind projects are particularly strained: Orders for undersea cables can take more than a decade to fill. And fewer than 50 cable-laying vessels operate worldwide, limiting how quickly the cables can be installed, even once they are manufactured.

Supply-chain strains are hitting even the workhorse of the power grid: natural gas turbines. Manufacturers including Siemens Energy and GE Vernova have multiyear backlogs as new data centers, industrial electrification and peaking-capacity projects flood the order books. Siemens recently reported a record US$158 billion backlog, with some turbine frames sold out for as long as seven years.

A large industrial building.
The Cleveland-Cliffs steelworks in Ohio makes a specialized type of steel that is crucial for making transformers.
AP Photo/Sue Ogrocki

Alternate approaches

As a result of these delays, utility companies are finding other ways to meet demand, such as battery storage, actively managing electricity demand, upgrading existing equipment to produce more power, or even reviving decommissioned generation sites.

Some utilities are stockpiling materials for their own use or to sell to other companies, which can shrink delays from years to weeks.

There have been various other efforts, too. In addition to delaying transformer efficiency requirements, the Biden administration awarded Cleveland-Cliffs $500 million to upgrade its electrical-steel plants – but key elements of that grant were canceled by the Trump administration.

Utilities and industry groups are exploring standardized designs and modular substations to cut lead times – but acknowledge that those are medium-term fixes, not quick solutions.

Large government incentives, including grants, loans and guaranteed-purchase agreements, could help expand domestic production of these materials and supplies. But for now, the numbers remain stark: roughly 120 weeks for transformers, up to four years for large units, nearly three years for breakers and more than two years for high-voltage cable manufacturing. Until the underlying supply-chain choke points – steel, copper, insulation materials and heavy transport – expand meaningfully, utilities are managing reliability not through construction, but through choreography.

The Conversation

Kyri Baker receives funding from the U.S. Department of Energy, the National Science Foundation, and The Climate Innovation Collaboratory. She is a visiting researcher at Google DeepMind. The views and opinions expressed in this article are solely those of the author and do not necessarily reflect the views of the author’s employer or any affiliated organizations.

Morgan Bazilian does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

ref. Supply-chain delays, rising equipment prices threaten electricity grid – https://theconversation.com/supply-chain-delays-rising-equipment-prices-threaten-electricity-grid-269448