South Africa’s student debt trap: two options that could help resolve the problem

Source: The Conversation – Africa – By Michele Van Eck, Associate Professor in the School of Law at the University of the Witwatersrand, who specialises in contracts, legal ethics and education

Education is widely regarded as the road to a better life. Yet the rising cost of tertiary education means many students can only go to university if they get financial aid, bursaries or loans.

South Africa’s National Student Financial Aid Scheme (NSFAS) offers students bursaries or loans which provide allowances for tuition and registration fees, books, travel and accommodation. But this type of funding applies only under specific and limited conditions. Many students fall outside its scope.

Students who are not enrolled for a qualification that is approved by the Department of Higher Education, or who wish to study for a second undergraduate qualification, or who are studying at private institutions, don’t qualify to get the funding.

The result is that many students can’t keep up with paying their university fees. In 2025 South African universities collectively held about R9.3 billion (US$528 million) in student debt that had remained unpaid since 2023.

Universities have been trying different methods to pressure students and graduates to pay outstanding student debts. This has included the withholding of degree certificates, academic transcripts and marks.

Universities require funding to operate effectively, pay staff and maintain infrastructure. But withholding academic documents from indebted students may prevent them from securing employment – the very means by which they could repay their debts. These practices, while commercially defensible, can thus work against the goal of recovering the money. According to Unesco, “student loans generally have catastrophic effects for students and families across the world”.

It seems reasonable to conclude that student debt collection practices may entrench poverty and make it harder for graduates to get jobs.

From recent court cases, it appears that this issue is especially pronounced in the legal profession. Law graduates face additional scrutiny, as admission to the profession requires not only academic qualifications but also proof of moral character. The Legal Practice Act 28 of 2014 mandates that candidates be “fit and proper” individuals, embodying values such as honesty, integrity and reliability. Outstanding debt may be seen as conflicting with the values of honesty and integrity.

Fulfilling financial obligations can indeed have a bearing on ethics (a field I study as a legal scholar). But as I argue in a recent paper, it’s necessary to distinguish between graduates who are unwilling to pay and those who are genuinely unable to.

I also propose a couple of ways this could be achieved so that universities get their money and graduates get their start in working life.

How universities collect debt

Unlike South Africa, some countries have taken steps to deal with the impact of student debt.

My paper highlights that, in the United States, several states don’t allow universities and colleges to withhold degree certificates and transcripts (records of academic activity) over unpaid fees. They recognise that those debt-collection practices hinder employment and make inequality worse. Instead, they promote other strategies, like income-based repayment plans or hardship policies for students in financial distress.

In the United Kingdom, universities are advised not to use academic sanctions to recover non-academic debts, such as accommodation fees. Consumer protection laws treat students as consumers, allowing them to challenge unfair contractual terms. If a university’s contract includes provisions to withhold degrees for unpaid fees, students may contest these clauses as unjust.

South Africa lacks similar legal safeguards. Each university sets its own rules. These range from students not being able to graduate unless all fees are paid, to the withholding of certificates from students not in good financial standing, and even preventing students from viewing their examination scripts if they owe money. Some examples may be found at the University of the Free State (page 27), University of Pretoria (page 16) and University of the Witwatersrand.

Law students face additional hurdles

In the legal profession, financial responsibility is often tied to ethical conduct. Lawyers manage trust accounts, client funds and sensitive legal matters. Integrity is non-negotiable.

However, the inability to pay student debts is not inherently dishonest. Some students fall into debt due to circumstances beyond their control, like family obligations, socio-economic conditions, unemployment or the sheer cost of education.

South African courts have grappled with outstanding student debts when it comes to admitting law graduates to the profession. The courts’ approach has been inconsistent.

In Ex Parte Tlotlego the court emphasised that poverty should not bar entry into the legal profession. It said courts should not require proof of debt repayment arrangements, which would be unfair to students from disadvantaged backgrounds.

But in Ex Parte Makamu the court found that a law graduate must still demonstrate how they intend to settle their debts to satisfy the ethical standards of honesty and integrity.

More recently, Ex Parte Galela reinforced this view. The court declined the application for admission because it wasn’t clear why the law graduate hadn’t paid off their debt. It suggested that financial irresponsibility could reflect poorly on the graduate’s character.

The courts’ approach and general student debt-collection practices often fail to differentiate between students who cannot pay and those who choose not to. This distinction is vital. A student who ignores their debt without justification may raise ethical concerns. But a student who is willing to pay yet lacks the financial means should not be penalised.

Solutions

The solution lies in balancing the financial interests of universities with the socio-economic realities of students. Student debts must be repaid, but repayment mechanisms must also be fair and sustainable.

There have been attempts to find a solution, such as the draft Student Relief Bill, which proposes setting up a Student Debt Relief Fund. But that might place unsustainable pressure on the economy.

I have another proposal: allowing graduates to receive their degree certificates regardless of outstanding debt, along with two legislative interventions. These are:

  1. Automatic garnishee orders: upon graduation, an automatic garnishee order (a court order directing an employer to deduct a certain amount from an employee’s income) could be placed on future salaries of a graduate. This would ensure that student debt is repaid over time; a simple repayment sketch follows this list.

  2. Amendment to the Prescription Act 68 of 1969: This could exclude student debt from prescribing (becoming too old to collect). Normally, such a debt would prescribe after three years. An amendment would allow universities to recover debts for the duration of graduates’ employment, not just within three years.
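To make the first proposal concrete, here is a minimal sketch, in Python, of how a fixed-percentage salary deduction would clear a debt over time. Every figure in it – the debt, the salary, the 10% deduction rate, the interest rate – is a hypothetical illustration, not a value drawn from this article or from South African law.

```python
# Minimal sketch of an automatic garnishee order: a fixed percentage of
# salary is deducted at source each month until the debt is cleared.
# All figures below are hypothetical illustrations.

def months_to_repay(debt, monthly_salary, deduction_rate, annual_interest=0.0):
    """Count the months a fixed-percentage salary deduction takes to clear a debt."""
    monthly_deduction = monthly_salary * deduction_rate
    if monthly_deduction <= debt * annual_interest / 12:
        raise ValueError("Deduction never outpaces interest; the debt would grow forever.")
    months = 0
    while debt > 0:
        debt += debt * annual_interest / 12  # interest accrues on the balance
        debt -= monthly_deduction            # employer deducts at source
        months += 1
    return months

# Hypothetical example: R60,000 owed, R15,000 monthly salary, 10% deduction.
print(months_to_repay(60_000, 15_000, 0.10))                        # 40 months, no interest
print(months_to_repay(60_000, 15_000, 0.10, annual_interest=0.07))  # 46 months at 7% interest
```

Even this toy model shows why the second proposal matters alongside the first: at modest salaries, repayment can stretch well beyond the three years after which such a debt would normally prescribe.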

These measures would uphold the financial sustainability of universities while protecting the dignity and future employment prospects of graduates.

The Conversation

Michele Van Eck does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

ref. South Africa’s student debt trap: two options that could help resolve the problem – https://theconversation.com/south-africas-student-debt-trap-two-options-that-could-help-resolve-the-problem-262555

Collaborative robots: co-workers like any others?

Source: The Conversation – in French – By Thierry Colin, Professor of Management Science, Université de Lorraine

The cobot concept was invented by the automotive industry. The goal: to create robots able to work alongside humans without compromising safety. Gumpanat/Shutterstock

Collaborative robots, or cobots, don’t just replace humans: they can work with them. What is their impact on the division of labor?


Robots are ubiquitous in industrial production. Their spread has always been at the heart of human, social, economic and managerial issues, raising many questions from early on.

A new question is emerging today with the arrival of cobots. Able to work not only in place of humans but also alongside them on the shop floor, are collaborative robots becoming colleagues like any others? Light, flexible, relatively affordable and user-friendly, are they likely to challenge the established codes of the division of labor?

Our recent research, based on case studies involving interviews and on-site observations, has identified four types of cobot use: simultaneous, alternating, flexible and coexistence configurations. This work is part of the Impact project “C-Shift” (Cobots in the Service of Human activity at work), which studies the impact of deploying intelligent collaborative devices such as cobots in the context of the challenges facing the industry of the future.

What is a cobot?

The term cobot is a contraction of the English words “collaborative” and “robot”. Its invention is credited to US academics who sought both to limit musculoskeletal disorders and to improve productivity in automotive plants – at Ford and General Motors.

A collaborative robot is a robot that can be installed in the same workspace as human operators, without a physical safety barrier. Cobots are equipped with sensors and software that slow their movements or stop them completely if a collision risk is detected. They can perform most industrial operations – screwing, drilling, sanding, welding.

Cobots are not designed for predefined uses. They are characterized above all by their flexibility. Easily programmable through tablet-based interfaces, they are also easy to move around. The same machine can pack cosmetics into boxes, carry out camera-based quality control at the end of a production line or weld metal parts.

A market set to quadruple by 2030

Cobots are no longer mere laboratory prototypes. They are now commonly used in factories of all sizes and in a variety of sectors – automotive, logistics, health care, food processing – although their adoption is still far from widespread. Cobots are estimated to account for about 3% of global robot sales and, according to ABI Research, the cobot market could quadruple by 2030.

Chart: forecast growth of the global collaborative robot (cobot) market from 2020 to 2030, in millions of US dollars.
Statista and ABI Research, FAL

Cobots are not intended to replace traditional robots, because of several limitations:

  • Their payload is limited: their light weight and small size prevent them from handling heavy objects.

  • Their operating speed is deliberately capped to ensure the safety of the humans working around them. This limits their productivity and makes them ill-suited to very large-scale production.

  • Installed in the same spaces as humans, cobots raise safety issues when fitted with dangerous tools, such as cutting implements or welding torches.

Their potential lies above all in new uses and a different approach to automation. In one sheet-metal SME that was the subject of a case study, welds on large recurring production runs are performed by a traditional welding robot, medium-sized runs are handled by cobots, and small runs or particularly complex welds are left to human welders.

Four ways cobots are used in factories

Although cobots can, by definition, work in the same space as human operators, their uses are not necessarily collaborative. Our research allowed us to distinguish four configurations.

The C-SHIFT project on cobots and the industry of the future, from the Université de Lorraine.
Université de Lorraine, provided by the author

Coexistence with humans

At one extreme, cobots substitute for operators, taking over the most strenuous movements and/or boosting productivity. We call this use coexistence, because there is no direct interaction with humans.

In the automotive industry, cobots screw parts underneath vehicles, where working positions are particularly awkward for operators.




Read more:
How can robots be made more adaptable to the needs of their human collaborators?


Simultaneous configuration

In the simultaneous configuration, cobots and operators work together, mutually adapting their movements, side by side or face to face. While this configuration is readily achievable in the laboratory, it is quite rare in real-world conditions, because of the time needed to fine-tune it and the mandatory safety certification.

At one automotive parts supplier, the cobot positions a car steering column with precision, sparing workers from carrying loads and absorbing shocks, while the operator performs screwing tasks on the part.

Alternating configuration

The alternating configuration corresponds to a situation where the operator uses the cobot but does not interact with it directly. The operator programs it for a series of tasks and lets it work alone, in a separate space. This configuration offers better safety for the human operator, who optimizes the division of work between what is delegated to the cobot and what they continue to do themselves.

At a manufacturer of heat exchangers for industrial gas production, welders delegate the simplest welds to cobots and concentrate on more complex or less repetitive ones.

Flexible configuration

In the flexible configuration, the division of work between humans and cobots evolves over time according to the workload. Once the technology is mastered, cobots can be reassigned to different activities as requirements change. The same cobot can be used for a period to load machines, then retooled for sanding, then for painting operations, and so on.

Its effectiveness depends on the capacity of operators, technicians and engineers to work together to constantly invent new uses. This configuration seems particularly well suited to SMEs where production runs are short and variable.

Cobots and AI

Cobots are part of a vast technological movement. The context of Industry 5.0 and the growing use of AI will make cobots even more adaptable, perhaps even capable of improvisation. They could be integrated into “cyber-physical production systems” – highly integrated systems in which computers directly control production equipment.

This integration is not straightforward at this stage. If it proves possible, the dominant use may well be the capacity to “fill the gaps” left by traditional automation, relegating flexibility and the collaborative dimension to the background. Conversely, artificial intelligence could support the development of flexible configurations that rely on collaboration within work teams.

While these technological developments open up many possibilities, they leave open the question of how cobots will be used in real-world settings. Future trends will depend on the choices made about the division of labor and skills.

The coexistence and simultaneous configurations ultimately have few implications for the evolution of skills or for the ways engineers, technicians and operators collaborate. Conversely, choosing the flexible or alternating configurations requires operators to develop new skills, particularly in programming, and new forms of vertical collaboration to emerge.

In other words, cobots do less to reshuffle the cards of human-machine collaboration than to invite organizations to rethink how humans collaborate with one another.

The Conversation

Thierry Colin received funding from the Initiative d’Excellence Lorraine (LUE) under the France 2030 programme, reference ANR-15-IDEX-04-LUE. He also received support from ANACT under its call for expressions of interest (AMI) « Prospective pour accompagner la transition des systèmes de travail ».

Benoît Grasser received funding from the Initiative d’Excellence Lorraine (LUE) under the France 2030 programme, reference ANR-15-IDEX-04-LUE. He also received support from ANACT under its call for expressions of interest (AMI) « Prospective pour accompagner la transition des systèmes de travail ».

ref. Les robots collaboratifs, des collègues de travail comme les autres ? – https://theconversation.com/les-robots-collaboratifs-des-collegues-de-travail-comme-les-autres-260231

The War of the Bucket: What one medieval battle tells us about history and myth

Source: The Conversation – Canada – By Kenneth Bartlett, Professor, Department of History, University of Toronto

A depiction of the War of the Bucket with victorious Modenese troops toting the bucket taken from the rival city of Bologna. (Museum of the History of Bologna)

Se non è vero, è ben trovato (even if it isn’t true, it makes a good story). This traditional Italian observation reflects a good deal of human history.

One such colourful event was the 14th-century War of the Bucket between the Italian cities of Bologna and Modena. The story is that after years of tension, a group of Modenese entered Bologna and stole the bucket from the town well.

The Bolognese demanded its return, but the ruler of Modena refused, and war ensued, culminating in the Modenese victory at the Battle of Zappolino in 1325.

It is an engaging story, but is it fact?

The reality is that the two cities were on either side of an ideological division that characterized the northern Italian states from the early 12th century. At the root of the conflict was a struggle for power and authority over Europe that pitted the Holy Roman Empire against the papacy.

The Guelphs and Ghibellines

a medieval era painting of two sets of men facing each other on a city street brandishing swords and pointing guns at each other.
Depiction of a 14th-century fight between Guelph and Ghibelline factions in Bologna, from the ‘Croniche di Luccha’ by Italian author Giovanni Sercambi.
(Giovanni Sercambi)

After the collapse of the Western Roman Empire in the fifth century, Italy was a mosaic of small states trying to defend their territory while attempting to expand at the expense of their neighbours.

Rulers of city states sought alliances with powers who could defend and legitimize their rule. But who had the power to grant the right to rule in these often unstable, violent times?

One claimant was the Holy Roman Emperor, who claimed the authority of the ancient Roman Empire after the coronation of Charlemagne in St. Peter’s Basilica in 800 CE.

The other was the pope, who claimed universal dominion over Christendom as the heir of St. Peter, Christ’s vicar on Earth and the legal recipient of Roman imperial authority.

The papacy’s legal claim was based on one of history’s greatest forgeries: the Donation of Constantine. This was purported to be a document that Constantine I, the first Christian emperor of Rome, issued to Pope Sylvester I before the emperor moved his capital to Constantinople (present-day Istanbul) in 330 CE. It granted full imperial authority to the pope, supposedly in gratitude for curing the emperor of leprosy, along with the role of leading a Latin Christian empire in the West.

Although there is no evidence of the donation existing before the eighth century, it was widely accepted. It was not proven to be a forgery until the mid-15th century when Italian scholar and priest Lorenzo Valla revealed it to be fraudulent through textual analysis. Nevertheless, it was still referenced well into the 16th century, including in the Sala di Costantino (Hall of Constantine) at the Apostolic Palace in the Vatican.

Those who saw ultimate authority in the papacy were called Guelphs, an Italianization of the name of the House of Welf, which thwarted claimants to the imperial throne. Those who supported the Holy Roman Emperors were called Ghibellines, another Italianization of a German word: Waiblingen, the name of the castle and the battle cry of the House of Hohenstaufen, the family that most seriously threatened the papacy in the 12th century.

This ideological division was not only an abstract reflection of divergent concepts of sovereignty. It was a practical division often determined by class, geography, events and opportunity. If your enemy was a Guelph, you were a Ghibelline; if a usurper overthrew a rival who was Ghibelline, he claimed to be Guelph, generating immediate support from within and outside the city.

a large renaissance fresco with many characters in a large room. On the left a seated man in papal cassock is handed a gold figurine by a kneeling man.
The ‘Donation of Constantine’ in the Apostolic Palace, painted by assistants of the Renaissance-era Italian painter Raphael between 1520 and 1524. The painting depicts a kneeling Emperor Constantine offering Pope Sylvester authority over the Western Roman Empire.
(Vatican Museums)

The War of the Bucket

This struggle between the Guelphs and Ghibellines was the real issue in the Bucket War. Bologna was a leading Guelph city, later forming part of the Papal States and guarding passes through the Apennine Mountains of Italy. Modena was a state that depended on support from the Holy Roman emperors, who had entered Italy and granted authority to their supporters.

With the two cities on opposite edges of this divide, tension was inevitable, giving rise to the story of the purloined bucket. But the reality was much deeper and more dangerous.

A far more likely cause of the war was not the theft of a bucket but the capture of the Bolognese fortress of Monteveglio by Modena in September 1325, a serious threat to Bolognese defenses and a reason to seek redress.

A photo of an old wooden bucket with a metal handle
The stolen bucket on display at the Palazzo Comunale in Modena.
(Palazzo Comunale di Modena)

After years of border incursions, the capture of Monteveglio was the final straw. Two cities and their rival world views were in conflict, so every small victory was celebrated.

In November 1325, a greatly outnumbered Modenese army met the Bolognese at Zappolino. The pope had excommunicated the Modenese leader, declaring him a rebel against God.

The Bolognese had superior numbers but were largely untrained, whereas the Modenese had professional German soldiers sent by the emperor. The result was a decisive Modenese victory, with many Bolognese casualties.

Such victories often occasion popular mythologies, and the bucket story was one. It is far more likely that the bucket was taken after the battle, not before, and its symbolism was codified in the 17th century in a mock-epic poem, La Secchia rapita (The Stolen Bucket), by the poet Alessandro Tassoni.

To this day, many people continue to believe the story. In Modena, the original bucket is proudly displayed in the town hall, and a replica hangs in the Ghirlandina Tower of the cathedral, where the original had been kept for centuries.

History and myth are often merely different narrative techniques, and both can be used to stimulate national pride and cohesion and to celebrate events that defined a people. This is the significance of the War of the Bucket, a real war with real causes now characterized by charming, if unlikely, actions of distant but not forgotten ancestors.

The Conversation

Kenneth Bartlett does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

ref. The War of the Bucket: What one medieval battle tells us about history and myth – https://theconversation.com/the-war-of-the-bucket-what-one-medieval-battle-tells-us-about-history-and-myth-264751

Why journalists are reluctant to call Trump an authoritarian – and why that matters for democracy

Source: The Conversation – USA – By Karrin Vasby Anderson, Professor of Communication Studies, Colorado State University

A free election can still result in authoritarian rule. Photo illustration: Douglas Rissing, iStock/Getty Images Plus

In an authoritarian state, the leader engages in unconstitutional or undemocratic practices for the purpose of consolidating power.

Key components of authoritarianism include rejecting democratic rules; denying the legitimacy of opponents; tolerating or encouraging political violence; and curtailing the civil liberties of opponents.

Since he took office for a second time, President Donald Trump has sent National Guard troops to Los Angeles and Washington, named other cities run by Democrats as targets for military intervention, deployed masked and unidentifiable agents in immigration raids, explicitly threatened the city of Chicago with a military invasion and used government power to persecute his perceived political enemies.

But many journalistic outlets have yet to call him what he is – an authoritarian.

As a political communication scholar, I study how media framing shapes people’s understanding of the world.

Because authoritarianism is most visible in hindsight, people often don’t recognize it until it’s too late. Erica Chenoweth, a Harvard political scientist, notes that when it comes to democratic backsliding, “there are no bright lines … people often find out the world they’re in after the fact.”

That’s why it’s particularly important for journalists to label authoritarians as such when the evidence warrants. In Trump’s case, I believe the U.S. is well past that point.

A group of armed soldiers walk in front of a building on which hangs a large banner of Donald Trump.
Armed National Guard soldiers patrol near the Labor Department in Washington, where a banner of President Donald Trump is displayed, Aug. 26, 2025.
AP Photo/J. Scott Applewhite

Trump’s authoritarianism

Scholars with expertise in authoritarianism have been sounding the alarm about Trump for years.

Steven Levitsky and Daniel Ziblatt’s book “How Democracies Die” describes how, during the 2016 campaign and his first presidential term, Trump exhibited the key indicators of authoritarian behavior. He undermined the legitimacy of elections Republicans lost, baselessly described his rivals as criminals, refused to unambiguously condemn violence committed by his supporters, and threatened to punish critics and members of the media.

Levitsky and Ziblatt argue that “no other major presidential candidate in modern U.S. history, including Richard Nixon, has demonstrated such a weak public commitment to constitutional rights and democratic norms.”

That intensified when Trump returned to office in 2025.

Levitsky and Lucan A. Way documented Trump’s “path to American authoritarianism” for the journal Foreign Affairs in early 2025. In March, Levitsky told New York magazine that things were going worse than even he expected, asserting, “We’re pretty screwed.”

Levitsky is not alone in that view. In a February 2025 survey of political scientists conducted by Bright Line Watch – an academic organization that researches democratic health – the percentage of scholars who said that the U.S. “mostly or fully” meets the standard for democratic health plummeted.

That was before Trump, via social media, promised to go to war in Chicago. When asked about his post, Trump said, “We’re not going to war. We’re going to clean up our cities,” but he did not back away from the intent to deploy troops against the wishes of Illinois Governor JB Pritzker.

Pritzker responded to Trump’s post by noting, “This is not a joke. This is not normal.”

On Sept. 7, 2025, New York Times opinion columnist Ezra Klein itemized some of Trump’s authoritarian actions, concluding, “This is not just how authoritarianism happens. This is authoritarianism happening.”

A social media post with President Trump dressed as a character from Apocalypse Now.
President Donald Trump’s Sept. 7 post threatening the city of Chicago with federal intervention.
Truth Social Donald Trump account

What journalists have been saying

Although other opinion journalists like Jamelle Bouie, M. Gessen, Jonathan Chait and nearly every MSNBC anchor have been labeling Trump an authoritarian for some time, much hard news coverage of the Trump administration has not.

When Trump deployed troops to Washington, The Atlantic’s Quinta Jurecic dismissed it as “farcical” and “not a likely prelude to full authoritarian takeover.”

A CNN analysis similarly minimized the action as a “gambit,” a “distraction” and a “neat political trick.” CNN characterized concerns about authoritarianism as “hyperbolic warnings of looming tyranny that circulate all day on liberal media programs — whatever Trump does” and asserted that such reports “don’t really help voters understand what is going on.”

The New York Times’ Aug. 3 story by Peter Baker on Trump’s “tendency to suppress facts he doesn’t like and promote his own version of reality” bore a headline that read “Trump’s Efforts to Control Information Echo an Authoritarian Playbook,” suggesting that his actions were authoritarian without applying the label to Trump directly.

During the April 14, 2025, broadcast of CNN News Central, anchor Jessica Dean spoke with Nikolas Bowie, a Harvard Law School professor participating in a lawsuit against the Trump administration.

Bowie repeatedly called Trump an authoritarian for illegally freezing federal research funding awarded to Harvard.

When Dean noted that the “Trump administration says it’s doing all of this in an effort to combat antisemitism on campus,” Bowie responded that “antisemitism is really just a pretext for what is really an authoritarian attack on higher education.” Federal Judge Allison Burroughs later agreed with that interpretation in her ruling against the Trump administration.

Dean, however, sidestepped that interpretation, saying, “What I’m hearing is you think that enough was done to combat antisemitism, that this is about something else.”

A screenshot of a headline that reads 'Do Trump's D.C. moves echo an authoritarian playbook?'
The headline on a recent NPR story, echoing other journalism outlets’ use of the terms ‘echo’ and ‘authoritarian playbook.’
NPR

Competitive authoritarianism

There are reasons why journalistic outlets may hesitate to identify the “something else” as authoritarianism, or portray it as a looming threat rather than a current danger.

Trump’s propensity to sue journalists, and large media corporations’ decisions to settle even when the law was on their side, have likely made journalists and editors hesitant to describe Trump as an authoritarian.

And the imperative for balance sometimes results in a “both sides-ism” that misrepresents what authoritarianism actually looks like.

When California Gov. Gavin Newsom gave a speech asserting Trump’s military response to immigration protests in California was an assault on democracy, the New York Times covered it, quoting Newsom at length about the danger Trump presented. The article also quoted Republicans who alleged that Newsom’s public health directives during the COVID-19 pandemic made him “the ultimate authoritarian.”

But the particular nature of the authoritarianism the U.S. is facing in the 21st century also plays a role.

Levitsky and Way have written about “competitive authoritarianism,” a new version of authoritarianism that doesn’t look like 20th-century fascism.

Many laypeople associate the word authoritarianism with military dictatorships and totalitarian rule. In competitive authoritarian regimes, however, there’s a constant push and pull between democratic and autocratic impulses. Levitsky and Way write that elections are held, but they may not be fair. The authoritarian regime uses power gained democratically to break democratic norms, undermine democratic institutions and tilt the playing field in its own favor.

Constraining free speech

Journalistic norms of independence can pressure even ethical journalists into acquiescing to competitive authoritarianism: because any coverage that falls outside the authoritarian’s approved message gets characterized as resistance, journalists hedge to avoid looking partisan.

Paramount settled what one free speech advocate described as a “widely derided lawsuit brought by Donald Trump against ’60 Minutes,’” and CBS recently pledged to stop editing recorded interviews on “Face the Nation” after complaints lodged by Homeland Security Secretary Kristi Noem.

The Paramount and CBS cases suggest that, left unchallenged, a competitive authoritarian leader will use their leverage to influence what should be independent journalism.

Words matter. And how a democratic society responds to its leaders can make the difference between a free society and one in which a leader increasingly suppresses the voices, rights and will of the governed.

The Conversation

Karrin Vasby Anderson does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

ref. Why journalists are reluctant to call Trump an authoritarian – and why that matters for democracy – https://theconversation.com/why-journalists-are-reluctant-to-call-trump-an-authoritarian-and-why-that-matters-for-democracy-263778

Bail reforms across the US have shown that releasing people pretrial doesn’t harm public safety

Source: The Conversation – USA – By Henry F. Fradella, Professor of Criminology and Criminal Justice, Arizona State University

Nine of every 10 detained defendants in the U.S. remain in jail awaiting trial because they cannot pay bail money. AP Photo/Rich Pedroncelli, File

President Donald Trump recently signed two executive orders targeting “cashless bail,” the policies that permit the release of people arrested for crimes pending trial without requiring them to pay money.

One executive order directs arrestees in Washington, D.C. to be “held in Federal custody to the fullest extent permissible under applicable law.” The other order calls for the withholding of federal funds to states that “substantially eliminated cash bail as a potential condition of pretrial release from custody” for many offenses.

Cashless bail does not mean that everyone is simply released unconditionally to await trial. Instead, judges have the ability to detain people who pose a specific threat to another person or the community. And they can impose conditions on those who are released, including stringent measures like electronic monitoring.

Trump has criticized cashless bail policies for threatening public safety because they can release dangerous people from detention.

As legal and criminal justice scholars, we have studied bail reforms across the United States.

We have found that jurisdictions that reduce reliance on cash bail can maintain public safety. And they can also curtail mass pretrial incarceration that overwhelmingly locks up people who are too poor to afford bail. That includes three jurisdictions that we examine in detail below: Washington, D.C., New Jersey and Illinois.

The rise of bail bonds

Bail is a promise by an accused person to show up at court hearings in exchange for being released from custody pending the resolution of criminal charges.

In many U.S. jurisdictions, however, people pledge money or property as collateral for their pretrial release. Some people are released unconditionally, referred to as release on their own recognizance. Others are denied pretrial release altogether because they pose a flight risk, a risk of failing to appear in court or a danger to the community.

Historically, the bail system worked on promises and one’s reputation, not money. Money bail became more common around the turn of the 20th century with the rise of commercial bail bonds, in which a bail bond business would front the bail money, charging the arrestee a portion of the bail amount as a fee.

This created a system in which people with money could buy their pretrial freedom for many crimes – even serious felonies. Conversely, between 60% and 90% of people remained jailed despite the availability of bail bonds. This was not because they were dangerous, but because they lacked the financial resources to come up with the 10% fee needed to purchase their pretrial freedom.
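As a rough sketch of those economics: a defendant who can post the full bail amount gets that money back after appearing in court, while one who must turn to a bond business pays a nonrefundable fee. In the Python sketch below, the 10% fee comes from the article; the bail amount is a hypothetical example.

```python
# Sketch of the out-of-pocket cost of pretrial release under money bail.
# The 10% bond fee reflects the figure cited in the article; the bail
# amount is a hypothetical example.

def cost_of_release(bail, can_post_full_bail, bond_fee_rate=0.10):
    """Return what a defendant ultimately pays to await trial at home."""
    if can_post_full_bail:
        # Full bail is posted as collateral and refunded after court appearances.
        return 0.0
    # A commercial bond business fronts the bail and keeps its fee.
    return bail * bond_fee_rate

print(cost_of_release(10_000, can_post_full_bail=True))   # 0.0 – bail refunded
print(cost_of_release(10_000, can_post_full_bail=False))  # 1000.0 – nonrefundable fee
```

A defendant who cannot raise even that fee stays in jail, which is the inequity the bail reforms discussed later in this article aim to address.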

The problems with cash bail

On any given day, approximately 664,000 people are locked up in jails in the United States. Only about 30% of these people are serving sentences following criminal convictions. The remaining 70% in jail are awaiting trial.

Typically, this is not because a court has judged them a risk to public safety. And usually it’s not because a judge decided they are unlikely to appear at scheduled court hearings.

Instead, they remain in jail because they cannot pay the money bail that has been set in their cases. This can have serious or even tragic consequences, such as lost homes and jobs, and even suicides. Indeed, suicide is the leading cause of death in jails, and pretrial detainees are six times more likely to die by suicide than those serving jail time after convictions.

A fluorescent sign reads 'bail bonds.'
In 2017, the jailing of people who could not afford bail cost U.S. taxpayers $38 million daily.
AP Photo/Kathy Willens, File

A 2012 study in New York City found that “even when bail is set comparatively low – at $500 or less, as it is in one-third of nonfelony cases – only 15% of defendants are able to come up with the money to avoid jail.” In 2017, the jailing of people who could not afford bail cost taxpayers US$38 million each day – an amount that exceeds $50 million today, adjusted for inflation.

And it has allowed commercial bail businesses – and the nine insurance companies that back the roughly 30 corporations that underwrite more than $14 billion in bail bonds issued each year – to earn profits in excess of $2.4 billion annually.

Conversely, money bail systems allow people with financial means, even those who might be dangerous or pose a genuine risk of flight, to be released because they can afford to post bail.

Bail reform in Washington

Washington abolished cash bail in the early 1990s. The city replaced it with a system that overwhelmingly pairs pretrial release with levels of supervision tied to the risk that a court determines a defendant might pose. As a result, roughly 87% of all people arrested in Washington are released pending trial without needing to pay or pledge any money.

Despite the lack of money bail, the city has experienced high court appearance rates and low reoffending rates. Between 2019 and 2024, 89% of defendants awaiting trial in the city showed up to their scheduled court appearances – and 90% remained arrest-free. Even among those accused of violent offenses, 98% were not rearrested for violent crimes while on pretrial release.

Washington shows that when people are given the tools and reminders they need, they are overwhelmingly likely to comply with court obligations. That includes phone calls, text messages and email reminders about court dates or access to pretrial services. Moreover, these results illustrate that alternatives to cash bail can function effectively, without compromising public safety.

The Illinois and New Jersey experiences

New Jersey overhauled its bail system in 2017 by virtually eliminating cash bail. The state replaced it with a framework that relies on judicial assessments and pretrial monitoring to decide whether defendants should be detained or released.

Within two years of New Jersey’s bail reforms, the state’s pretrial jail population decreased by roughly 44%. Most notably, the state did this by reducing the number of defendants held in jail for more than a day or two.

This reduction was not accompanied by an increase in failures to appear in court or in new criminal charges.

A recent Drexel University–Boston University study echoed those findings, confirming that the decline in incarceration came without increases in gun violence. The study also found that the number of people held on low bail amounts – $2,500 or less – fell sharply, from more than 12% of the jail population in 2012 to just 0.4% by 2021.

Early data analyses after Illinois eliminated cash bail in September 2023 show that jail populations declined with no uptick in failure-to-appear rates.

Further, violent crime and property crime rates in Cook County have decreased since the law took effect, including a 15% reduction in Chicago.

Broader considerations

In 2024, the Brennan Center for Justice, a public policy institute, analyzed data from 33 cities, comparing 22 that had enacted bail reforms with 11 that had not. The researchers found that there was no relationship between bail reform and crime rates. When combined with the data from Washington, New Jersey and Illinois, it seems clear that jurisdictions can protect public safety while also reducing unnecessary and harmful pretrial detention.

In New Jersey, for example, thousands of people – many from communities of color – were able to remain employed and housed while awaiting trial. Rather than destabilizing people’s lives by unnecessary incarceration, the state contributed to greater stability for them, their families and communities.

The question moving forward is how to build on these successes.

As policymakers consider next steps, these empirically supported results can provide guidance. They provide evidence that cashless bail is not a threat but an opportunity for fairer, smarter justice.

The Conversation

The authors do not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and have disclosed no relevant affiliations beyond their academic appointment.

ref. Bail reforms across the US have shown that releasing people pretrial doesn’t harm public safety – https://theconversation.com/bail-reforms-across-the-us-have-shown-that-releasing-people-pretrial-doesnt-harm-public-safety-264448

Social media is teaching children how to use AI. How can teachers keep up?

Source: The Conversation – Canada – By Johanathan Woodworth, Assistant Professor, Education, Mount Saint Vincent University

Artificial intelligence (AI) is reshaping how students write essays, practise languages and complete assignments. Teachers are also experimenting with AI for lesson planning, grading and feedback. The pace is so fast that schools, universities and policymakers are struggling to keep up.

What often gets overlooked in this rush is a basic question: how are students and teachers actually learning to use AI?




Read more:
AI in schools — here’s what we need to consider


Right now, most of this learning happens informally. Students trade advice on TikTok or Discord, or even ask ChatGPT for instructions. Teachers swap tips in staff rooms or glean information from LinkedIn discussions.

These networks spread knowledge quickly but unevenly, and they rarely encourage reflection on deeper issues such as bias, surveillance or equity. That is where formal teacher education could make a difference.

Vox looks at how AI is impacting education.

Beyond curiosity

Research shows that educators are under-prepared for AI. A recent study found that many lack the skills to assess the reliability and ethics of AI tools. Professional development often stops at technical training and neglects wider implications. Meanwhile, uncritical use of AI risks amplifying bias and inequity.

In response, I designed a professional development module within a graduate-level course at Mount Saint Vincent University. Teacher candidates engaged in:

  • Hands-on exploration of AI for feedback and plagiarism detection;
  • Collaborative design of assessments that integrated AI tools;
  • Case analysis of ethical dilemmas in multilingual classrooms.

The goal was not simply to learn how to use AI, but to move from casual experimentation to critical engagement.

Critical thinking for future teachers

During the sessions, patterns quickly emerged. Teacher candidates were enthusiastic about AI to begin with, and remained so. Participants reported a stronger ability to evaluate tools, recognize bias and apply AI thoughtfully.

I also noticed that the language around AI shifted. Initially, teacher candidates were unsure about where to start, but by the end of the sessions they were using terms like “algorithmic bias” and “informed consent” with confidence.

Teacher candidates increasingly framed AI literacy as professional judgment, connected to pedagogy, cultural responsiveness and their own teacher identity. They saw literacy not only as understanding algorithms but also as making ethical classroom decisions.

The pilot suggests enthusiasm is not the missing ingredient. Structured education gave teacher candidates the tools and vocabulary to think critically about AI.

Inconsistent approaches

These classroom findings mirror broader institutional challenges. Universities worldwide have adopted fragmented policies: some ban AI, others cautiously endorse it and many remain vague. This inconsistency leads to confusion and mistrust.

My colleague Emily Ballantyne and I examined how AI policy frameworks can be adapted for Canadian higher education. Faculty recognized AI’s potential but voiced concerns about equity, academic integrity and workload.

We proposed a model that introduced a “relational and affective” dimension, emphasizing that AI affects trust and the dynamics of teaching relationships, not only efficiency. In practice, this means that AI not only changes how assignments are completed, but also reshapes the ways students and instructors relate to one another in class and beyond.

Put differently, integrating AI in classrooms reshapes how students and teachers relate, and how educators perceive their own professional roles.

When institutions avoid setting clear policies, individual instructors are left to act as ad hoc ethicists without institutional backing.

Embedding AI literacy

Clear policies alone are not enough. For AI to genuinely support teaching and learning, institutions must also invest in building the knowledge and habits that sustain critical use. Policy frameworks provide direction, but their value depends on how they shape daily practice in classrooms.

  1. Teacher education must lead on AI literacy. If AI reshapes reading, writing and assessment, it cannot remain an optional workshop. Programs must integrate AI literacy into curricula and outcomes.

  2. Policies must be clear and practical. Teacher candidates repeatedly asked: “What does the university expect?” Institutions should distinguish between misuse (ghostwriting) and valid uses (feedback support), as recent research recommends.

  3. Learning communities matter. AI knowledge is not mastered once and forgotten; it evolves as tools and norms change. Faculty circles, curated repositories and interdisciplinary hubs can help teachers share strategies and debate ethical dilemmas.

  4. Equity must be central. AI tools embed biases from their training data and often disadvantage multilingual learners. Institutions should conduct equity audits and align adoption with accessibility standards.

Supporting students and teachers

Public debates about AI in classrooms often swing between two extremes: excitement about innovation or fear of cheating. Neither captures the complexity of how students and teachers are actually learning AI.

Informal learning networks are powerful but incomplete. They spread quick tips, but rarely cultivate ethical reasoning. Formal teacher education can step in to guide, deepen and equalize these skills.

When teachers gain structured opportunities to explore AI, they shift from passive adopters to active shapers of technology. This shift matters because it ensures educators are not merely responding to technological change, but actively directing how AI is used to support equity, pedagogy and student learning.

That is the kind of agency education systems must nurture if AI is to serve, rather than undermine, learning.

The Conversation

Johanathan Woodworth does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

ref. Social media is teaching children how to use AI. How can teachers keep up? – https://theconversation.com/social-media-is-teaching-children-how-to-use-ai-how-can-teachers-keep-up-264727

Where does your glass come from?

Source: The Conversation – USA (2) – By Aki Ishida, Professor and Director, College of Architecture and Graduate School of Architecture and Urban Design, Washington University in St. Louis

Visitors get the sensation of floating above Manhattan at the Summit at One Vanderbilt. These rooms are built with low-iron glass, made with ultrapure silica sand. Benno Schwinghammer/picture alliance via Getty Images

The word “local” has become synonymous with sustainability, whether it’s food, clothes or the materials used to construct buildings. But while consumers can probably go to a local lumberyard to buy lumber from sustainably grown trees cut at nearby sawmills, no one asks for local glass.

If they did, it would be hard to give an answer.

The raw materials that go into glass – silica sand, soda ash and limestone – are natural, but the sources of those materials are rarely known to the buyer.

The process by which sand becomes sheets of glass is often far from transparent. The sand, which makes up over 70% of glass, could come from a faraway riverbed, lakeshore or inland limestone outcrop. Sand with at least 95% silica content is called silica sand, and only the purest is suitable for architectural glass production. Such sand is found in limited areas.

Rock formations stick up from sandy ground next to a lake
Klondike Park, outside St. Louis, was once a mine for St. Peter sandstone, used in glass production. This is one of the few U.S. locations with 99% pure silica.
Aki Ishida

If the glass is colorless, its potential sources are even more limited, because colorless low-iron glass – popularized by Apple’s flagship stores and luxury towers around the world – requires 99% pure silica sand.

Glass production in Venice

The mysteries of glass production have historic precedent that can be traced back to trade secrets of the Venetian Empire.

Venice, particularly the island of Murano, became the center for glass production largely due to its strategic location for importing raw materials and production know-how and exporting coveted glass objects.

From the 11th to the 16th centuries, the secrets of glassmaking were protected by the Venetians until three glassmakers were smuggled out by King Louis XIV of France, who applied the technology to create the Palace of Versailles’ Hall of Mirrors.

A large hall lined with mirrors, with a painted ceiling, statutes and large chandeliers.
The Palace of Versailles’ famed Hall of Mirrors was made by glass artisans trained by the Venetians.
Myrabella/Wikimedia Commons, CC BY-SA

Venice was an otherwise unlikely location for glassmaking.

Neither the primary materials of sand and soda ash (sodium carbonate) nor the firewood for the medieval Venetian glassmakers were found in the city’s immediate vicinity. They were transported from the riverbeds of the Ticino River in Switzerland and the Adige River, which flows from near the Austria-Switzerland border to the Adriatic Sea south of Venice. Soda ash, which is needed to lower the melting point of silica sand, was brought from Syria and Egypt.

So Venetian glass production was not local; it was dependent on precious resources imported from afar on ships.

An engraving of people working on glass factory, with a large furnace in the center
Glassmaking has been a labor- and fuel-intensive process. This engraving from 1877 shows the production of glass cylinders, which are cut and unrolled to make glass sheets.
L’Illustrazione Italiana, No 51/De Agostini via Getty Images

Rising demand for low-iron, seamless glass

In the past few decades, low-iron glass, known for its colorlessness, has become the contemporary symbol of high-end architecture. The glass appears to disappear.

Low-iron glass is made from ultrapure sand that is low in iron oxide. Iron causes the green tint seen in ordinary glass. In architecture, low-iron glass doesn’t affect the performance – only the appearance. But it is prized.

Two men wearing gloves roll large sheets of clear glass, taller than themselves, on a cart.
Most glass has a greenish tint, caused by iron oxide in the sand. Low-iron glass is more clear, but the ingredients come from exclusive sand mines, which can mean more transportation emissions, particularly for large panels produced in a limited number of factories.
Bluecinema/E+ via Getty Images

In the U.S., this type of sand is found in a few locations, primarily in Minnesota, Wisconsin, Illinois and Missouri, where sand as white and fine as sugar – thus called saccharoidal – is mined from St. Peter sandstone. Other locations where it can be found around the world include Queensland in Australia and parts of China. Less pure sand can be purified by methods such as acid washing or magnetic separation.

Perhaps no corporation has popularized low-iron and seamless glass in architecture more than the technology giant Apple.

Glass has become fundamentally linked with Apple’s products and architecture, including its flagship stores’ expensive and daring experiments in architectural uses of glass.

Apple’s first showroom, completed in Soho in New York in 2002, showcased all-glass stairs that were strengthened with hurricane- and bullet-resistant plastic interlayers sandwiched between five sheets of glass. The treads attach to the all-glass walls with hockey puck-size titanium hardware, making both the glass stairs and the shoppers appear to float.

A large glass cube lit up at night with glowing Apple logos on the sides and stairs leading down to the store below.
Apple’s New York flagship store, dubbed the Cube, was built in 2006 with 90 panels of low-iron glass, then rebuilt in 2011 with 15 panels.
Ben Hider/Getty Images

The company’s iconic flagship store near New York’s Central Park is an all-glass cube measuring 32½ feet (10 meters) on each side and serving as a vestibule to the store below. The first version was completed in 2006 using 90 panels, which was a technical feat. Then, in 2011, Apple reconstructed the cube in the same location, same size, but with only 15 panels, minimizing the number of seams and hardware while maximizing transparency.

Today, low-iron glass has become the standard for high-profile architecture and those who can afford it, including the “pencil towers” in Manhattan’s Billionaires’ Row.

A view of part of the NYC skyline across Central Park, with several skinny towers sticking up on their own.
New high-rises like the supertall towers in New York’s Billionaires’ Row are largely clad floor to ceiling in glass.
Aerial_Views/E+ via Getty Images

Glass’s climate impact

Glass walls common in high-rise buildings today have other drawbacks. They heat up interiors during increasingly hot summers and contribute to heat loss in winter, increasing dependence on artificial cooling and heating.

The glassmaking process is energy intensive and relies on nonrenewable resources.

To bring sand to its molten state, the furnace must be heated to over 2,700 degrees Fahrenheit (1,500 degrees Celsius) for as long as 50 hours, which requires burning fossil fuels such as natural gas, releasing greenhouse gases. Once heated to that temperature, the furnace runs 24/7 and is rarely shut down.

Glass manufacturer Pilkington shows how glass is made.

The soda ash and limestone also release carbon dioxide during melting. Moreover, glass production requires mining or producing nonrenewable natural resources such as sand, soda ash, lime and fuel. Transporting them further increases emissions.

Production and fabrication of extra-large glass panels rely on specialized equipment and occur only at a limited number of plants in the world, meaning transportation increases the carbon footprint.

Architectural glass is also difficult to recycle, largely due to the labor involved in separating glass from the building assembly.

Although glass is touted as infinitely recyclable, only 6% of architectural glass is downcycled into glass products that require less purity and precision, and almost none is recycled into architectural glass. The rest ends up in landfills.

The increasing demand for glass that is colorless, extra large and seamless contributes to glass’s sustainability problem.

This 99% pure silica, a sugarlike sand, comes from a St. Peter sandstone mine once used for glassmaking. It’s now Klondike Park in St. Charles County, Mo.
Aki Ishida

How can we make glass more sustainable?

There are ways to reduce glass’s environmental footprint.

Researchers and companies are working on new types of glass that could lower its climate impact, such as using ingredients that reduce the heat needed to melt the glass. Replacing natural gas, typically used in glassmaking, with less-polluting power sources can also reduce emissions.

Low-emissivity, or low-e, coatings – thin layers of silver applied to a glass surface – can help reduce the amount of heat that reaches a building’s interior by reflecting both visible light and heat, though they can’t fully eliminate solar heat gain.

People can also adjust their standards, accepting panels that are smaller and less than ultraclear – and thinking of the green tint not as an impurity but as natural.

The Conversation

Aki Ishida does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

ref. Where does your glass come from? – https://theconversation.com/where-does-your-glass-come-from-263421

How does AI affect how we learn? A cognitive psychologist explains why you learn when the work is hard

Source: The Conversation – USA (2) – By Brian W. Stone, Associate Professor of Cognitive Psychology, Boise State University

When OpenAI released “study mode” in July 2025, the company touted ChatGPT’s educational benefits. “When ChatGPT is prompted to teach or tutor, it can significantly improve academic performance,” the company’s vice president of education told reporters at the product’s launch. But any dedicated teacher would be right to wonder: Is this just marketing, or does scholarly research really support such claims?

While generative AI tools are moving into classrooms at lightning speed, robust research on the question at hand hasn’t moved nearly as fast. Some early studies have shown benefits for certain groups such as computer programming students and English language learners. And there have been a number of other optimistic studies on AI in education, such as one published in the journal Nature in May 2025 suggesting that chatbots may aid learning and higher-order thinking. But scholars in the field have pointed to significant methodological weaknesses in many of these research papers.

Other studies have painted a grimmer picture, suggesting that AI may impair performance or cognitive abilities such as critical thinking skills. One paper showed that the more a student used ChatGPT while learning, the worse they did later on similar tasks when ChatGPT wasn’t available.

In other words, early research is only beginning to scratch the surface of how this technology will truly affect learning and cognition in the long run. Where else can we look for clues? As a cognitive psychologist who has studied how college students are using AI, I have found that my field offers valuable guidance for identifying when AI can be a brain booster and when it risks becoming a brain drain.

Skill comes from effort

Cognitive psychologists have argued that our thoughts and decisions are the result of two processing modes, commonly denoted as System 1 and System 2.

The former is a system of pattern matching, intuition and habit. It is fast and automatic, requiring little conscious attention or cognitive effort. Many of our routine daily activities – getting dressed, making coffee and riding a bike to work or school – fall into this category. System 2, on the other hand, is generally slow and deliberate, requiring more conscious attention and sometimes painful cognitive effort, but often yields more robust outputs.

We need both of these systems, but gaining knowledge and mastering new skills depend heavily on System 2. Struggle, friction and mental effort are crucial to the cognitive work of learning, remembering and strengthening connections in the brain. Every time a confident cyclist gets on a bike, they rely on the hard-won pattern recognition in their System 1 that they previously built up through many hours of effortful System 2 work spent learning to ride. You don’t get mastery and you can’t chunk information efficiently for higher-level processing without first putting in the cognitive effort and strain.

I tell my students the brain is a lot like a muscle: It takes genuine hard work to see gains. A muscle that is never challenged won’t grow.

What if a machine does the work for you?

Now imagine a robot that accompanies you to the gym and lifts the weights for you, no strain needed on your part. Before long, your own muscles will have atrophied and you’ll become reliant on the robot at home even for simple tasks like moving a heavy box.

AI, used poorly – to complete a quiz or write an essay, say – lets students bypass the very thing they need to develop knowledge and skills. It takes away the mental workout.

Using technology to effectively offload cognitive workouts can have a detrimental effect on learning and memory and can cause people to misread their own understanding or abilities, leading to what psychologists call metacognitive errors. Research has shown that habitually offloading car navigation to GPS may impair spatial memory and that using an external source like Google to answer questions makes people overconfident in their own personal knowledge and memory.

Learning and mastery come from effort, with or without a powerful chatbot or AI tutor – but educators and students need to resist outsourcing that work.
Francesco Carta fotografo via Getty Images

Are there similar risks when students hand off cognitive tasks to AI? One study found that students researching a topic using ChatGPT instead of a traditional web search had lower cognitive load during the task – they didn’t have to think as hard – and produced worse reasoning about the topic they had researched. Surface-level use of AI may mean less cognitive burden in the moment, but this is akin to letting a robot do your gym workout for you. It ultimately leads to poorer thinking skills.

In another study, students using AI to revise their essays scored higher than those revising without AI, often by simply copying and pasting sentences from ChatGPT. But these students showed no more actual knowledge gain or knowledge transfer than their peers who worked without it. The AI group also engaged in fewer rigorous System 2 thinking processes. The authors warn that such “metacognitive laziness” may prompt short-term performance improvements but also lead to the stagnation of long-term skills.

Offloading can be useful once foundations are in place. But those foundations can’t be formed unless your brain does the initial work necessary to encode, connect and understand the issues you’re trying to master.

Using AI to support learning

Returning to the gym metaphor, it may be useful for students to think of AI as a personal trainer who can keep them on task by tracking and scaffolding learning and pushing them to work harder. AI has great potential as a scalable learning tool, an individualized tutor with a vast knowledge base that never sleeps.

AI technology companies are seeking to design just that: the ultimate tutor. In addition to OpenAI’s entry into education, in April 2025 Anthropic released its learning mode for Claude. These models are supposed to engage in Socratic dialogue, to pose questions and provide hints, rather than just giving the answers.

Early research indicates AI tutors can be beneficial but introduce problems as well. For example, one study found high school students reviewing math with ChatGPT performed worse than students who didn’t use AI. Some students used the base version and others a customized tutor version that gave hints without revealing answers. When students took an exam later without AI access, those who’d used base ChatGPT did much worse than a group who’d studied without AI, yet they didn’t realize their performance was worse. Those who’d studied with the tutor bot did no better than students who’d reviewed without AI, but they mistakenly thought they had done better. So AI didn’t help, and it introduced metacognitive errors.

Even as tutor modes are refined and improved, students have to actively select that mode and, for now, also have to play along, deftly providing context and guiding the chatbot away from worthless, low-level questions or sycophancy.

The latter issues may be fixed with better design, system prompts and custom interfaces. But the temptation of using default-mode AI to avoid hard work will continue to be a more fundamental and classic problem of teaching, course design and motivating students to avoid shortcuts that undermine their cognitive workout.

As with other complex technologies such as smartphones, the internet or even writing itself, it will take more time for researchers to fully understand the true range of AI’s effects on cognition and learning. In the end, the picture will likely be a nuanced one that depends heavily on context and use case.

But what we know about learning tells us that deep knowledge and mastery of a skill will always require a genuine cognitive workout – with or without AI.

The Conversation

Brian W. Stone does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

ref. How does AI affect how we learn? A cognitive psychologist explains why you learn when the work is hard – https://theconversation.com/how-does-ai-affect-how-we-learn-a-cognitive-psychologist-explains-why-you-learn-when-the-work-is-hard-262863

40 years ago, the first AIDS movies forced Americans to confront a disease they didn’t want to see

Source: The Conversation – USA (2) – By Scott Malia, Associate Professor of Theatre, College of the Holy Cross

‘Buddies,’ which premiered on Sept. 12, 1985, cost just $27,000 to make.
Vinegar Syndrome/Roe Bressan/Frameline Distribution

First it was referred to as a “mysterious illness.” Later it was called “gay cancer,” “gay plague” and “GRID,” an acronym for gay-related immune deficiency. Most egregiously, some called it “4H disease” – shorthand for “homosexuals, heroin addicts, hemophiliacs and Haitians,” the populations most afflicted in the early days.

While these names were ultimately replaced by AIDS – and later, after the virus was identified, by HIV – they reflected two key realities about AIDS at the time: a lack of understanding about the disease and its strong association with gay men.

Although the first report in the mainstream press about AIDS appeared in 1981, the first movies to explore the disease wouldn’t come for four more years.

When the feature film “Buddies” and the television film “An Early Frost” premiered 40 years ago, in the fall of 1985, AIDS had belatedly begun breaking into the public consciousness.

Earlier that year, the first off-Broadway plays about AIDS opened: “As Is” by William Hoffman and “The Normal Heart” by writer and activist Larry Kramer. That summer, actor Rock Hudson disclosed that he had AIDS, becoming the first major celebrity to do so. Hudson, who died in October 1985, was a friend of President Ronald Reagan and Nancy Reagan. Reagan, who had been noticeably silent on the subject of the disease, would go on to make his first – albeit brief – public remarks about AIDS in September 1985.

Five days before Reagan’s speech, “Buddies,” an independent film made for US$27,000 and shot in nine days, premiered at the Castro Theatre in San Francisco on Sept. 12, 1985.

A film on the front lines

If you haven’t heard of “Buddies,” that’s not surprising; the film mostly played art houses and festivals before disappearing.

Its filmmaker, Arthur J. Bressan Jr., was best known for his gay pornographic films, although he’d also made documentaries such as “Gay USA.” “Buddies” would go on to reach a wider audience thanks to a 2018 video release by Vinegar Syndrome, a distribution company that focuses on restoring cult cinema, exploitation films and other obscure titles.

The film was inspired by the real-life buddies program at the Gay Men’s Health Crisis, an organization Kramer co-founded. At the time, many people dying of the disease had been rejected by family and friends, so a buddy might be the only person who visited a terminal AIDS patient.

The film feels like a play, in that most of the movie takes place in a single room and features just two characters: a naive young gay man named David and a young AIDS patient named Robert. Over the course of the film, the characters open up about their lives and their fears about the growing epidemic. It also includes a sex scene – something other early AIDS films completely avoided – in which David and Robert engage in safer sex.

AIDS packaged for the masses

The remarkably frank and intimate approach to the epidemic in “Buddies” contrasts sharply with the television film “An Early Frost,” which premiered on NBC on Nov. 11, 1985.

The film’s protagonist is a successful Chicago lawyer named Michael who hasn’t come out to his family, much to the distress of his long-term partner, Peter. When Michael finds out he has AIDS, he’s forced to come out to his parents, both as gay and as having AIDS.

Much of the film deals with Michael’s self-acceptance and his attempts to mend his relationships. Yet the production of “An Early Frost” was fraught with concerns about depicting both homosexuality and AIDS. Unlike David and Robert, Michael and Peter show no physical affection – they barely touch each other.

A promotional clip for ‘An Early Frost,’ which drew 34 million viewers when it premiered on NBC.

Knowledge of AIDS was still evolving – a test for HIV was approved in March 1985 – so screenwriters and life partners Daniel Lipman and Ron Cowen went through 13 revisions of the script. The real-life fears and misconceptions about how AIDS could and could not be transmitted were central to the storyline, adding extra pressure to be accurate in the face of evolving understanding of the virus.

Despite costing NBC $500,000 in lost advertising, “An Early Frost” drew 34 million viewers and was showered with Emmy nominations the following year.

A quilt of stories emerges

“Buddies” and “An Early Frost” opened up AIDS and HIV as subject matter for film and television.

They begat two lanes of HIV storytelling that continue to this day.

The first is an approach geared to mainstream audiences that tends to avoid controversial issues such as sex or religion and instead focuses on characters who grapple with both the illness and the stigma of the virus.

The second is an indie approach that’s often more confrontational, irreverent and angry at the injustice and indifference AIDS patients faced.

The former approach is seen in 1993’s “Philadelphia,” which earned Tom Hanks his first Oscar. The critically and commercially successful film shares a number of story points with “An Early Frost”: Hanks’ character, a big-city lawyer, finds out he is HIV positive and must confront bias head-on. HIV also features prominently in later films such as “Precious” (2009) and “Dallas Buyers Club” (2013), both of which, like “Philadelphia,” became awards darlings.

The edgier, more critical approach can be seen in the New Queer Cinema movement of the 1990s, which developed as a response to the epidemic. Gregg Araki’s “The Living End” (1992) is a key film in the movement: It tells the story of two HIV-positive men who become pseudo-vigilantes in the wake of their diagnoses.

In ‘The Living End,’ the HIV-positive protagonists go on a hedonistic rampage to take out their anger at the world.

Somewhere in between is “Longtime Companion” (1990), which was the first film about AIDS to receive a wide release and tracks the impact of the epidemic on a fictional group of gay men throughout the 1980s. The film was written by gay playwright and screenwriter Craig Lucas and directed by Norman Rene, who died of AIDS six years after the film’s release.

Studios still leery

In many ways, television is where the real breakthroughs have happened and continue to happen.

The first television episode to deal with AIDS appeared on the medical drama “St. Elsewhere” in 1983; AIDS was also the subject of episodes in the sitcoms “Mr. Belvedere,” “The Golden Girls” and “Designing Women.” The “Designing Women” episode was titled “Killing All the Right People” – a phrase the show’s writer and co-creator Linda Bloodworth-Thomason heard while her mother was being treated for AIDS.

More recently, producer Ryan Murphy has made a cottage industry of representations of queer people, particularly those with HIV. His stage revivals of “The Normal Heart” and Mart Crowley’s 1968 play “The Boys in the Band” were later adapted into films for television and streaming. He also produced “Pose,” a three-season series about drag ball culture in the 1980s that stars queer characters of color, several of whom are HIV positive.

Yet for all of these strides, representations of HIV in film are still hard to come by. In fact, out of the 256 films released by major distributors in 2024, the number of HIV-positive characters amounted to … zero.

Perhaps movie studios are less willing to risk even a character with HIV given the drop in movie theater attendance in the age of streaming.

If you think it’s an exaggeration to suggest that people might not want to be seen going to the theater to watch a film about characters with HIV, the results of a 2021 GLAAD survey may surprise you.

It found that the stigma around HIV is still very high, particularly for HIV-positive people working in schools and hospitals. One-third of respondents were unaware that medication is available to prevent the transmission of HIV. More than half didn’t know that HIV-positive people can reach undetectable status and not transmit the virus to others.

Another important finding from the survey: Only about half of the nonqueer respondents had seen a TV show or film about someone with HIV.

This reflects both the progress made since “Buddies” and “An Early Frost” and why these films still matter today. They were released at a time when there was almost no cultural representation of HIV, and misinformation and disinformation were rampant. There have been many advances since then, in both the treatment of HIV and its visibility in popular culture. That visibility still matters, because there’s still much more that can be done to end the stigma.

The Conversation

Scott Malia does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

ref. 40 years ago, the first AIDS movies forced Americans to confront a disease they didn’t want to see – https://theconversation.com/40-years-ago-the-first-aids-movies-forced-americans-to-confront-a-disease-they-didnt-want-to-see-262421

Doctors are joining unions in a bid to improve working conditions and raise wages in a stressful health care system

Source: The Conversation – USA (3) – By Patrick Aguilar, Managing Director of Health, Washington University in St. Louis

Dr. Maryssa Miller speaks to fellow union members outside George Washington University Hospital in Washington, D.C., in 2024.
Maansi Srivastava/The Washington Post via Getty Images

The share of doctors who belong to unions is rising quickly at a time when organized labor is losing ground with other professions. The Conversation U.S. asked Patrick Aguilar, a Washington University in St. Louis pulmonologist and management professor, to explain why the number of physicians joining unions is growing – a trend that appears likely to continue.

How long have there been health care unions?

U.S. nurses first joined labor unions in 1896. Today, about 1 in 5 registered nurses are union members – twice the unionization rate across all professions.

The first physicians’ union formed in 1934, when hospital residents – doctors in training who tended then, as now, to be paid relatively little and made to work long hours – organized to demand higher pay and shorter shifts. For the next eight decades, those unions grew slowly.

But the pace has picked up. The share of doctors who belong to unions rose from 5.7% in 2014 to 7.2% in 2019. By 2024, an estimated 8% of physicians were union members.

This swift growth contrasts with declining union membership overall. The share of American workers in unions fell by more than half, from 20.1% to 9.9%, between 1983 and 2024.

Residents and interns are particularly interested in joining unions. Nearly 2 in 3 have said they might want to join one. Membership in the Committee of Interns and Residents, a chapter of Service Employees International Union, rose by nearly 14% to 37,000 between late 2024 and early 2025. By September 2025, the union was saying that its ranks had grown to more than 40,000.

Several other U.S. unions also represent physicians. Doctors Council, which is also affiliated with Service Employees International Union, represents physicians, dentists, optometrists, podiatrists and veterinarians. The Union of American Physicians and Dentists, part of the American Federation of State, County and Municipal Employees, says it has at least 7,000 members.

Aren’t doctors too rich for labor organizing?

Just like labor unions that represent electricians or teachers, unions that represent doctors seek better working conditions, higher pay and better benefits for their members. While the typical U.S. doctor earns nearly US$240,000 a year, about four times what the typical American worker makes, their compensation varies widely depending on their medical specialty. A pediatric surgeon, for example, can earn twice as much as a pediatrician.

Despite their high wages, 15% of physicians in a poll of more than 1,000 said they had cut back on their personal expenses, and 40% expected to delay retirement for financial reasons. The education and training required to become a doctor are lengthy and expensive, often leaving new doctors with large amounts of student debt.

Additionally, many physicians are compensated for patient visits and not for work done outside of the exam room. The extra hours needed to document work, address patient concerns and maintain continuing education are often uncompensated, significantly reducing physicians’ effective hourly earnings.

Other unions advocate for higher wages and better conditions in well-compensated professions.

The National Football League Players Association is an example of a union with highly paid members that still advocates for their increased compensation. NFL players now earn a median salary of $860,000.

Baseball players earn even more. They have a median salary of $1.35 million, and all of the players are represented by the Major League Baseball Players Association, a union.

Many doctors are experiencing more stress due to relatively recent workplace changes.
Juanmoni/E+ via Getty Images

Why would doctors join unions?

An American Medical Association survey conducted in 1983 found that 75.8% of physicians were owners of their primary clinical practice. Four decades later, nearly 80% of physicians are employed by health care systems or other corporations.

As employees, physicians are now eligible to unionize and may have an interest in doing so to bargain with employers who set working conditions and compensation.

Residents and fellows, on the other hand, have been employees for much longer because of the structure of their training programs. Residents work longer hours, are paid significantly less and are obligated to complete their training programs in order to attain specialty certification.

These differences help explain the longer history of labor organizing for physician trainees.

Surveys point to several other possible causes besides concerns about employers.

An American Medical Association survey of 13,000 physicians, nurse practitioners and physician assistants in 2022 reflected rates of burnout exceeding 50% in several key specialties. More than half of those responding said they felt undervalued by their employer.

In 2023, the University of Michigan’s Center for Health and Research Transformation surveyed over 29,000 Michigan physicians. About 85% of them said administrative and regulatory requirements were a significant source of workplace stress.

The widespread adoption of electronic health records over the past 25 years, which has improved some aspects of medical diagnosis and treatment, has also given doctors more administrative responsibilities. Doctors spend nearly two additional hours updating electronic health records or doing related administrative tasks for every hour they spend with patients, according to one estimate.

Keeping the records up to date can contribute to burnout.

Many doctors say that they spend twice as much time dealing with electronic health records as they do with their own patients.
Ariel Skelley/DigitalVision via Getty Images

Are doctors worried about job security?

In recent years, nurse practitioners and physician assistants have taken on responsibilities previously reserved for doctors. Nurse practitioners or physician assistants saw patients for about 1 in 4 medical appointments, according to a 2023 study, up from around 1 in 5 a decade earlier.

Given significant differences in compensation between physicians and other kinds of health care providers, this trend raises concerns about the potential for health care employers to employ fewer doctors to save money on staff salaries.

Separately, there are growing concerns about the potential for the use of artificial intelligence and automation to replace some of the tasks that doctors do today.

Can labor organizing harm patients?

In April 2025, the American College of Physicians, which has 160,000 members, released a position paper with recommendations for responsible collective bargaining for doctors.

The group felt compelled to encourage its members’ ethical engagement in labor organizing because physicians’ work is often lifesaving, and disrupting it through strikes or other labor actions can be dangerous.

No study has empirically evaluated whether a doctor’s union membership affects their patients’ health. However, a 2022 meta-analysis of 17 studies found no significant impact on death rates when health care workers go on strike.

Despite the potential benefits, some doctors remain concerned that unionization may create divides among physicians, interfere with their ability in some cases to negotiate directly with their employers, and add layers of bureaucracy that don’t do patients or medical professionals any good.

Do doctors ever go on strike?

It’s historically been rare in the U.S., but that could be changing.

In January 2025, 70 doctors who belong to the Pacific Northwest Hospital Medicine Association joined thousands of nurses in a strike against Portland, Oregon-based Providence Health after more than a year of failed contract negotiations.

The strike lasted 27 days, delaying some elective procedures and making some emergency room wait times longer. Some patients had to go to other hospitals. The agreement the hospital ultimately reached with physicians boosted pay, expanded sick leave and included a commitment to change staffing models.

In June 2025, picket lines formed outside of four Minnesota health clinics for the first time in the state’s history. Members of the Doctors Council SEIU union were protesting after more than 18 months of failed negotiations for a new contract. The doctors, who all work for the Allina Health chain of hospitals, health clinics and urgent care sites, are seeking higher compensation, smaller workloads and more support staff.

Although no timeline has been announced, union members have authorized a strike if negotiations continue to fail. As of early September 2025, those negotiations were ongoing.

The Conversation

Patrick Aguilar does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

ref. Doctors are joining unions in a bid to improve working conditions and raise wages in a stressful health care system – https://theconversation.com/doctors-are-joining-unions-in-a-bid-to-improve-working-conditions-and-raise-wages-in-a-stressful-health-care-system-259232