How AI resurrects racist stereotypes and disinformation — and why fact-checking isn’t enough

Source: The Conversation – Canada – By Nadiya N. Ali, Assistant Professor, Sociology, Trent University

By any measure, 2025 is the year artificial intelligence (AI) rapidly shifted the way we work, interact with each other and engage with the world at large. It has also made undeniable the enduring reality of racism and the limits of fact-checking in an age of disinformation.

Thanks to algorithmic systems, narratives that tap into deep-seated fears and anxieties travel farther and faster than ever before. They circle the globe before fact-checkers can even flag a problematic post.

In the second half of the year, another technological disruption emerged with OpenAI’s Sora, a lifelike video-generation software. Nothing, seemingly, was immune, including politics.

Sora hit the political landscape with particular vigour during the longest federal government shutdown in United States history. The 43-day impasse generated significant pressure and public controversy, particularly around uncertainty and delays that could affect the Supplemental Nutrition Assistance Program (SNAP).

Digital blackface and the policing of Black poverty

At the height of the anxiety over the effects of the shutdown on SNAP benefits, a program that serves roughly 42 million Americans, a slew of short videos of Black women accosting social service employees or unleashing their frustration on livestream audiences caught the attention of the online sphere.

The SNAP suspension was ultimately blocked by the courts. It was also quickly revealed that the circulating clips were AI-generated.

What is most striking about these videos is how deliberately the caricature of the “Black welfare queen” was staged. In one video, the speaker declares, “I need SNAP to buy an iPhone.” In another, “I only eat steak, I need my funds.” And in a clip with children in the background, the woman insists, “I need to do my nails.”

Each expression of illicit use of funds is a shorthand for the alleged irresponsibility and moral failing that has long been intertwined with the racist trope of the “Black welfare queen.” One X user aptly dubbed these videos nothing short of “digital blackface.”

In the words of Black feminist writers Moya Bailey and Trudy, these videos traffic in “misogynoir” — a term developed to capture the “ways anti-Blackness and misogyny combine to malign Black women.” Bailey and Trudy note that representations of Black women as undeserving, burdensome to the public purse and inherently fraudulent are entrenched rather than exceptional.

Even clips “clearly labeled with a Sora watermark nabbed nearly 500,000 views on TikTok alone,” journalist Joe Wilkins observed. Wilkins goes on to explain that even when viewers were told the clips were AI-generated, some insisted, “But that is what is happening.” Some argued that even if the videos were technically “fake,” they still “highlight genuine SNAP…issues.”

These comments expose the limits of fact-checking as an antidote to disinformation, especially when dealing with charged tropes. Once a harmful framing is revived and thrust into the collective ether, Ctrl+Alt+Delete becomes ineffective.

What requires attention, then, is not only how we grapple with the new terrain of AI-driven disinformation, but also why certain representations hold mass resonance.

Why do particular images and narratives travel so well?

From settled fraud case to viral spectacle

Another case of digital blackface that captured public attention centred on the Minnesota Somali “Black fraud alert” saga. While still rooted in the same anti-Blackness that animated the “Black welfare queen” caricatures, this incident also drew on Islamophobia and rising anti-immigrant sentiment.

The case traced back to a 2022 COVID-era fraud scheme that had already led to arrests and convictions. The scheme was led by Aimee Marie Bock, a white woman, and involved a network of Minnesotans, many of whom happened to be of Somali descent.

In December of 2025, U.S. President Donald Trump resurrected the settled case, weaponizing it and tethering it to his longstanding disdain for “third-world countries” and people from “shithole countries.” This rhetoric also folded into his hostility toward political opponents Minnesota Governor Tim Walz and Congresswoman Ilhan Omar.

What followed was not a serious discussion of fraud or of policy safeguards. Instead, the episode reinvigorated debates about white nationalism, racialized citizenship and racial eugenics.

Trump’s call to deport Somalis through ICE, declaring “I don’t want them in our country,” made this logic explicit. That most Minnesota Somalis hold U.S. citizenship (a citizenship rate of roughly 84 per cent) did little to disrupt the racist story being circulated.

Soon after the president’s comments, AI amplified the content. An AI-generated video circulated widely, animating the “Somali pirate” trope. It depicted Black men, presumed to be Somali, as migrants plotting to steal from taxpayers. In it we hear: “We don’t need to be pirates anymore. I found a better way. Government-funded daycare. We must go to Minnesota.”

This reference to child care echoed back to a viral video produced by a right-wing commentator claiming to expose another chapter in the “Somali fraud scandal,” this time targeting Somali-run child-care centres. The video prompted a statewide investigation, which ultimately found that all but one of the named centres were operating normally, with no clear evidence of fraud.

The “Black welfare queen” trope and the “Somali pirate” frame may seem to name different crises and different subjects, yet both draw from the same anti-Black racial grammar. In each case, Blackness is rendered fraudulent, criminal and morally deficient, cast as both a personal failing and national burden.

Why these ideas travel even when they’re false

These instances of digital blackface succeeded because misogynoir and anti-Blackness remain readily available discursive resources. AI merely accelerates their movement. The refusal of audiences to course-correct when fact-checked underscores how intuitive and pre-assembled racist and xenophobic scripts already are.

In both the SNAP-themed misogynoiric videos and the AI-generated “Somali pirate” content, nuance and factual accuracy were beside the point. What is at work instead is a broader political project tied to racial capitalism’s eugenicist logics.

As Black radical scholar Cedric Robinson argues, racism is not incidental to capitalism but foundational to the inequalities it requires. Poverty is misdirected as evidence of personal and community failings rather than the result of massive structural inequity. And when attached to the racialized poor, especially when Black, Muslim and immigrant, this logic crystallizes into “common sense.”

What is at stake with AI-enabled digital blackface is not only the amplification of racism, but the architecture of political life. In this climate, sober analysis and nuance recede, displaced by the numbing anxiety that structures contemporary public discourse.

The Conversation

Nadiya N. Ali has received funding from The Social Sciences and Humanities Research Council of Canada.

ref. How AI resurrects racist stereotypes and disinformation — and why fact-checking isn’t enough – https://theconversation.com/how-ai-resurrects-racist-stereotypes-and-disinformation-and-why-fact-checking-isnt-enough-270000

Canada’s ethnic and racial wage gap rivals its gender gap — but gets a fraction of the policy attention

Source: The Conversation – Canada – By Reza Hasmath, Professor in Political Science, University of Alberta

Canada has spent decades confronting the gender pay gap, enacting legislation and building public awareness around the fact that women earn about 84 cents for every dollar men make. That gap persists because of systemic barriers, and is wider for women who face multiple forms of discrimination.

Yet an equally significant wage penalty for ethnic and racial minorities rarely commands the same attention, and has not prompted a comparable policy response.

Racialized men earn just 78 cents for every dollar non-racialized men earn. Racialized women face a double penalty, earning only 59 cents. Post-COVID pandemic data shows this wage gap remains largely unchanged.

Both injustices are real and well-documented. So why has gender-based pay equity produced dedicated legislative tools, while ethnic and racial wage penalties continue to be addressed unevenly?

As an expert in public policy and ethnic studies, I see the answer lying not in the severity of the problem, but in the mechanisms that bring gender and ethno-racial wage discrepancies to light.

A century of feminist momentum

Progress on gender pay equity has been largely driven by sustained, organized activism. By the time wage discrimination entered mainstream political debate in the 1960s and 1970s, women’s groups had built national coalitions, testified before commissions and established gender inequality as an object of state intervention.

This momentum translated into policy. The 1977 Canadian Human Rights Act defined wage discrimination solely through a gender lens, making it discriminatory “for an employer to establish or maintain differences in wages between male and female employees.”

Ethnicity and race were absent from this definition — a gap that labour organizations and anti-racism advocates have long pushed to change.

The 1995 Employment Equity Act requires federally regulated employers to track representation and remove barriers for four designated groups: women, Indigenous Peoples, persons with disabilities and members of visible minorities. But it stopped short of requiring employers to correct wage disparities for ethno-racialized workers.

Gender pay equity later received its own legislative tool: the 2018 Pay Equity Act, which obliges federally regulated employers to proactively assess and remedy gender-based wage gaps for work of equal value.

While this framework has strengthened accountability, significant gaps remain, especially for women who experience intersecting forms of discrimination.

The legislative landscape is beginning to shift, but at a snail’s pace. The Employment Equity Act Review Task Force has recommended expanding designated groups to include Black workers and 2SLGBTQ+ workers. If implemented, these changes would mark the first major update to Canada’s equity regime in decades.

A delayed start for ethno-racial equity advocacy

While feminist organizations were building national advocacy networks in the mid-20th century, ethno-racialized communities faced a different political landscape.

Until the mid-1960s, Canada’s immigration system restricted non-European immigration, forcing many ethno-racialized communities to fight first for the right to be in Canada.

Because of these structural barriers, research on ethno-racial earnings disparities emerged far later. Economists began documenting the “colour of money” in Canadian labour markets only in the 1990s — decades after gender wage gaps had become a staple of academic research, public policy and media coverage.

Subsequent studies have shown persistent earnings penalties for ethno-racialized workers, with Black, West Asian and South Asian workers facing some of the steepest disadvantages.

In recent years, the federal government has introduced new institutional mechanisms, including Canada’s Anti‑Racism Strategy. Such initiatives have expanded data collection and supported community-based research, but they remain policy frameworks rather than enforceable tools because they lack binding obligations and compliance mechanisms.

The ‘visible minority’ problem

One of the challenges in achieving pay equity is the lack of categorical clarity in the term “visible minority,” a label frequently used by the Canadian government.

“Visible minority” functions as a bureaucratic catch-all. The last census recorded more than 450 distinct ethnic and cultural origins. Within this umbrella, labour market outcomes vary dramatically.

For example, university-educated Japanese Canadians often earn more than white Canadians, while those of Latin American ancestry earn 32 per cent less. Statistics Canada data shows that, even after controlling for education, Black male graduates earn 11 to 13 per cent less than non-racialized peers, while West Asian and Arab female graduates earn 15 to 16 per cent less.

Such variation makes collective advocacy more difficult. When some subgroups are advantaged, political attention can wane because the problem appears inconsistent.

Advocacy is most effective when it spotlights the worst-affected groups: Black Canadians, West Asian Canadians and Latin American Canadians. Organizations such as the Black Legal Action Centre and the Canadian Arab Institute demonstrate that targeted, community-specific advocacy is both possible and necessary.

Precarity as a silencer

The ethno-racial wage penalty is also muted by labour market precarity. Many ethno-racialized workers are overrepresented in temporary, low-wage or insecure forms of employment, including temporary foreign worker programs, non-unionized contract work and short-term service roles.

Research has repeatedly shown that newcomers and ethno-racialized workers face higher rates of job insecurity and lower access to employment protections.

For workers on conditional permits or pathways to permanent residency, speaking out about wage discrimination can risk contract termination or loss of status.

Under Canadian law, employers are required to measure ethno-racial representation but are not obligated to ensure ethno-racial pay equity. In effect, ethno-racialized workers are counted, but their wages remain unprotected.

Laws reflect which inequalities we care about

The ethnic and racial wage disparity in Canada is not inevitable; it is political. If sustained activism and legislation can tackle the gender pay gap from a policy perspective, the same tools can address ethno-racial wage penalties.

Community organizations have long pushed for this. Unions such as the Canadian Union of Public Employees explicitly frame discriminatory wage structures as a form of racism that must be confronted through collective bargaining and organizing.

The Canadian Labour Congress has called for stronger enforcement mechanisms, better data and explicit recognition of ethno-racial pay inequity in federal law.

These three shifts would make a meaningful difference:

  1. Move beyond “visible minority” categories and require wage reporting for specific groups most affected by disparities.

  2. Extend pay equity obligations to include ethnic and racial wage gaps, with the same proactive assessment and compliance mechanisms used for gender.

  3. Link wage equity to broader conversations about immigration, economic justice and Canada’s stated commitment to multiculturalism.

If fair and equitable pay is truly a Canadian value, attention to wage inequality cannot stop at gender. Both the gender gap and the ethnic and racial wage gap are products of systemic barriers.

Addressing these gaps requires extending equity measures to all sectors where ethnic or racial background continues to influence opportunity and compensation.

The Conversation

Reza Hasmath does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

ref. Canada’s ethnic and racial wage gap rivals its gender gap — but gets a fraction of the policy attention – https://theconversation.com/canadas-ethnic-and-racial-wage-gap-rivals-its-gender-gap-but-gets-a-fraction-of-the-policy-attention-275296

What can whale films tell us about Marineland’s threatened belugas and dolphins?

Source: The Conversation – Canada – By Matthew I. Thompson, Assistant Professor, Faculty of Media, Art, and Performance, University of Regina

The fate of 30 captive beluga whales and four dolphins hangs in the balance as Marineland in Niagara Falls awaits final approval for an export permit from the Canadian government. Marineland has threatened to euthanize the whales, saying it can no longer afford to feed and house them since shuttering the park.

Marineland closed to the public in 2024 after years of declining ticket sales. An initial attempt to sell the whales to an amusement park in China was blocked by Canada’s Fisheries Minister, Joanne Thompson, in order to protect the whales from performing in captivity.

A more humane solution for many is The Whale Sanctuary Project, a 100-acre enclosed parcel of coastal waters in Nova Scotia. The sanctuary is not yet complete, however, and Marineland is pressing the federal government to allow them to export their whales to amusement parks in the United States.




Read more:
Marineland’s decline raises questions about the future of zoo tourism


My research examines how environmental politics get transformed into Hollywood movies. Captive whales and dolphins inspired the Save the Whales movement of the 1970s and 80s, which found expression in films like The Day of the Dolphin and Orca. While these films were very sympathetic towards whales, their star cetaceans were captive orcas and dolphins.

The crisis at Marineland is emblematic of human-cetacean relations in the last hundred years. Whether capturing them on film, containing them in amusement parks or subjecting them to scientific experiments, our curiosity about whales and dolphins has compelled us to fetch them out of the ocean. The irony is that, once we have gotten a good look, we recognize their right to be free in an environment they are no longer equipped for.

Free Willy

The best example of this irony comes from the 1993 film Free Willy, in which a young boy befriends, and then leads to freedom, a captive orca named Willy. The film was a surprise hit at the box office, and many audience members wanted to know whether the whale who played Willy had also been set free.

Keiko, as that whale was known, was held in captivity in an under-resourced aquarium in Mexico City at the time. Like the belugas and dolphins at Marineland, Keiko was suffering some of the mental and physical afflictions associated with living in a poorly maintained tank. Since 2019, 19 belugas, one dolphin and one orca have died at Marineland.

Pressure from fans of the film led to the creation of the Free Willy-Keiko Foundation, and a plan to release Keiko back into the wild was developed.

Unfortunately for Keiko, and captive whales everywhere, once a cetacean has spent a significant amount of time in captivity, they are rarely able to survive reintroduction to the wild.

Millions of dollars were spent flying Keiko, first to Oregon, where he was taught to catch and eat live fish again, and then to Iceland where he was slowly introduced to a wild pod of orcas.

Keiko died of pneumonia in a Norwegian fjord only 18 months after his full release.

Keiko’s story highlights the problem faced by the belugas and dolphins at Marineland. Films and amusement parks expose millions of people to the intelligence, charisma and ineffability of cetaceans. This exposure transformed toothed whales in the popular imagination from “wolves of the sea” to a “mind in the waters.” What were once thought of as dangerous gluttons who decimated commercial fish stocks became intelligent and benevolent friends.

Once this transformation has taken place in the popular imagination, the captive whales that inspired it are no longer congruent with the dominant opinion that intelligent and social creatures should not be taken from their families and held in small tanks.

What do the whales want?

The belugas and dolphins at Marineland are, from one perspective, victims of a law designed to protect them. Bill S-203, nicknamed the “Free Willy bill,” banned keeping captive whales and dolphins in Canada after passing into law in 2019. The whales at Marineland were grandfathered in, but further breeding was prohibited.

The ban on breeding means Marineland has to keep the male and female belugas separate from each other. According to one former trainer at the park, once the males were secluded from their female companions, they began aggressively raking each other with their teeth, leaving scars visible on their skin.




Read more:
The fate of Marineland’s belugas exposes the ethical cracks in Canadian animal law


In 2021, Ontario’s Animal Welfare Service concluded an investigation into the park, declaring that all the marine mammals there were in distress due to poor water quality. Marineland has made efforts to improve the life-support systems since 2021, and the whale deaths at the park have not been linked to water quality. That being said, even when cetaceans are well cared for in captivity, they live shorter lives than their wild counterparts.

An ideal plan for the whales at Marineland would be made in consultation with them. Unfortunately, despite many imaginative attempts (some of which I detail in my forthcoming book), an interspecies communication breakthrough with cetaceans has yet to occur.

In the 1986 film Star Trek IV: The Voyage Home the crew of the Starship Enterprise is tasked with travelling back in time to collect a pair of captive humpback whales, as cetaceans are extinct in their present. Before beaming the animals up, however, Spock takes a swim with them to ask their permission. When Captain Kirk asks why he jumped into the whale tank, Spock replies:

“Admiral, If we were to assume that these whales are ours to do with as we please, we would be as guilty as those who caused their extinction.”

The Conversation

Matthew I. Thompson receives funding from the Social Sciences and Humanities Research Council of Canada.

ref. What can whale films tell us about Marineland’s threatened belugas and dolphins? – https://theconversation.com/what-can-whale-films-tell-us-about-marinelands-threatened-belugas-and-dolphins-274944

Three ways Canada can navigate an increasingly erratic and belligerent United States

Source: The Conversation – Canada – By Charles Conteh, Professor of Public Policy and Administration, Department of Political Science, Brock University

The United States Supreme Court recently struck down President Donald Trump’s sweeping global tariffs imposed under the country’s International Emergency Economic Powers Act. The court stated that the law, intended for national emergencies, does not grant the government the authority to impose tariffs.

In early 2025, Trump invoked the act to impose tariffs on Canada, along with Mexico and China, claiming the countries failed to stop illicit drug trafficking into the United States.

The ruling is the latest episode in a political dust-up between Canada and its neighbour to the south which recently involved the Gordie Howe International Bridge linking Ontario and Michigan.

More than steel or stone, the bridge is a symbol of a shared destiny that both respects and transcends differences. Despite their historical, institutional and political differences, Canada and the United States have bonded economically as neighbours, generating shared prosperity over the past two centuries.

In 2023, I wrote a book chapter titled “Canada and the United States: A Symbiotic Relationship or Complex Entanglement?” In that chapter, I posed a question: What if the United States becomes more aggressive and even less open to working co-operatively with Canada? To answer that question, Canada can draw lessons from its centuries-long coexistence with an often-erratic neighbour to successfully navigate the economic volatility of the present era.

While the recent Supreme Court ruling presents a setback for Trump, it is unlikely to stop him from using U.S. economic and military might as leverage against Canada and other countries. As Canada navigates this belligerent U.S. government, a lingering question is whether this history of interwoven reciprocity is deteriorating into a complex entanglement of vulnerability.

Two neighbours, different worlds

In the book chapter, I describe the Canada-U.S. relationship as a complex picture of deep interdependence, marked by significant power imbalances, and the creative ways Canada has learned to adapt and prosper.

The economic and political interests of the two countries have diverged and converged in undulating waves over the past 200 years. The two economies are inextricably intertwined across a range of sectors, from natural resources and agriculture to advanced manufacturing. Around 70 per cent of Canadian exports go to the U.S., and the share of Canada’s merchandise imports from south of the border was around 59 per cent in 2025.

But for Canada, the relationship is more than just economic interdependence. The U.S. has a population of about 342 million and a gross domestic product about 10 times larger than Canada’s. That sets the stage for an asymmetrical relationship whose threads are woven into the fabric of trade and geopolitics.

For Canada, this can sometimes feel like vulnerability. And that vulnerability is increasingly being exploited by the U.S., creating a general feeling of existential crisis and entrapment.

Nevertheless, Canada can draw from its centuries-long experience to navigate the current headwinds. While the smaller of the two neighbours, it is not entirely dependent on the U.S. for influencing global events or harnessing international opportunities.

Canada has been, and still is, an influential power on the international stage. As a G7 nation, Canada is one of the key pillars in the scaffolding of the global economy. This global standing and international influence give it some room to manoeuvre.

Navigating an existential crossroads

First, in the international arena, Canada must diversify economically and geopolitically to build strategic resilience. Prime Minister Mark Carney is already moving on this front by agreeing to ease mutual tariffs with China. With negotiations to renew the Canada-U.S.-Mexico Agreement (CUSMA) slated for this year, a diversified trading economy will give Canada much greater leverage to navigate the vulnerabilities of asymmetry.

Second, Canada should draw from its record of championing a rules-based order. In recent years, the country has had to skilfully navigate the crossroads of projecting and defending its global and liberal-democratic values during periods of U.S. flirtations with populism, isolationism and anti-international rhetoric. As a middle power, it derives its strength from the rule of law and by presenting a united front with like-minded nations. A wider set of partners means more buffers against trade policy whiplashes and geopolitical shocks from the U.S.

Third, domestically, loosening inter-provincial trade flows, updating anachronistic regulatory frameworks and pursuing digital data sovereignty strategies should be high priorities to fire the full engine of the economy.

Similarly, as I’ve previously argued, Canada should use its comparative advantages in natural resources to create a strong, well-connected critical minerals supply chain. This would give it significant strategic leverage in the global economy as the world shifts to electrification and renewable energy.

Over the past two centuries, Canada has mastered the complex dance of asymmetry. However, the current crisis takes on an existential proportion that will require new agility, courage and decisiveness. It is an inflection point that will mark a consequential shift for the next generation.

Canada’s nimbleness and agility in navigating this political moment could serve as a model for other small and middle powers that must manoeuvre a world where the old rules no longer apply and great powers are increasingly belligerent.

The Conversation

Charles Conteh receives funding from the Social Sciences and Humanities Research Council of Canada.

ref. Three ways Canada can navigate an increasingly erratic and belligerent United States – https://theconversation.com/three-ways-canada-can-navigate-an-increasingly-erratic-and-belligerent-united-states-276035

Why Stephen Colbert is right about the ‘equal time’ rule, despite warnings from the FCC

Source: The Conversation – USA – By Seth Ashley, Professor of Communication, Boise State University

CBS says it warned Stephen Colbert that an interview with a politician could trigger an FCC rule requiring broadcasters to give political candidates equal access to the airwaves. The Late Show With Stephen Colbert/YouTube

Talk show host Stephen Colbert made headlines on Feb. 17, 2026, when he wrapped a network statement in a dog-waste bag and tossed it in the trash.

He did it live, while on air.

The move came after CBS lawyers reportedly told him he could not broadcast a scheduled interview with Democratic Texas Senate candidate James Talarico on his show, The Late Show with Stephen Colbert. According to Colbert, the network warned him that broadcasting the interview could trigger the Federal Communications Commission’s equal time rule, which requires broadcasters to allow political candidates equal access to the nation’s airwaves.

CBS said it gave Colbert “legal guidance” that airing the segment could raise equal time concerns and suggested other options.

Colbert countered that in decades of late-night television, he could not find a single example of the rule being enforced against a talk show interview. He ultimately posted his Talarico interview on YouTube instead, where broadcasting rules don’t apply.

As a media scholar, I believe Colbert is right about the law. Congress has deliberately protected editorial discretion to prevent equal time rules from chilling political speech. And the FCC has extended this privilege to shows like his.

To understand why, you have to go back to 1959 and to a forgotten fight over the role of broadcasting in a democratic society.

Amending ‘equal time’

Because the airwaves have been viewed as a scarce public resource, radio and television broadcasting have been regulated to balance the First Amendment rights of the press with public interest obligations. That includes the need to provide reasonable access to the airwaves for candidates for office – so citizens can hear what they have to say, whether in the form of paid advertising or unpaid news coverage.

After first appearing in the Radio Act of 1927, the equal time provision was codified in Section 315 of the Communications Act of 1934.

That law created the FCC and still governs the use of the nation’s airwaves today. It requires broadcast licensees to provide “equal opportunities” to legally qualified candidates in a given election if they allow one candidate to “use” their facilities. The requirement was intended to prevent broadcasters from favoring one candidate over another and to foster robust political debate that would serve the public interest.

But the statute did not clearly define what counted as a “use.”

That ambiguity was a known issue, but it came to a head in 1959, when Lar Daly, a fringe Chicago mayoral candidate, filed a complaint with the FCC. He argued that if stations aired news clips of his opponents – including the incumbent mayor – as part of their routine coverage, he was entitled to equal time on air.

Sen. Charles Percy, R-Ill., left, talks with Lar Daly, who protests the lack of equal time on television.
AP Photo/Paul Cannon

The FCC agreed. And it created a ruling that meant even routine news coverage of a candidate could trigger equal time obligations.

Broadcasters immediately warned that the decision would make political journalism nearly impossible. If every news interview or campaign clip required providing comparable time to every rival – including minor or fringe candidates – stations would either have to book everyone or drastically scale back political coverage.

NBC president Robert Sarnoff issued a thinly veiled threat in a message that was not lost on politicians who would be affected by the change: “Unless the gag is lifted during the current session of the Congress, a major curtailment of television and radio political coverage in 1960 is inevitable.”

Later that year, Congress stepped in and amended Section 315 to create explicit exemptions for “bona fide” newscasts, news interviews, news documentaries and on-the-spot coverage of news events. As my colleague Tim P. Vos and I note in our research on the history of the amendment, Congress rejected calls to repeal equal time altogether.

Instead, lawmakers preserved the rule for candidate-sponsored advertising while shielding news programming. Persuaded by broadcasters, lawmakers determined that professional journalism, guided by norms of balance and fairness, would best serve democratic discourse.

In signing the 1959 legislation, President Dwight D. Eisenhower highlighted the “continuing obligation of broadcasters to operate in the public interest and to afford reasonable opportunity for the discussion of conflicting views on important public issues.”

Eisenhower concluded by appealing to the good intentions of the nation’s broadcasters: “There is no doubt in my mind that the American radio and television stations can be relied upon to carry out fairly and honestly the provisions of this Act without abuse or partiality to any individual, group, or party.”

The talk show exemption

Over the decades, the FCC has interpreted the 1959 exemptions broadly.

Programs ranging from Meet the Press to The Jerry Springer Show to The Tonight Show and other interview-based broadcasts have been treated as “bona fide news interviews,” even when hosted by comedians. That’s why Colbert’s claim that there is no enforcement history against late-night talk shows is accurate.

It’s important to remember that equal time still applies in other contexts. If a candidate purchases or receives airtime for an advertisement, opponents are entitled to comparable access.

Equal time also applies to non-exempt entertainment programming, such as Saturday Night Live. Donald Trump’s hosting gig on SNL in November 2015 triggered an equal time request from four opposing primary candidates. And NBC obliged by providing a comparable amount of airtime for their campaign messages.

Federal Communications Commission chairman Brendan Carr testifies before Congress in Washington on Jan. 14, 2026.
AP Photo/Jose Luis Magana

FCC Chairman Brendan Carr recently signaled he was considering eliminating the talk-show exemption, arguing that some programs are “motivated by partisan purposes.”

As of now, no legal change has occurred. And it seems to me that CBS has acted out of caution, responding to political and regulatory pressure rather than to an actual rule change. That makes this episode unusual: The equal time rule was perhaps applied indirectly, through corporate self-censorship, not through direct FCC enforcement.

Why this moment matters

Either way, the Colbert incident highlights the growing restrictions on editorial independence during the second Trump administration, whether imposed by government threat or by corporate fear.

Whether through direct regulatory intervention or indirect corporate influence, this incident and others like it show an increased willingness to interfere with the editorial independence of media producers.

The dispute is part of what some critics view as an ongoing effort by the Trump administration to silence criticism. Trump is no fan of Colbert and has targeted comedians before.

CBS announced in 2025 that Colbert’s show would be canceled in May 2026, leading many to suggest the network was trying to appease Trump and his FCC, particularly ahead of a then-pending merger that required FCC approval.

The 1959 amendment that created the equal time exemption aimed to preserve editorial independence and protect free expression by limiting equal time claims and ensuring vibrant political discourse. The decision reflected a judgment that professional editorial discretion, not mandatory equivalence, best served citizens.

If the FCC alters the exemption, it would represent a major shift in U.S. media policy and would almost certainly face legal challenges. The government has an important role to play in promoting free expression and protecting free speech, but this is a good time to be wary of efforts to alter regulations to control content.

The Conversation

Seth Ashley does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

ref. Why Stephen Colbert is right about the ‘equal time’ rule, despite warnings from the FCC – https://theconversation.com/why-stephen-colbert-is-right-about-the-equal-time-rule-despite-warnings-from-the-fcc-276559

The US National Security Strategy: 2022 vs 2025, continuities and ruptures

Source: The Conversation – France in French (3) – By Olivier Sueur, teaches global strategic competition and transatlantic issues, Sciences Po

In the United States, every president is required to publish a National Security Strategy (NSS). The one made public by the Trump administration in November 2025 – an openly partisan text centred on Washington’s interests, in line with the “America First” doctrine – collided head-on with many European officials, who look back with a certain nostalgia on the Joe Biden era. Yet comparing the “Made in Trump” NSS with the Biden administration’s shows that there is more continuity between the two documents than is commonly believed, even if a major difference emerges on the question of their underlying ideology.


The National Security Strategy published in November 2025 by the Trump administration has already generated a great deal of commentary, with some going so far as to describe the relationship with Europe as “a divorce finalized, pending the division of assets”. Yet its previous version, published in October 2022 by the Biden administration, already marked a break on many points: the article I devoted to it in January 2023 was titled “Prendre acte de la fin d’un monde” (“Taking stock of the end of a world”).

Naturally, tone matters a great deal: the document of the Joe Biden administration – “the good” – was far more polished and, frankly, more amiable than that of the Donald Trump administration – “the brute”. Yet if one looks past the form to analyse the substance, ruptures and continuities appear in far more nuanced colours.

Geopolitical visions that are in fact very similar

The two presidents, Democratic and Republican, and their administrations show a very high degree of continuity regarding, on the one hand, the end of economic globalization and free trade and, on the other, the prioritization of US interests worldwide.

The 2022 NSS carried a virulent indictment of the record of the past thirty years of globalized trade and drew the consequences: according to Jake Sullivan, Joe Biden’s national security adviser throughout his term, “market access was the orthodoxy of all trade policy for thirty years: that no longer matches today’s challenges”.

The key issue is now the security of supply chains, which for a number of strategic products implies a decoupling between China and the United States: economic security is once again becoming an integral part of national security.

On the domestic front, the message was the great return of the state to the economy, with the promotion of “a modern industrial and innovation strategy”, the emphasis on strategic public investment and the use of public procurement in critical markets to preserve technological primacy. The 2025 NSS says nothing different, stressing that “economic security is fundamental to national security” and taking up each sub-theme. Here the continuity is perfect.

The geographic prioritization across the two NSS documents is equally remarkable for its continuity: 1) affirmation of the primacy of the Indo-Pacific over Europe; 2) importance attached to the Americas, which moved from last place in 2015, behind Africa, to third place in 2022 and first place in 2025.

The first point implies that Washington concentrates its efforts on China, and therefore that the European continent finally takes responsibility for its own security in order to restore a strategic balance vis-à-vis Russia. The second point appears in the 2022 NSS in the rise of the Americas to third place, ahead of the Middle East, and in the 2025 NSS in the assertion of a “Trump corollary to the Monroe Doctrine”, denying competitors from outside the Americas the ability to position forces or capabilities there or to control critical assets (such as ports on the Panama Canal).

Ideological divergences

The two presidents diverge on two points of ideological cleavage: the conception of democracy, and the international system, including climate issues.

The 2022 NSS reaffirmed unambiguous US support for democracy and human rights around the world, while introducing a nuance into international relations: building on the vote by 141 states for the UN resolution condemning Russia’s aggression against Ukraine in March 2022, the Biden administration declared itself open to partnership with any state supporting a rules-based international order as defined in the UN Charter, regardless of its political regime.

The 2025 NSS, by contrast, claims nothing of the kind: it forcefully asserts that it focuses solely on the essential national interests of the United States (“America First”), proclaims a “predisposition toward non-interventionism” and claims an adaptive “Flexible Realism” premised on not pursuing regime change, as demonstrated in Venezuela, where the Chavista system was not overthrown after the United States abducted Nicolás Maduro.

Moreover, the 2025 NSS redefines the very understanding of democracy around a civilizational conception with distinctly American contours (US-style freedom of expression, freedom of religion and of conscience).

Second point of divergence: the 2022 NSS had reaffirmed Washington’s attachment to the United Nations system, cited eight times, and made the European Union (EU) a partner of choice in a bilateral EU-US framework. The 2025 NSS is the exact opposite: not only are the United Nations not mentioned a single time, but international organizations are denounced as eroding American sovereignty.

By contrast, the primacy of nations is highlighted and presented as antagonistic to transnational organizations. Moreover, the notion of ally is redefined according to adherence to democratic principles as set out above. This shift is expressed most particularly with regard to Europe.

The 2025 NSS and Europe

The section of the 2025 NSS devoted to Europe has been sharply criticized in the media of the Old Continent for its contemptuous tone; but that is not the point. The Trump administration draws a fundamental distinction between, on the one hand, nations to be treated differently according to their alignment with the American vision of democracy and, on the other, the EU, which is to be destroyed because it constitutes a harmful counter-power. In other words, the administration is not attacking Europe as a geographical entity but the European Union as a supranational organization, with the United States then reserving the right to judge the quality of the relationship to be established with each European government according to its own ideological trajectory.

The 2025 NSS thus expresses a solid bipartisan consensus on the strategic challenges facing the United States and the operational responses to be made to them, placing it in continuity with the text published by the Biden administration in 2022. But it also underscores a fundamental divergence over the values to be mobilized in meeting those challenges. This is precisely what Secretary of State Marco Rubio recalled in his speech at the Munich conference on Feb. 14, 2026.

The Conversation

Olivier Sueur is an associate researcher at the Institut d’études de géopolitique appliquée (IEGA).

ref. La Stratégie de sécurité nationale des États-Unis : 2022 contre 2025, continuités et ruptures – https://theconversation.com/la-strategie-de-securite-nationale-des-etats-unis-2022-contre-2025-continuites-et-ruptures-276223

Why do we remember things a little differently each time? From Rosalía to cognitive neuroscience

Source: The Conversation – (in Spanish) – By Marta Reyes Sánchez, Professor of the Psychology of Memory and of Learning and Conditioning. Specialization: metamemory strategies in bilingual contexts, Universidad Loyola Andalucía

ra2 studio/Shutterstock

“Whenever I remember something, I always remember it a little differently.” So sings Rosalía in Memória, one of the tracks on her latest album, Lux (2025). The lyrics of this fado, written, composed and performed together with the Portuguese singer Carminho, offer an apt analysis of a feature of human memory that psychology and cognitive neuroscience have been studying for years.

Our memory does not access recollections like a file that opens intact each time we consult it. Remembering is an active, dynamic process that involves reconstructing and transforming memories.

Remembering is not reproducing, it is reconstructing

Each time we evoke a memory, it enters a temporarily unstable state during which it can be modified before being “saved” again. This process is known as “reconsolidation”. When we remember, the memory becomes vulnerable: it can incorporate new information, change some details or be reinterpreted emotionally.

For example, it is not unusual that, when we replay a conversation we had with someone, we come over time to include words or gestures that nobody actually said. Or that something that seemed embarrassing at the time is later remembered as funny.

In this way, the act of remembering does not mean accessing an exact copy of the past, but a slightly updated version that will keep being modified in future recollections.




Read more:
Memory and judicial errors in identification lineups


This process does not always occur, nor always in the same way. Older or stronger memories tend to be more resistant to this instability and require longer retrieval periods to enter reconsolidation. In one study, for example, recent or weak memories needed to be evoked for only 3 minutes to become vulnerable, whereas more robust ones required 10 minutes to reach the reconsolidation state. Once they reached that state, however, they too could be weakened, strengthened or modified.

Protecting through change

From a neurobiological standpoint, every time we evoke a memory the brain reactivates the networks of neurons that store it. For a brief interval, the connections between those neurons (synapses) become more flexible, which allows the memory to be modified before stabilizing again. Reconsolidation thus involves specific synaptic changes: a strengthening, but also a readjustment, of the connections between neurons, which are the physical basis of our memories.

This explains why our own memory of an event changes as we recall it repeatedly. It is not that our memory fails or deteriorates; it is that each time we recall something we keep it from falling into oblivion, yet that very act makes the memory vulnerable. In other words, the act of remembering preserves memories at the cost of allowing some distortion.

Advantages of reconsolidation

The fact that memories do not remain intact for life also has advantages. In psychotherapy, for example, the reconsolidation process can be harnessed to intervene in disorders marked by painful or intrusive memories, such as post-traumatic stress, anxiety or depression.

When a memory is evoked in a safe therapeutic setting, the person can reinterpret it, reduce its emotional charge and learn to manage it more adaptively. So although memory distortions are sometimes bothersome, they also offer an opportunity to ease the distress associated with past experiences.

Is it still a real memory?

In Memória, Rosalía goes on to sing: “…and whatever that memory may be, it is always true in my mind”. This verse matches a very interesting idea that research also reveals: the confidence we feel in our memories does not always reflect their actual accuracy.

One study analysed “flashbulb memories”, which are very vivid, emotionally intense memories, such as knowing where we were on the weekend of March 13-14, 2020, when the state of alarm was declared over covid-19. These memories tend to feel especially sharp and certain.




Read more:
Why moving helps us erase bad memories


The study’s authors compared what people said they remembered immediately after a shocking event with what they recalled months or years later. They observed that, over time, the consistency of these memories declined: details changed, were lost or were reorganized. Yet people’s subjective confidence in their memories remained high. They believed they remembered with the same precision, even though objectively the memory was no longer the same. That is, even when what we evoke has been transformed several times since the original event, it can still feel real.

Studies like this one show that feeling a memory to be “very real” or “very vivid” does not guarantee its truthfulness.

But if our memories change, why don’t we notice? In part because the reconsolidation process itself reinforces the sense of authenticity. After remembering, the brain restabilizes the memory, and that updated version feels as solid as the previous one. Moreover, with time, what we evoke is the latest reconsolidated version, not the initial experience. This makes the change gradual, cumulative and hard to detect.

Memory and identity

Understood this way, memory is not just a system for recording the past but a tool for reconstructing it and, in doing so, for constructing our identity. We remember who we were as a function of who we are now: our current goals, emotions and needs. That is why memory is flexible and adaptive.

Each retrieval of a memory opens an opportunity to integrate the past with the present. Thanks to this process, we maintain a sense of personal coherence, even if accuracy is lost in the details. Reconsolidation not only stabilizes memories, it actively contributes to their long-term maintenance, strengthening and updating them as time passes.

In a recent interview, Carminho said that this was precisely her motivation in writing Memória: the importance of “being aware of myself, remembering who I am, where I come from, and how I go about deciding the future”. In the song, the protagonist asks her own heart (“recordar” comes from the Latin “recordāri”: “re-”, again, and “cordis”, heart, meaning literally to pass through the heart again) whether she is still the same person after all she has lived through:

“Do you really know me? / Time passes and you do not forget / Who I was and who I am, in the end? / Oh, my sweet heart / Tell me whether you know or not / Do you still remember me?”

(In the original Portuguese: “Será que tu me conheces? / Que o tempo passa e não esqueces / Quem eu fui e sou em fim? / Ó, meu doce coração / Diz-me se sabes ou não / Ainda te lembras de mim?”)

Memory as a living process

Far from being a defect, this changing nature of memory is one of its greatest strengths. It allows us to learn, to adapt and to give new meaning to past experiences. To remember is to transform.

So the next time a memory comes back to mind, we will know that we are probably accessing the latest version of a living memory, shaped every time we bring it into the present, and that, even if slightly different, it will feel just as convincing. Cognitive neuroscience says so… and so does Rosalía.

The Conversation

The signatories are not employees or consultants of, do not own shares in, and do not receive funding from any company or organization that might benefit from this article, and have declared no relevant ties beyond the academic appointment cited above.

ref. ¿Por qué recordamos las cosas cada vez de un modo diferente? De Rosalía a la neurociencia cognitiva – https://theconversation.com/por-que-recordamos-las-cosas-cada-vez-de-un-modo-diferente-de-rosalia-a-la-neurociencia-cognitiva-273851

The overlooked place of engineers in the history of management

Source: The Conversation – France (in French) – By Matthieu Mandard, Associate Professor in Management Science, Université de Rennes 1 – Université de Rennes

The Frenchman Henri Fayol (1841-1925), a mining engineer, occupies an important place in the history of management, yet he is little known to the general public. Wikimedia commons, CC BY

It is well known that engineers are all trained in management, and that many of them will practise it over the course of their careers. What is far less well known is that management was born in the nineteenth century out of the very activity of engineers, and that engineers have consistently been among its principal theorists.


Engineers, whose degree is regulated in France by the Commission des titres d’ingénieurs (CTI) but whose professional use of the title is unrestricted, are specialists in the design and implementation of technical projects. By definition, then, they must master the basic principles of management. But just how closely is the engineering profession tied to this discipline, devoted to developing theories and practices for steering organizations?

Contrary to an odd yet currently popular idea according to which management owes its origin to Nazism, it is rather to engineers that one should look. Their activity is in fact historically linked to the rise of management, as we wrote in a recent article whose conclusions we retrace here.

Technological revolutions and management

Management has obviously always existed: from the construction of the pyramids to the beginnings of the first industrial revolution in England in the mid-18th century, by way of the building of the cathedrals, large projects have always required steering sizeable collectives. But these were local initiatives, responding to particular technical and social contexts, that did not yet form a system. It was not until the mid-19th century and the second industrial revolution that management took shape as a body of thought of general scope on how organizations should be run.

If management emerged at that time in the United States, it was because of the growth of the railways, begun at the turn of the 19th century, which required setting up large companies to run them efficiently and under satisfactory safety conditions. This model of the large corporation later spread to other industries, such as steel, and gradually supplanted the small artisanal firms that had until then been the majority.




Read more:
Management by objectives is not a Nazi invention


This first technological revolution was followed by a second, beginning in the mid-19th century, resulting from the development of mechanization and the associated rise in production rates. It too brought changes in how companies were organized: from 1890 onward, high-performance factories emerged whose operation had to be rationalized and planned so as to exploit them to the fullest.

Next came the transport revolution, supported by the rise of the automobile and the deployment of road infrastructure. In the mid-20th century this led to the geographic extension of companies’ operations and the broadening of their scope of activity to serve new markets, and to the appearance of what is known as the multidivisional corporation.

Finally, the fourth technological revolution usually identified came after the Second World War, with the rise of computing and telecommunications. It reinforced the trend toward the geographic dispersion of companies already under way and, from the 1990s onward, gave rise to a reference model of management known as the network organization.

All told, what the record shows, and notably the recent work of Bodrožić and Adler, is that revolutions in the modes and methods of managing organizations have consistently resulted from technological revolutions, which were themselves the fruit of the work of engineers.

Engineers and the theorization of management

The changes in company operations brought about by these technological revolutions required the development of new management models, understood as precepts on the best way to steer organizations’ activities. And here again, it turns out that engineers have always been among the principal theorists of these models.

The rise of large railway companies in the 19th century was thus accompanied by the reflections of engineers from that sector – Benjamin H. Latrobe, Daniel C. McCallum and J. Edgar Thomson – who devised what were then called line-and-staff structures. Faced with the particularly harsh working conditions this model produced, programmes of social reform were then developed by authors such as the engineer George Pullman, founder of the sleeping-car company of the same name.

The operation of the high-performance factories that appeared next was in turn rationalized by well-known authors who, it is sometimes forgotten, were all three engineers: the Frenchman Henri Fayol and the Americans Henry Ford and Frederick Taylor. Here again, the harshness of the working conditions created by scientific management and the assembly line prompted reflections on how to restore a degraded social climate, conducted in part by a now-forgotten engineer named George Pennock.

The managerial practices arising from the two most recent technological revolutions follow the same pattern. The multidivisional corporation was largely theorized by a former president of General Motors, the engineer Alfred Sloan, and the quality problems it generated were extensively examined by a Toyota engineer named Taiichi Ohno. As for the network organization model, it was notably conceived and refined by engineers specializing in information systems, such as Michael Hammer and James Champy, or by a Hewlett-Packard engineer, Charles Sieloff.

How can this historical importance of engineers in theorizing management be explained? It comes down to two practical reasons: their proximity to the technological developments of their time leads them to identify early the managerial problems those changes raise, and also makes them better placed to solve them.


Engineering and management, a link to cultivate

The engineering profession has thus always been bound up with management. It is therefore no surprise to learn that, in Europe, management was first taught in engineering schools in the mid-19th century before being entrusted to business schools at the turn of the 20th century. This also explains why two of France’s most prestigious engineering schools, the École des mines de Paris and the École polytechnique, each have a research laboratory dedicated to management science (respectively, the Centre de gestion scientifique and the Centre de recherche en gestion), and why management specialists more generally teach in all engineering schools.

Given the technological challenges on the horizon (robotization, the rise of AI, energy sobriety), this link between engineers and management must be affirmed and cultivated. These developments will necessarily be accompanied by organizational changes that we will need to think through if we do not want simply to endure them.

The Conversation

Matthieu Mandard does not work for, consult for, own shares in or receive funding from any organization that might benefit from this article, and has declared no affiliation other than his research institution.

ref. La place méconnue des ingénieurs dans l’histoire du management – https://theconversation.com/la-place-meconnue-des-ingenieurs-dans-lhistoire-du-management-272951

What is ‘Edge AI’? What does it do and what can be gained from this alternative to cloud computing?

Source: The Conversation – France in French (2) – By Georgios Bouloukakis, Assistant Professor, University of Patras; Institut Mines-Télécom (IMT)

“Edge computing”, which was initially developed to make big data processing faster and more secure, has now been combined with AI to offer a cloud-free solution. Everyday connected appliances from dishwashers to cars or smartphones are examples of how this real-time data processing technology operates by letting machine learning models run directly on built-in sensors, cameras, or embedded systems.

Homes, offices, farms, hospitals and transportation systems are increasingly embedded with sensors, creating significant opportunities to enhance public safety and quality of life.

Indeed, connected devices, also called the Internet of Things (IoT), include temperature and air quality sensors to improve indoor comfort, wearable sensors to monitor patient health, LiDAR and radar to support traffic management, and cameras or smoke detectors to enable rapid fire detection and emergency response.

These devices generate vast volumes of data that can be used to ‘learn’ patterns from their operating environment and improve application performance through AI-driven insights.

For example, connectivity data from wi-fi access points or Bluetooth beacons deployed in large buildings can be analysed using AI algorithms to identify occupancy and movement patterns across different periods of the year and event types, depending on the building type (e.g. office, hospital, or university). These patterns can then be leveraged for multiple purposes such as HVAC optimisation, evacuation planning, and more.
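As a concrete illustration of the kind of pattern mining described above, here is a minimal sketch, not taken from the article: it derives an hourly occupancy profile from access-point connection counts and flags the peak hours that might drive HVAC scheduling. The sample data, the 80% peak threshold and the function names are all hypothetical assumptions for illustration.

```python
# Sketch: inferring hourly occupancy patterns from Wi-Fi access-point logs.
# Device counts per hour are made-up samples; a real deployment would
# stream these from the building's access points or Bluetooth beacons.
from collections import defaultdict
from statistics import mean

# (hour_of_day, connected_device_count) samples collected over several days
samples = [
    (8, 12), (9, 45), (10, 60), (11, 58), (12, 30),
    (8, 15), (9, 50), (10, 65), (11, 55), (12, 28),
]

def hourly_profile(samples):
    """Average device count per hour of day: a crude occupancy pattern."""
    by_hour = defaultdict(list)
    for hour, count in samples:
        by_hour[hour].append(count)
    return {hour: mean(counts) for hour, counts in sorted(by_hour.items())}

def peak_hours(profile, threshold_ratio=0.8):
    """Hours whose average occupancy exceeds 80% of the daily maximum."""
    peak = max(profile.values())
    return [h for h, c in profile.items() if c >= threshold_ratio * peak]

profile = hourly_profile(samples)
print(peak_hours(profile))  # hours that could drive HVAC or evacuation planning
```

A production system would replace the averaging step with a proper machine learning model and segment by period of the year and event type, as the article notes, but the pipeline shape (collect, aggregate, infer, act) stays the same.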

Combining the Internet of Things and artificial intelligence comes with technical challenges

Artificial Intelligence of Things (AIoT) combines AI with IoT infrastructure to enable intelligent decision-making, automation, and optimisation across interconnected systems. AIoT systems rely on large-scale, real-world data to enhance accuracy and robustness of their predictions.

To support inference (that is, insights drawn from collected IoT data) and decision-making, IoT data must be effectively collected, processed, and managed. For example, occupancy data can be processed to infer peak usage times in a building or to predict future energy needs. This is typically done on cloud platforms such as Amazon Web Services or Google Cloud Platform, which host computationally intensive AI models, including the recently introduced Foundation Models.

What are Foundation Models?

  • Foundation Models are a type of Machine Learning model trained on broad data and designed to be adaptable to various downstream tasks. They include, but are not limited to, Large Language Models (LLMs), which primarily process text; Foundation Models can also operate on other modalities, such as images, audio, video, and time series data.
  • In generative AI, Foundation Models serve as the base for generating content such as text, images, audio, or code.
  • Unlike conventional AI systems that rely heavily on task-specific datasets and extensive preprocessing, FMs introduce zero-shot and few-shot capabilities, allowing them to adapt to new tasks and domains with minimal customisation.
  • Although FMs are still at an early stage, they have the potential to unlock immense value for businesses across sectors; their rise marks a paradigm shift in applied artificial intelligence.

The limits of cloud computing for IoT data

While hosting heavyweight AI or FM-based systems on cloud platforms offers the advantage of abundant computational resources, it also introduces several limitations. In particular, transmitting large volumes of IoT data to the cloud can significantly increase response times for AIoT applications, often with delays ranging from hundreds of milliseconds to several seconds, depending on network conditions and data volume.
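A rough back-of-the-envelope model makes those delays concrete. The figures below (payload size, uplink bandwidth, round-trip time, inference time) are illustrative assumptions, not measurements from the article:

```python
def cloud_round_trip_ms(payload_mb, uplink_mbps, rtt_ms, inference_ms):
    """Rough end-to-end latency for cloud offloading:
    upload time (size / bandwidth) + network round trip + server-side inference."""
    upload_ms = payload_mb * 8 / uplink_mbps * 1000  # MB → Mbit, then seconds → ms
    return upload_ms + rtt_ms + inference_ms

# 5 MB of sensor imagery over a 20 Mbit/s uplink, 60 ms RTT, 50 ms inference
print(round(cloud_round_trip_ms(5, 20, 60, 50)))  # → 2110 (ms)
```

Even under these generous assumptions, the upload dominates: the response lands in the "seconds" range the article mentions, which motivates processing closer to the data source.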

Moreover, offloading data – particularly sensitive or confidential information – to the cloud raises privacy concerns and limits opportunities for local processing near data sources and end users.

For example, in a smart home, data from smart meters or lighting controls can reveal occupancy patterns or enable indoor localisation (for example, detecting that Helen is usually in the kitchen at 8:30 a.m. preparing breakfast). Such insights are best derived close to the data source to minimise delays from edge-to-cloud communication and reduce exposure of private information on third-party cloud platforms.




Read more:
Cloud-based computing: routes toward secure storage and affordable computation


What is edge computing and edge AI?

To reduce latency and enhance data privacy, edge computing provides computational resources (i.e. devices with memory and processing capabilities) closer to IoT devices and end users, typically within the same building, on local gateways, or at nearby micro data centres.

However, these edge resources have far less processing power, memory, and storage than centralised cloud platforms, which poses challenges for deploying complex AI models.

To address this, the emerging field of Edge AI – particularly active in Europe – investigates methods for efficiently running AI workloads at the edge.

One such method is Split Computing, which partitions deep learning models across multiple edge nodes within the same space (a building, for instance), or even across different neighbourhoods or cities. Deploying these models in distributed environments is non-trivial and requires sophisticated techniques. The complexity increases further with the integration of Foundation Models, making the design and execution of split computing strategies even more challenging.
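The core idea of split computing can be sketched in a few lines. This is a toy illustration with NumPy and a made-up three-layer network, not any production partitioning scheme: the layers are divided between two hypothetical edge nodes, and only the small intermediate activation crosses the network between them.

```python
import numpy as np

rng = np.random.default_rng(0)

# A toy 3-layer fully connected network; real split computing partitions
# a trained deep model in the same way.
weights = [rng.standard_normal((8, 16)),
           rng.standard_normal((16, 16)),
           rng.standard_normal((16, 4))]

def run_layers(x, layers):
    for w in layers:
        x = np.maximum(x @ w, 0)  # linear layer followed by ReLU
    return x

def split_inference(x, split_at):
    """Node A runs layers [0, split_at); only the intermediate activation
    is sent over the network to node B, which runs the remaining layers."""
    intermediate = run_layers(x, weights[:split_at])      # on edge node A
    return run_layers(intermediate, weights[split_at:])   # on edge node B

x = rng.standard_normal((1, 8))
full = run_layers(x, weights)
assert np.allclose(split_inference(x, 1), full)  # same output, any split point
```

The engineering difficulty the article alludes to lies in choosing the split point: it trades the size of the transmitted activation against the compute available on each node, and for Foundation Models both quantities are far larger.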

What does it change in terms of energy consumption, privacy, and speed?

Edge computing significantly improves response times by processing data closer to end users, eliminating the need to transmit information to distant cloud data centres. Beyond performance, edge computing also enhances privacy, especially with the advent of Edge AI techniques.

For instance, Federated Learning enables Machine Learning model training directly on local Edge (or possibly novel IoT) devices with processing capabilities, ensuring that raw data remain on-device while only model updates are transmitted to Edge or cloud platforms for aggregation and final training.
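The aggregation step of federated learning can be illustrated with a FedAvg-style weighted average. This is a plain-Python toy with made-up weight vectors, not the implementation of any specific framework:

```python
def federated_average(client_updates):
    """FedAvg-style aggregation: each client sends only its locally
    trained weight vector (here, a list of floats) and its sample count.
    The raw training data never leaves the device."""
    total = sum(n for _, n in client_updates)
    dim = len(client_updates[0][0])
    return [sum(w[i] * n for w, n in client_updates) / total
            for i in range(dim)]

# Two devices trained locally; the aggregator only sees their weights,
# weighted by how many samples each device trained on.
clients = [([1.0, 2.0], 100), ([3.0, 4.0], 300)]
print(federated_average(clients))  # → [2.5, 3.5]
```

The privacy property follows directly from the interface: the function's only inputs are model parameters and counts, so the aggregating edge or cloud node never handles raw sensor data.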

Privacy is further preserved during inference: once trained, AI models can be deployed at the Edge, allowing data to be processed locally without exposure to cloud infrastructure.

This is particularly valuable for industries and SMEs aiming to run Large Language Models within their own infrastructure, for instance to answer queries about system capabilities, monitoring, or task prediction. A query might concern the operational status of industrial machinery, such as predicting maintenance needs from sensor data, where the confidentiality of the underlying usage data is essential.

In such cases, keeping both queries and responses internal to the organisation safeguards sensitive information and aligns with privacy and compliance requirements.

How does it work?

Unlike mature cloud platforms, such as Amazon Web Services and Google Cloud, there are currently no well-established platforms to support large-scale deployment of applications and services at the Edge.

However, telecom providers are beginning to leverage existing local resources at antenna sites to offer compute capabilities closer to end users. Managing these Edge resources remains challenging due to their variability and heterogeneity – often involving many low-capacity servers and devices.

In my view, maintenance complexity is a key barrier to deploying Edge AI services. At the same time, advances in Edge AI present promising opportunities to enhance the utilisation and management of these distributed resources.

Allocating resources across the IoT-Edge-Cloud continuum for safe and efficient AIoT applications

To enable trustworthy and efficient deployment of AIoT systems in smart spaces such as homes, offices, industries, and hospitals, our research group, in collaboration with partners across Europe, is developing an AI-driven framework within the Horizon Europe project PANDORA.

PANDORA provides AI models as a Service (AIaaS) tailored to end-user requirements (e.g. latency, accuracy, energy consumption). These models can be trained either at design time or at runtime using data collected from IoT devices deployed in smart spaces. In addition, PANDORA offers Computing resources as a Service (CaaS) across the IoT–Edge–Cloud continuum to support AI model deployment. The framework manages the complete AI model lifecycle, ensuring continuous, robust, and intent-driven operation of AIoT applications for end users.

At runtime, AIoT applications are dynamically deployed across the IoT–Edge–Cloud continuum, guided by performance metrics such as energy efficiency, latency, and computational capacity. CaaS intelligently allocates workloads to resources at the most suitable layer (IoT-Edge-Cloud), maximising resource utilisation. Models are selected based on domain-specific intent requirements (e.g. minimising energy consumption or reducing inference time) and continuously monitored and updated to maintain optimal performance.
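The placement logic has roughly this shape. To be clear, the cost figures and the function below are invented for illustration; PANDORA's actual CaaS scheduler is not described at this level of detail in the article:

```python
# Hypothetical per-layer cost profiles for one workload unit.
LAYERS = {
    "iot":   {"latency_ms": 5,   "energy_mj": 40, "capacity": 1},
    "edge":  {"latency_ms": 30,  "energy_mj": 15, "capacity": 10},
    "cloud": {"latency_ms": 250, "energy_mj": 60, "capacity": 1000},
}

def place_workload(max_latency_ms, needed_capacity, intent="energy"):
    """Keep only the layers that satisfy the latency and capacity
    constraints, then pick the one that best serves the stated intent
    (minimum energy by default, minimum latency otherwise)."""
    feasible = {name: p for name, p in LAYERS.items()
                if p["latency_ms"] <= max_latency_ms
                and p["capacity"] >= needed_capacity}
    if not feasible:
        return None  # no layer can meet the requirements
    key = "energy_mj" if intent == "energy" else "latency_ms"
    return min(feasible, key=lambda name: feasible[name][key])

# A moderately latency-sensitive workload that needs 4 capacity units:
print(place_workload(max_latency_ms=100, needed_capacity=4))  # → edge
```

The "intent-driven" aspect is the `intent` argument: the same feasibility filter applies, but the optimisation objective changes with the end user's requirement.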




The Conversation

This work has received funding from the European Union’s Horizon Europe research and innovation actions under grant agreement No. 101135775 (PANDORA). The project has a total budget of approximately €9 million and brings together 25 partners from multiple European countries, as well as IISC (India) and UOFT (Canada).

ref. What is ‘Edge AI’? What does it do and what can be gained from this alternative to cloud computing? – https://theconversation.com/what-is-edge-ai-what-does-it-do-and-what-can-be-gained-from-this-alternative-to-cloud-computing-262357

Killer beetles in the baobabs: researcher warns of risk to African trees

Source: The Conversation – Africa – By Sarah Venter, Baobab Ecologist, University of the Witwatersrand

Baobabs aren’t supposed to fall. They can live for up to 2,500 years. Famous for their resilience, these huge trees have stood tall across Africa, weathering droughts and winds that flatten everything else.

A small population of 102 baobabs is also found in Oman on the south-eastern tip of the Arabian Peninsula, where baobabs were introduced over 1,500 years ago by traders from Africa.

However, several baobabs have recently collapsed and died in Oman, not from disease, drought or old age, but from infestation by a beetle that has suddenly proven deadly to baobab trees – the mango stem-borer (Batocera rufomaculata).

I’m a baobab ecologist who worked with two environmental scientists from Oman, Ali Salem Musallm Akaak and Mohammed Mubarak Suhail Akaak, to investigate how many trees had been infected by the beetle, how the infestation had affected the trees and how many had died as a result.

We surveyed 91 baobab trees in Oman and found that six had been killed by the beetle. A further 12 baobab trees were infested by the beetle’s larvae.

This is the first time that an insect has been found to kill adult baobab trees. The same beetle is known to damage and kill other species of trees.

Our findings have important implications for the conservation and management of baobabs throughout Africa. The mango stem-borer has not yet been found in mainland Africa, but it may become a new threat to baobabs if it spreads there.

Our findings allow for early detection as well as research into effective ways to control the beetle before it spreads to Africa.

If the mango stem-borer were to reach mainland Africa, where the baobab is considered a keystone species, it could devastate both ecosystems and livelihoods. Baobabs have over 300 uses for people, including fibre made from the bark, food from the leaves and the fruit, which is harvested for its nutritious pulp and sold in local and global markets.

Meet the killer

The mango stem-borer is native to south-east Asia. Adults live for only two to three months, feeding on shoots and bark. During that time females can lay up to 200 eggs, cutting small slits in tree bark and sealing each egg inside.

The grubs or larvae spend almost a year hidden within the wood, tunnelling through the living tissue that carries water and nutrients. As they feed, they weaken the tree and eventually kill it.

This beetle has long been one of Asia’s most damaging fruit-tree pests. It attacks mango, jackfruit, mulberry and fig trees, often killing mature hosts. It spread to the Middle East, where it was first recorded in 1950 and has damaged fig plantations.

In 2021, an adult baobab in Wadi Hinna, a semi-arid valley in Oman’s Dhofar Mountains, collapsed and died. When researchers examined the fallen trunk, they discovered it was infested by mango stem-borer larvae.

By 2025, seven baobabs had died, and many more were infected, confirming that a seemingly innocuous fruit-tree pest had found a new host.




Read more:
Madagascar’s ancient baobab forests are being restored by communities – with a little help from AI


The very qualities that make baobabs extraordinary survivors in dry climates also make them ideal nurseries for borer beetle larvae. Their stored water, soft trunks and nutrient-rich tissue feed and protect larvae for nearly a year until they mature.

As the larvae feed, they hollow out the interior of the baobab, leaving the outer bark intact and the infestation hidden, until the stem suddenly collapses.

Battling the beetle

When the first deaths were recorded, Oman’s Environment Authority launched an emergency control programme with help from local communities and researchers.

Infested trees were treated with systemic insecticides, larvae were manually removed from trunks, and light traps were set to attract and kill adult beetles at night. Tree stems were also coated with agricultural lime and fungicide to deter further egg-laying.

These actions seem to have slowed the outbreak, but they are labour-intensive and feasible only for a small area. Across a continent, such methods would be impossible to maintain.




Read more:
The secret life of baobabs: how bats and moths keep Africa’s giant trees alive


In Asia, scientists have identified natural enemies of the mango stem-borer, including parasitic mites and nematodes. These could be used as the base of a long-term biological control strategy.

My research argues that using biological control to stop the beetle reproducing must be developed as a priority before infestations cross into Africa.

Preventing a spread to Africa

Adult beetles can fly up to 14 kilometres in a single night, and global trade makes it easy for insects to cross borders unnoticed, hidden in plants and ornamentals destined for the agriculture and garden sector.




Read more:
Baobab trees all come from Madagascar – new study reveals that their seeds and seedlings floated to mainland Africa and all the way to Australia


The beetle already occurs on islands such as Madagascar, Réunion and Mauritius. Baobab researchers do not know if the mango stem-borer has attacked the local baobab populations of Madagascar, where the trees are an indigenous plant.

Early detection and prevention are far cheaper, and far more effective, than trying to stop an outbreak once it begins. Stronger biosecurity inspections and other measures are needed at African ports and borders to intercept the beetle, particularly in shipments of wood and live plants.

Collaboration between research institutions, agricultural departments and the baobab industry will also help: sharing data, testing biological controls and setting up monitoring systems before further outbreaks occur.

A warning – and an opportunity

The death of baobabs in Oman is more than a localised problem. It’s a warning of what could happen elsewhere if the beetle spreads unchecked.

But it also offers a chance to prepare. If African countries act now, tightening biosecurity, supporting research and raising awareness, they can protect one of the continent’s most iconic and life-sustaining trees before this threat ever reaches African shores.

The Conversation

Sarah Venter receives funding from the Baobab Foundation.
Sarah Venter is an advisory member of the African Baobab Alliance.

ref. Killer beetles in the baobabs: researcher warns of risk to African trees – https://theconversation.com/killer-beetles-in-the-baobabs-researcher-warns-of-risk-to-african-trees-275715