When war looks like prophecy: How U.S. ‘end time’ narratives frame the war with Iran

Source: The Conversation – Canada – By André Gagné, Full Professor, Department of Theological Studies, Concordia University

After the United States and Israel began bombing Iran, killing some of the government’s top leaders — including its supreme leader, Ali Khamenei — some of U.S. President Donald Trump’s most loyal evangelical supporters quickly framed the war as a religious battle.

On the morning the attacks started, American evangelist Franklin Graham, president of the Billy Graham Evangelistic Association and founder of Samaritan’s Purse, posted on X: “Pray for our military in the operation against Iran, for President @realDonaldTrump, and that the people of Iran will be set free from the bondage of Islam.”

More than 1,000 civilians have been killed in Iran.

In my book, American Evangelicals for Trump: Dominion, Spiritual Warfare and the End Times, I explain how one of the contemporary interpretations of the “end times,” premillennial dispensationalism, remains widely influential among U.S. evangelicals.

Dispensations are seen as distinct periods in history, believed to be appointed by God to govern and organize the affairs of the world. Dispensationalism functions both as a method for interpreting the Bible and as a framework for understanding its history.

It teaches that Christ will return at the end of the present age and inaugurate a thousand-year reign of peace and justice on Earth, commonly referred to as the Millennium.

A systematic roadmap

Since the U.S. attack on Iran, Greg Laurie, founder and pastor of Harvest Christian Fellowship in California, has done a series of videos promoting his dispensational reading of current events. For Laurie, the next event on “God’s calendar” is known as the Rapture of the Church, when “born-again” believers are taken up to heaven.

In some readings of biblical prophecy, the Rapture is followed by the Great Tribulation, a seven-year period of turmoil. During that time, it is believed that the Jewish people will rebuild their temple in Jerusalem, divine judgments will strike the Earth and a political figure known as the Antichrist will rise to power.

The period culminates in a final confrontation between Jesus and the nations gathered by the Antichrist against Israel, called Armageddon. After that conflict, Christ is expected to establish his millennium of rule from Jerusalem, with the nations of the world ultimately brought under his authority.

Some evangelicals interpret the struggle between Iran and Israel through the same eschatological or “end times/end of history” lens.

Iran, known in antiquity as Persia, is identified in certain prophetic readings as one of the nations destined to play a role in a conflict described in Ezekiel 38–39, often called the battle of Gog and Magog.

The evangelical influencer Traci Coston also used a numerological twist to bolster characterizations of Trump as a new King Cyrus, a notion popularized by Lance Wallnau, an influential Pentecostal entrepreneur.

Coston wrote that Iran has been under “the oppressive Islamic regime” for 47 years and Trump is the 47th president. She likens Trump to “a pagan political leader” whom God anoints “to break open gates and shift history for the sake of His people.”

Trump has leveraged such views about himself: on March 9, he reposted on his Truth Social account a 2007 prophecy by Kim Clement, a musician, pastor and popular prophetic figure who died in 2016.

Spiritual warfare and an end times revival

Among some pro-Trump leaders in neo-Pentecostal and neo-Charismatic circles, the conflict with Iran is interpreted as spiritual warfare. They view global events as part of an ongoing struggle between divine and demonic forces and believe the prayers of Christians help push back what they see as evil powers.

Lou Engle, a U.S. neo-Charismatic prophet, posted one day before the attack that in 2006, a group of 70 believers gathered in Boston for a prolonged period of prayer lasting 40 days and nights. He referenced the prophecy of Jeremiah 49:34-38, which names the judgment against Elam — an ancient region located in what is now southern Iran. Mobilizing this text, he said believers prayed “God would break the bow of Islam and set His throne in Iran.”

The Jewish feast of Purim, which was celebrated on March 2 and 3, was leveraged to explain the current conflict as spiritual warfare.

This framing is rooted in how some of these pro-Trump Pentecostal leaders see examples of cosmic battles in biblical texts, such as Daniel 10:12-21, which depicts supernatural forces at work in conflicts among nations.




Read more:
What is the ‘Seven Mountains Mandate’ and how is it linked to political extremism in the US?


Citing such passages, influential proponents of this spiritual warfare way of thinking, like Wallnau, have argued that a “territorial spirit” fuels conflict. According to this view, only spiritual warfare can dislodge its influence; the battle must be waged to dispel the nefarious demonic forces that prevent the preaching of the gospel in closed areas.

Many of these pro-Trump neo-Pentecostal leaders adhere to a Victorious Eschatology, where the expansion of the Kingdom of God will be seen worldwide, and Christianity will rise in power, unity, maturity and glory before Christ’s return.

This framework is another end-times scenario, where some believe that a great spiritual awakening will occur, leading to massive conversions to Christianity.

Views not new

The idea of an end-times global awakening isn’t new. Early Pentecostals initially believed they lived in the end times and that the gift of tongues was given for missionary work. Equipped with the supernatural capacity to speak unlearned languages, they could go throughout the world and preach the gospel before the return of Christ.

Later, the mid-20th century movement known as the New Order of the Latter Rain, a group that experienced a revival in 1948 in North Battleford, Sask., shared a similar outlook.

Their views ended up having a profound impact on the charismatic movement and the independent charismatic church movement globally. The New Order broke away from classical Pentecostals in Canada because of the “spiritual drought” its members felt among Pentecostals; they sought a fresh spiritual experience.

‘Decisions on the basis of theology’

When U.S. Secretary of State Marco Rubio says that the Iranian regime makes “decisions on the basis of theology, their view of theology which is an apocalyptic one,” and Secretary of War Pete Hegseth states that “crazy regimes, like Iran, hell bent on prophetic Islamist delusions, cannot have nuclear weapons; it’s common sense,” the rhetoric frames Tehran as uniquely driven by religious extremism.

Yet pro-Trump Christian leaders have been welcomed into the Oval Office to lay hands on the president in prayer, while Trump has amplified prophetic messages about his rise to political power, signalling to his supporters that his presidency was divinely ordained.

The contrast is striking. When religious belief shapes the politics of rivals, it is labelled dangerous theology. Yet, when it appears in Washington, it is cast as divine providence.

The Conversation

André Gagné does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

ref. When war looks like prophecy: How U.S. ‘end time’ narratives frame the war with Iran – https://theconversation.com/when-war-looks-like-prophecy-how-u-s-end-time-narratives-frame-the-war-with-iran-278292

What a gaping hole on a bridge reveals about aging infrastructure in Canada

Source: The Conversation – Canada – By Amirreza Torabizadeh, PhD candidate, Civil Engineering, Concordia University

A hole on the Sauvagine Bridge in Châteauguay, Québec, on March 4, 2026. (Eric Allard, Mayor of Châteauguay/Facebook)

When a large hole recently opened up in the deck of a bridge in Châteauguay, Québec, many people were understandably alarmed. After seeing images of exposed reinforcing steel and damaged concrete, some residents told local media they were reluctant to cross the bridge.

For drivers who cross the structure every day, the scene raises a question: how can such a dangerous incident suddenly happen on a bridge that is still open to traffic?

In reality, incidents like these rarely occur overnight. What the public sees as a sudden failure is often the visible result of deterioration that has been developing inside the structure for many years.


Canadian bridges built decades ago

In most cases, what fails in situations like these is not the entire bridge but the deck — the concrete slab that vehicles drive on. While serious, a localized deck failure is different from the collapse of the bridge’s primary load-bearing structure.

Bridges are typically designed with multiple structural components that share loads, and engineers carefully assess these elements before deciding whether traffic can continue safely on part of the structure.

Still, the appearance of such damage highlights a broader challenge facing cities across Canada: aging infrastructure.

Many bridges currently in service were built decades ago, often in the 1950s, ’60s and ’70s. Over time, Canada’s unforgiving environmental conditions gradually deteriorate reinforced concrete structures.

In Québec, the combination of freeze–thaw cycles, water infiltration and the use of de-icing salts during winter creates particularly harsh conditions for bridge decks. Chlorides from road salts can penetrate the concrete and corrode the steel reinforcement inside. As corrosion progresses, the expanding rust causes cracking and separation within the concrete, sometimes leading to pieces of the deck breaking away.

Because this process develops internally, deterioration may not always be immediately visible from the surface. By the time cracks or holes appear, the damage could have progressed for years.

Identifying problems in advance

For engineers, one of the main challenges is to understand what damage means for the structural performance of the entire bridge. Visual inspections remain an essential tool for detecting damage, but they don’t always reveal how deterioration affects a bridge’s structural behaviour.

Our research has found that computational modelling may provide important insights about how to interpret the damage. Numerical simulations can mimic mechanisms like cracking, material degradation and changes in the interaction between steel reinforcement and concrete. By incorporating these effects into structural analyses, engineers can better estimate how much capacity an aging structure may still retain and identify potential vulnerabilities before they lead to more serious problems.




Read more:
Aging bridges are crumbling. Here’s how new technologies can help detect danger earlier


In addition to visual inspections and monitoring data, computational modelling can offer a cost-effective way to assess aging infrastructure. Simulations of virtually created scenarios allow engineers to investigate how deterioration mechanisms — such as cracking, corrosion or degradation of the bond between reinforcement and concrete — influence the structural behaviour of a bridge.

These simulations can help evaluate how local damage, like deterioration in a bridge deck, may influence the response of the bridge’s overall structure. Because these analyses rely primarily on computational tools rather than large-scale physical interventions, they can provide valuable insights at relatively low cost and help guide more informed decisions on maintenance and retrofitting.

Long-term safety

The incident in Châteauguay is yet another reminder that Canada’s infrastructure is rapidly aging. To ensure public safety, policymakers must be proactive instead of reactive.

To ensure safety, better tools must be developed to understand the hidden processes that gradually weaken structures over time. These tools can enable faster, better-informed repair and retrofitting interventions.

As bridges across the country continue to age, ensuring their long-term safety will require a combination of regular inspection, timely maintenance, advanced engineering analysis and the application of effective strengthening techniques when needed.

Troubles may begin with big holes in bridge decks, but they ultimately point to the need for much larger conversations about how governments maintain and renew the infrastructure that millions of people rely on every day.

The Conversation

The authors do not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and have disclosed no relevant affiliations beyond their academic appointment.

ref. What a gaping hole on a bridge reveals about aging infrastructure in Canada – https://theconversation.com/what-a-gaping-hole-on-a-bridge-reveals-about-aging-infrastructure-in-canada-278465

Billing students automatically for textbooks? Look elsewhere to solve affordability issues

Source: The Conversation – Canada – By Madelaine Vanderwerff, Associate Professor, University Library, Mount Royal University

Some Canadian universities are exploring automatic textbook billing programs — sometimes called academic materials programs or “inclusive access” programs.

These are institutional agreements with vendors to provide digital access to course materials, and automatically charge students for them as part of their fees.

Rising textbook costs are a reality for many students, and are often described as a pressing challenge in policy discussions.

The Canadian Association of Research Libraries has raised concerns about automatic textbook billing, including potential threats to faculty freedom, student privacy and access to diverse learning materials.

As professional academic librarians and faculty members, we share these concerns and agree with some student groups that argue automatic textbook billing programs are the wrong answer to textbook affordability challenges. Here’s why.

Role of libraries

Along with course instructors, librarians play critical roles in supporting student success by providing equitable access to learning materials.

Libraries are well positioned to bolster equitable access to learning materials for students and to support faculty in using library collections, as well as openly licensed and low- or no-cost materials, for teaching and learning where possible.

Libraries have missions of building and providing access to collections that support teaching and learning, especially at small and mid-sized institutions where collections focus on undergraduate courses. They do this through things like textbook buying programs, eBook licensing and journal subscriptions, as well as working with instructors to make specific resources available.

Studies show that libraries’ lending or licensing of course materials can reduce student costs. It can also improve access and engagement with materials.

Claims, evidence and faculty perspectives

To better understand how Canadian universities are addressing course material affordability, we reviewed publicly available policies, campus initiatives and library programs at 22 mid-sized universities (with 7,000–20,000 full-time students).

We also conducted an online survey in English and French, targeting tenured, tenure-track and sessional faculty members at these universities. We received 322 responses from nine provinces, representing faculty from a range of academic disciplines and various levels of teaching experience.

Participants responded to multiple choice, item-ranking and open-ended questions about selecting course materials for a recently taught course and choosing materials more generally.

Faculty reported a wide range of approaches to course materials. Some faculty are embracing the digital learning tools offered by commercial textbook publishers to engage and motivate students, and value these tools as solutions to the workload challenges of teaching large classes.

But, while some rely on commercial textbooks, many use Open Educational Resources, library-licensed materials or freely available content. About 34 per cent of faculty reported not using a textbook at all, instead curating materials that are relevant to their course outcomes and that reduce or eliminate costs for students.

This range of practices reflects deliberate, faculty-led strategies, also documented in other research, to balance affordability, accessibility, quality and relevance — approaches that institution-wide billing programs are not designed to support.

Measuring student success

Proponents of automatic textbook billing programs have highlighted benefits, including increased course completion rates, grades and cost savings.

However, faculty in our survey emphasized broader measures of student success. They cited outcomes that are well-established aims of university study, such as academic research competency, information literacy and critical thinking skills.

They also described their course materials in terms of helping students develop these skills, not just providing access to content.

Faculty noted challenges with digital-only textbooks, including poor readability, limited usability and restricted rental periods. Many highlighted the importance of diverse, relevant content from a range of creators reflecting diverse perspectives. These include print-based and independently published materials unavailable from major vendors.

While a majority of faculty in our study were concerned about cost, quality and relevance were their primary concerns, and many are responding in context-sensitive ways to the related challenges of affordability pressures and shifting student learning needs.

Local solutions

In a publishing landscape marked by consolidation, elimination of print and restricted digital lending, automatic textbook billing programs may limit faculty choice, reduce diversity of materials and constrain equitable access.

Decisions about course materials reflect instructors’ professional judgment, subject-matter expertise and knowledge of student needs, and are central to academic freedom in teaching.

Faculty we surveyed were clear that course material selection should remain the purview of academic staff, not administrative or non-academic units. This includes the freedom to choose materials, determine how they are used and accessed, and decide whether students must pay for materials to learn in their courses.




Read more:
Textbooks could be free if universities rewarded professors for writing them


The answer to textbook affordability isn’t to hand course material decisions to publishers. It’s in partnerships with libraries and investments in open education programs and services that help faculty make better, more equitable choices.

The Conversation

The authors do not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and have disclosed no relevant affiliations beyond their academic appointment.

ref. Billing students automatically for textbooks? Look elsewhere to solve affordability issues – https://theconversation.com/billing-students-automatically-for-textbooks-look-elsewhere-to-solve-affordability-issues-274076

As war raises oil prices, households pay while energy companies profit

Source: The Conversation – Canada – By Philippe Le Billon, Professor, Geography Department and School of Public Policy & Global Affairs, University of British Columbia

War is costly. The ongoing American-Israeli war on Iran is already reverberating through the global economy. For most people, including American citizens, it means higher fuel prices and greater economic uncertainty.

But for a narrower group of entities, war can also be extraordinarily profitable. Chief among them are segments of the United States oil and gas industry, which have already profited from Russian President Vladimir Putin’s decision to invade Ukraine and the ensuing sanctions on Russian oil and gas exports.

Now, the escalation of hostilities between the U.S., Israel and Iran has once again rattled global energy markets. Fighting and the closure of the Strait of Hormuz — one of the world’s most important oil shipping routes — have triggered what some have described as “the biggest oil disruption in history.”




Read more:
What is the Strait of Hormuz, and why does its closure matter so much to the global economy?


By early March, oil prices had briefly surged to US$119 per barrel — roughly double their level at the end of 2025. Prices have since settled at near US$100 a barrel, though volatility remains.

The escalation illustrates a familiar pattern in the political economy of fossil fuels: public costs paired with private windfalls.

The shock to global energy markets

Three months ago, few analysts expected 2026 to be a particularly profitable year for fossil fuel producers. Global supply was expanding rapidly and U.S. gas prices were expected to fall below US$3 a gallon.

Production growth in the U.S., Canada, Brazil and Argentina was colliding with weaker demand growth and the ineffectiveness of sanctions on exports from Russia, Iran and Venezuela. Many analysts warned of an emerging glut that could push prices downward. The International Energy Agency, for instance, projected a potential global oil surplus of nearly four million barrels per day in 2026.

That outlook changed abruptly following the U.S.-Israeli attack on Iran and Iran’s retaliatory attacks on energy infrastructure and tanker traffic through the Strait of Hormuz, a strategic chokepoint that normally carries roughly one-fifth of the world’s traded oil and natural gas.

Even a partial disruption carries immediate consequences.

Though the strait has been a cornerstone of U.S. and world energy security for more than 60 years, the Donald Trump administration apparently underestimated the possibility that the Iranian regime would blockade it and pummel U.S.-allied countries in the region.

For consumers and most businesses, such price spikes function as a tax. Higher energy costs ripple through transport, food production, manufacturing and household budgets. American drivers feel the impact at the pump, while industries dependent on fuel or petrochemicals see their operating costs climb.

The hidden household costs of war

Estimates suggest that for every increase of US$10 per barrel, additional fuel costs amount to roughly US$560 per year per American household, including costs embedded in goods and services.

If prices remain at around US$86 instead of the expected US$51 forecast for 2026, the added burden could reach about US$2,000 per household annually.

These figures do not include the direct military expenditures, which were conservatively estimated at US$11 billion for the first week of strikes against Iran.

Even military spending of US$200 million per day (one-tenth of the highest estimates at the current intensity) would amount to an additional cost of US$541 per household annually.

In short, a prolonged war combining high energy prices and sustained military expenditures would likely amount to between three and four per cent of the median U.S. household expenditure — roughly half of what many families spend annually on food or health care.
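As a rough cross-check, the per-household figures above can be reproduced with simple arithmetic. This sketch assumes roughly 135 million U.S. households, an illustrative figure consistent with the article’s estimates but not stated in it:

```python
# Rough cross-check of the per-household cost estimates above.
# Assumption (illustrative, not from the article): ~135 million U.S. households.
US_HOUSEHOLDS = 135e6

# Fuel: ~US$560 per household per year for every US$10/barrel price increase.
cost_per_10_per_barrel = 560

# Oil at ~US$86 instead of the US$51 forecast: a US$35/barrel gap.
barrel_gap = 86 - 51
fuel_burden = barrel_gap / 10 * cost_per_10_per_barrel
print(f"Added fuel burden: ~US${fuel_burden:,.0f} per household per year")
# -> ~US$1,960, i.e. the "about US$2,000" cited above

# Military spending of US$200 million per day, spread across households:
military_burden = 200e6 * 365 / US_HOUSEHOLDS
print(f"Military cost: ~US${military_burden:,.0f} per household per year")
# -> ~US$541, matching the article's figure
```

Together, roughly US$2,500 per household per year lands in the three-to-four-per-cent range of median household spending that the article describes.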

Lessons from recent wars

Recent history offers revealing precedents.

The costs of the Iraq War (2003 to 2011) for Americans have been estimated at about US$1.2 to $3 trillion in total long-term costs, equivalent to about US$16,700 to US$41,750 per household in current U.S. dollars. Yet the war did achieve the goal of reopening access to Iraqi oil fields for American oil companies.

More recently, the invasion of Ukraine by Russia cost an estimated one per cent of global GDP in 2022 and added 1.5 per cent to global inflation in 2022-23. Ukraine, of course, paid the largest price for the war, but direct impacts in Europe amounted to about 1 trillion euros.

Much of these costs ultimately translated into profits for oil and gas companies, especially liquefied natural gas (LNG) companies from the U.S. and producers in Australia and the Gulf states.

Profits on a single LNG shipment from the U.S. to Europe increased fivefold from about US$17 million to US$102 million.

A similar dynamic is now unfolding again.

Who really benefits from rising oil prices?

This time, with major Gulf states themselves exposed to the conflict, U.S. and other exporters less directly affected by the war may have even greater room to increase profits. American LNG companies could see windfalls approaching US$20 billion per month.

The main lesson is that petro-states, including Iran, Russia and the U.S., don’t hesitate to go to war partly because they believe oil revenues will bail them out, if not further enrich them.

In fact, in seeking to justify the attack on Iran and the continuation of the conflict, Trump argued that “the United States is the largest oil producer in the world, by far, so when oil prices go up, we make a lot of money.”

This, of course, depends on who “we” refers to. The populations of most petro-states have paid dearly for the wars involving their countries, whether in Angola, Chad, Iraq, Libya, Nigeria, Russia, Syria or, now, Iran.

The U.S. has fared much better economically, but the gains have been mostly for its companies, not its population. Higher oil and natural gas prices generate enormous revenues for U.S. oil producers and LNG exporters along the Gulf Coast as global gas markets tighten. Investors and shareholders in these sectors stand to gain from rising margins and market valuations.

American households, however, face the opposite effect. Fuel prices rise. Inflationary pressures intensify. Transport and heating costs increase.

The gains accruing to producers are therefore not only partially financed by the most import-dependent countries with the least strategic reserves but also by low-income households who are stuck in a carbon-intensive economy they can least afford to escape.

The Conversation

Philippe Le Billon receives funding from SSHRC.

ref. As war raises oil prices, households pay while energy companies profit – https://theconversation.com/as-war-raises-oil-prices-households-pay-while-energy-companies-profit-278052

Will your electric car burst into flames? A solid-state battery would reduce the risk

Source: The Conversation – Canada – By Taiana Lucia Emmanuel Pereira, Postdoc Fellow, Chemistry, McMaster University

Canada recently signed a new trade agreement with China, reducing tariffs on up to 49,000 Chinese electric vehicles (EVs) each year. By 2030, half of these imported vehicles are anticipated to be “affordable EVs” costing less than $35,000.

This could make electric cars a more budget-friendly option for Canadians. However, public trust remains fragile, shaped largely by fears of EV battery fires.

In 2024, a high-speed crash on Toronto’s Lake Shore Boulevard caused a Tesla to burst into flames, killing four passengers; the images circulated widely online. Months later, another Tesla caught fire on Highway 403 in Ontario, again shutting down traffic.

Evidence shows that the risks of EVs bursting into flames while you drive are low. However, these events have caused some public anxiety.

Solid-state batteries offer a promising new solution. They replace the flammable liquid in existing EV batteries with a solid electrolyte. This reduces the risks of spontaneous combustion when batteries are damaged or when they overheat.

Mercedes-Benz recently trialled an EQS sedan with a solid-state battery. The car drove 1,205 kilometres from Stuttgart in Germany to Malmö in Sweden without a charging stop.

Chinese automaker Chery says it plans to release its first electric vehicle with a solid-state battery later this year. The company says the design could boost energy density and cold-weather performance, with targeted ranges of up to 1,500 kilometres even in sub-zero temperatures.

Canadian researchers are also playing an important role in advancing solid-state technology. I am part of a research team at McMaster University studying battery chemistry at the atomic level to help turn solid-state batteries into a practical technology.

How do lithium-ion batteries work?

EVs rely on lithium-ion batteries rather than gasoline, but the basic idea is similar: they store energy and release it when you need it. These batteries are made up of two electrodes, one positive (cathode) and one negative (anode), separated by an electrolyte that allows lithium ions to move between them.

When the battery powers a device or a vehicle, electrons flow through the external circuit to produce electricity, while lithium ions travel inside the battery from the anode to the cathode. Charging the battery simply reverses this process, pushing the lithium ions back to where they started.

How lithium-ion batteries work. (PhysicsLearning)

How much power a battery can deliver depends largely on how quickly and how many lithium ions can move between the two electrodes.

Today’s batteries rely on liquid electrolytes. This allows lithium ions to move easily and efficiently, giving the car quick acceleration, steady highway performance and consistent response when you press the pedal. How far a car can go on a single charge, however, depends mainly on how much lithium the electrodes can store.

Why do batteries catch fire?

When lithium-ion batteries are damaged or experience internal failures, they can overheat and enter a process known as thermal runaway. This can trigger intense fires that are hard to extinguish and may even reignite hours later.

A major reason is the liquid electrolyte, which is typically made from flammable organic solvents. If the battery overheats, the liquid can act as fuel, worsening the fire. Solid-state battery technology replaces the flammable liquid with a solid electrolyte.

This video shows five cylindrical lithium-ion battery cells forced into thermal runaway under test conditions.

Safer batteries with higher performance?

Solid electrolytes are generally non-volatile and mechanically robust. They reduce the risk of leakage and limit the formation of oxygen-rich volatile decomposition products. They can act as a physical barrier that slows the growth of the lithium filaments that can short-circuit a battery.

Together, these features reduce two major triggers of thermal runaway: internal short circuits and rapid heat-releasing chemical reactions in the electrolyte.

In our research group, we use solid-state nuclear magnetic resonance to understand how lithium ions move inside solid electrolytes. These experiments let us track both the local chemistry and the longer-range ion transport that determine how well a material will work in a battery. By linking these atomic-scale insights to battery performance, we can help design better solid electrolytes for safer electric vehicles.

Beyond safety, solid electrolytes also enable higher-performance batteries. They make it possible to use lithium metal anodes and high-voltage cathodes, which can increase energy density compared to today’s graphite-based batteries.

For EVs, this could mean longer driving range or smaller, lighter battery packs without sacrificing performance.

Why liquid electrolytes still dominate

Despite their safety risks, liquid electrolytes remain the industry standard.

They provide high ionic conductivity at room temperature, ensuring fast charging, strong acceleration and reliable performance across a wide range of conditions. They also connect well with the electrodes, allowing electricity to flow easily and keeping the battery’s design simpler. Decades of industrial experience have made them relatively inexpensive and easy to manufacture at scale.

In contrast, many solid electrolytes suffer from mechanical brittleness, which means they can crack during battery cycling and lose contact with the electrodes. In addition, solid electrolytes often struggle to make good connections with electrode materials, and chemical reactions at these interfaces can form resistive layers that reduce battery performance.

As a result, while solid-state batteries show great promise, liquid electrolytes have offered the best balance of performance, cost and ease of manufacturing in EVs to date.

Canada’s role in the transition

The recent trade agreement with China could give Canada faster access to the advanced battery technologies already developed at scale in China.

However, many Canadian researchers are already playing an important role in advancing EV battery technology by investigating new electrolyte materials and battery interfaces. Canadian federal programs are supporting battery research, clean-energy initiatives and domestic battery manufacturing, positioning Canada within the global EV transition.




Read more:
Lower tariffs on Chinese electric vehicles could boost adoption and diversify Canada’s trade


Advances in materials science, interface engineering and battery chemistry are improving the performance and durability of solid electrolytes. What once existed only in laboratories is moving into pilot production and early vehicle testing.

In the long run, solid electrolytes could reduce fire risk while enabling longer ranges and lighter battery packs, making EVs both safer and more capable.

The Conversation

Taiana Lucia Emmanuel Pereira receives funding from NSERC and MITACS.

ref. Will your electric car burst into flames? A solid-state battery would reduce the risk – https://theconversation.com/will-your-electric-car-burst-into-flames-a-solid-state-battery-would-reduce-the-risk-277042

Israeli strikes on Tehran oil depot highlight gaps in international law

Source: The Conversation – Canada – By Alexandra R. Harrington, Visiting Scholar, McGill University Faculty of Law, Centre for Human Rights and Legal Pluralism, McGill University

One of the most alarming incidents to occur in the United States-Israel war against Iran was the recent bombing of a fuel depot in Tehran. Harrowing images showed toxic black smoke blanketing the skies above the city. Residents reported difficulty breathing and burning eyes and turned to wearing face masks.

Soot and toxic chemicals released from the bombing then came down on civilian populations as polluted “black rain,” further exacerbating the health and environmental impacts. In response to the attack, Iran’s foreign minister Abbas Araghchi said: “Israel’s bombings of fuel depots in Tehran violate international law and constitute ecocide.”

The attack on the fuel depot is more than a stark reminder of the costs of war. It also tells the story of a large gap in international legal protections for civilians and the environment from the targeting of facilities containing harmful chemicals that are not classified as chemical weapons.

The impacts of pollution and war are often indiscriminate and lasting. Beyond these images is the legacy of long-term damage to human health and the environment stemming from attacks on such facilities, such as the refining plants targeted during the first Gulf War.

International law contains provisions against the use of chemical weapons in war. However, there is a gap in protections when dangerous toxins are released due to attacks on sites like fuel depots.

These gaps need to be addressed to protect civilians in war, and to uphold environment and human rights standards during wartime and once a conflict ends.

Gaps in the Geneva Conventions

Dark smoke fills Tehran’s sky after Israeli attacks on oil depots (The Independent).

My areas of research focus on international law, specifically environmental and human rights law and intersections with international organizations.

The Geneva Conventions and their protocols serve as the basis for international humanitarian law — the laws applicable to civilians, armed forces and combatants in times of conflict.

The Geneva Conventions are mostly geared toward human protection during combat, especially for civilians living in combat zones or occupied territories as well as for health-care providers and for injured combatants and prisoners of war. In particular, the Fourth Geneva Convention on Civilians provides basic living, health and access to justice protections for populations during wartime.

However, there is nothing in the conventions specifically about sites known to contain chemicals that would cause health or environmental impacts in the short and long-term.

While the Geneva Conventions forbid attacking hospitals, schools and infrastructure necessary for civilian life, they do not address fuel depots, waste management facilities or other sites where chemicals are routinely stored. And there are no requirements for warring entities to provide assistance to enemy territories damaged by attacks on such sites once hostilities cease.

Gaps in the Chemical Weapons Convention

Since 1997, the Chemical Weapons Convention (CWC) has governed the destruction and non-proliferation of chemical weapons.

This convention includes prohibitions on developing and manufacturing chemical weapons and outlines acceptable methods of reducing and eliminating chemical weapons stockpiles in signatory countries.

The CWC addresses facilities containing chemical weapons only in the context of safety until the chemicals can be destroyed.

It is an essential tool in protecting humanity from the development, stockpiling and use of chemical weapons. But the convention doesn’t cover all chemicals, nor does it address attacks on facilities containing chemicals that turn them into dangerous weapons against civilian populations.

Other agreements and treaties

There are also several multilateral environmental agreements that address chemicals in some form: the Basel Convention on the Control of Transboundary Movements of Hazardous Wastes and their Disposal, the Rotterdam Convention on the Prior Informed Consent Procedure for Certain Hazardous Chemicals and Pesticides in International Trade, the Stockholm Convention on Persistent Organic Pollutants and the Minamata Convention on Mercury, as well as the Global Framework on Chemicals, a recently adopted soft law instrument.

These are critical agreements and instruments in many ways, but they focus on the production, use and transportation of chemicals. They do not address intentional acts of destruction during peacetime or conflict.

Additionally, there are many core human rights treaties that provide protections to all, especially women, children, those with disabilities and persons in situations of vulnerability. But these are not fully applicable in times of conflict.

Even at the end of a conflict, there are no provisions in these agreements that would impose liability or otherwise seek to address environmental damage from acts taken in wartime with lasting and generational impacts on the environment and human health.

Moving forward

Conflict is inherently intertwined with environmental damage and human suffering. This is particularly true today, when larger and more destructive weapons can cause lasting and even irreversible damage.

The international community has responded in the past to these harsh realities by enacting prohibitions aimed at protecting people. These provisions must be updated and expanded to ensure they remain applicable to current methods and ideologies used in warfare.

The targeting of the Tehran fuel depot demonstrates the need for changes to the Geneva Conventions at the very least, and also an appraisal of how to connect international environmental law and human rights law with the legacy of environmental damage in wartime.

Adding the crime of ecocide to the International Criminal Court’s jurisdiction could help.

But a larger conversation is needed to ensure that targeting facilities containing chemicals does not become an accepted practice in future conflicts. That conversation must reflect what we have learned about the toxic legacies of the indiscriminate use and targeting of chemicals as weapons of war, which scar the environment and humanity for generations.

The Conversation

Alexandra R. Harrington does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

ref. Israeli strikes on Tehran oil depot highlight gaps in international law – https://theconversation.com/israeli-strikes-on-tehran-oil-depot-highlight-gaps-in-international-law-278380

Can ‘mini brains’ replace lab animals? Organoids are changing how scientists study disease

Source: The Conversation – Canada – By Habib Rezanejad, Professor of cellular and molecular biology, MacEwan University

As a researcher, I still remember the discomfort I felt every time I had to sacrifice laboratory animals for an experiment. For decades, animals like mice, rats and pigs have been essential tools in biomedical research. Yet many scientists are asking whether better, more humane alternatives are possible.

Globally, it’s estimated that close to 200 million animals are used in laboratory research each year. While animal models have helped generate major medical breakthroughs, they don’t always reflect how human biology works.

Three-dimensional organoids generated from mouse pancreatic ductal epithelial cells.
(Habib Rezanejad)

New technologies are now offering scientists a promising alternative: organoids — tiny three-dimensional versions of human organs grown in the lab.

These “mini organs” are grown from human stem cells and can reproduce some of the complex cell types and interactions found in the body. Because they’re derived from human cells, organoids offer researchers a way to study human diseases more directly than traditional animal models.

Organoids for brain research

This approach is gaining huge attention in brain research. Compared with many other organs, the brain presents unique challenges for scientists.

Brain disorders are often complex and difficult to define precisely, involving subtle changes across many types of cells and neural circuits. At the same time, the brain is one of the least accessible organs in the body. Unlike blood or skin, living brain tissue cannot be easily sampled from healthy individuals.

Alzheimer’s disease, for example, is a growing global health concern, especially as populations age. Yet finding treatments has proven extremely difficult. A systematic review of research over two decades found that 98 Alzheimer’s drug candidates failed in clinical trials while only two succeeded. This highlights the enormous challenge of developing effective therapies.

What are brain organoids? (U.S. National Institute of Environmental Health Sciences)

One reason for this failure is that drugs that work in animals often do not work in humans. Mice and humans share many biological features, but important species differences mean animal models cannot fully reproduce the architecture and complexity of the human brain.

Traditional laboratory models also have other limitations. For example, many experiments rely on two-dimensional cell cultures, where cells grow in flat layers on plastic dishes. While useful, these systems lack the three-dimensional structure and cell-to-cell interactions found in real tissues. Without that complexity, they cannot accurately mimic many disease processes.

This is where organoids are transforming biomedical research.

In 2013, scientists demonstrated that brain organoids grown from human stem cells can self-organize into structures resembling parts of the developing brain. These “mini brains” contain multiple neural cell types and can mimic aspects of early brain development.

Researchers now use them to study conditions like autism, Alzheimer’s disease and amyotrophic lateral sclerosis (ALS).

Intestine, liver, kidney, pancreas

Beyond the brain, scientists have created organoids that resemble many other tissues, including the intestine, liver, kidney and pancreas.

Mouse pancreatic ductal cells.
(Habib Rezanejad)

These models allow researchers to study diseases and test chemicals on human-like tissues rather than animals. For example, organoids could one day be used to screen chemicals for toxicity across multiple organs using cells derived from different individuals.

In my own research, my lab grows organoids from human and mouse pancreatic tissue to study cellular diversity and pancreatic inflammation. These models allow us to explore how different pancreatic cell types behave in three dimensions — something that would be impossible to observe in traditional flat cell cultures.

Potential for personalized medicine

A key advantage of organoids is their ability to capture human diversity. Laboratory mice, by contrast, are often genetically identical, which does not reflect the diversity of human populations.

Organoids can be grown from cells donated by individual patients, allowing researchers to study how diseases develop in different genetic backgrounds.

This opens the door to personalized medicine, where scientists test potential treatments on patient-derived organoids before giving them to patients.

Patient-derived organoids can predict how individuals might respond to certain drugs — for instance, responses to chemotherapy in metastatic colorectal cancer patients.

Organoids grown from many individuals, meanwhile, may provide a more realistic representation of how a population will respond to drugs. This helps researchers identify treatments that are more likely to succeed in clinical trials.

Overall, organoids are becoming powerful tools for drug discovery and safety testing.

Could this be the end of animal testing?

Some scientists believe organoids may replace animals altogether in certain areas of research. Organoid technology aligns with the “3Rs” principles in animal research — reduction, refinement and replacement — that aim to minimize the use of animals in science.

Reflecting this shift, the United States National Institutes of Health (NIH) recently announced it will prioritize research technologies that use human-based models rather than relying solely on animal experiments.

Pioneers in the field are optimistic. Hans Clevers, a leading scientist who helped develop gut organoids, has suggested that organoids could eventually replace animals in some forms of toxicology testing within the next few decades.

Still, organoids are not perfect

Although they are far more complex than traditional cell cultures, organoids remain simplified versions of real organs. Many lack blood vessels, which limits their size and maturity. They do not yet capture the full diversity of cell types found in human tissues, such as immune cells.

Studies have also shown that cells within organoids can experience stress due to laboratory growth conditions.

For now, organoids should be seen as powerful additions to the scientific toolbox rather than complete replacements for animal models.

Organoids are still an emerging technology, but they are already reshaping how scientists study human biology and disease. As the technology improves, these tiny lab-grown organs may help researchers reduce reliance on animal testing while bringing us closer to understanding — and treating — complex human diseases.

The Conversation

Habib Rezanejad receives Alberta Innovates Summer Research Studentship from Alberta Innovates for a research project at MacEwan University in 2025.

ref. Can ‘mini brains’ replace lab animals? Organoids are changing how scientists study disease – https://theconversation.com/can-mini-brains-replace-lab-animals-organoids-are-changing-how-scientists-study-disease-277611

What happens to your brain in nature? The neuroscience explained

Source: The Conversation – Canada – By Mar Estarellas, Postdoctoral Researcher, Social and Transcultural Psychiatry, McGill University

Yoho National Park, Field, Canada. (Unsplash/Hendrik Cornelissen)

Have you ever felt calmer almost as soon as you step into the woods? Or maybe noticed your busy mind soften as you look out at the sea?

We have known for some time, and many of us sense it intuitively, that spending time in nature is good for us. Neuroscience is now enabling us to understand why, and what the brain is actually doing in those moments.

I was recently a co-author on a scoping review of the neuroscience of nature exposure, published in Neuroscience and Biobehavioral Reviews, together with colleagues from the Universidad Adolfo Ibañez, Chile, and Imperial College London, U.K.

We reviewed 108 peer-reviewed neuroimaging studies on nature exposure and found a consistent picture. When people spend time in natural settings (or even view pictures of the outdoors), the brain tends to show signs of reduced stress, lighter mental effort and better emotional regulation.

Increases in alpha and theta waves

Many of us live in environments that keep the brain on alert through traffic, screens, noise, crowding and constant decision-making. And while cities are awesome human creations, they place heavy demands on our attention and stress systems.

The noise, lights and movement on a city street can be exhausting for our brains.
(Unsplash/Howei Wang), CC BY

Nature, by contrast, seems to offer a very different kind of input, and the brain responds accordingly.

One of the strongest findings comes from electroencephalogram (EEG) studies, which measure electrical activity in the brain. Across many experiments that we reviewed, natural settings were linked to increases in alpha and theta waves. These are often associated with relaxed wakefulness. Studies also often found decreases in beta activity, which is more closely related to active effort or cognitive load.
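In EEG studies of this kind, "more alpha" or "less beta" is typically quantified as the share of the signal's power falling in each frequency band. The sketch below is illustrative only: it uses simulated noise in place of a real recording, and the band boundaries (theta 4–8 Hz, alpha 8–13 Hz, beta 13–30 Hz) are common conventions rather than values from the review:

```python
import numpy as np
from scipy.signal import welch

fs = 256  # assumed sampling rate, in Hz
rng = np.random.default_rng(0)
signal = rng.standard_normal(fs * 60)  # simulated stand-in for 60 s of EEG

# Estimate the power spectral density, then sum it within each band.
freqs, psd = welch(signal, fs=fs, nperseg=fs * 2)

def band_power(lo, hi):
    """Total spectral power between lo and hi Hz."""
    mask = (freqs >= lo) & (freqs < hi)
    return psd[mask].sum()

total = band_power(1, 40)
for name, lo, hi in [("theta", 4, 8), ("alpha", 8, 13), ("beta", 13, 30)]:
    print(f"{name}: {band_power(lo, hi) / total:.2f} of 1-40 Hz power")
```

On a real recording, researchers would compare these band fractions between conditions, say, a city walk versus a forest walk.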

Put simply, the brain looks less “overworked” in nature.

But that doesn’t mean that it becomes passive or sleepy. We could understand it more as shifting into a mode of attention that is gentler and less effortful. For example, watching leaves move, listening to water or noticing changes in light engages the mind in a different way than a crowded street or a stream of notifications does.

Some studies suggest these effects can happen quickly. In several EEG experiments — both in the real world and virtual reality — changes showed up within a few minutes, sometimes even as little as three minutes.

Longer exposure often produced stronger effects, especially once people spent around 15 minutes in a more immersive setting.

Reduced activity in the amygdala

We also reviewed studies using functional magnetic resonance imaging (fMRI). These measure changes in blood flow linked to neural activity, allowing us to see which regions become more or less active.

One interesting finding was a reduced activity in brain regions involved in stress and rumination after time in nature. The amygdala, which helps detect threats and responds to stress, becomes less active after natural exposure. So does the subgenual prefrontal cortex, a region linked to repetitive negative thinking.

Other fMRI work points to changes in networks involved in attention and self-related thought, including parts of the default mode network. These regions are involved in self-reflection, mind wandering and what we could call “the background stream of inner experience.”

In natural contexts, these networks appear to reorganize in ways that support a calmer and less scattered mental state.

A cascade of natural effects

Looking across the 108 studies, we found a broadly consistent pattern, which we summarize as a cascade of effects through which nature may influence the brain.

First, natural settings are often easier for the brain to process. Their shapes and rhythms frequently follow fractal patterns, like those seen in coastlines, leaves and clouds, which the brain appears to process efficiently.

This may reduce sensory and perceptual load. As that happens, stress-related systems begin to settle and the body can shift out of fight-or-flight mode.

Attention may then become less effortful, and emotional processing more stable. We describe this as a pathway linking perception, stress regulation, attention and self-related processing.

Could nature shape your brain anatomy?

Beyond the immediate effects of exposure, there is also evidence that nature may shape the brain over longer timescales. Structural MRI studies suggest that living in greener areas is associated with differences in brain anatomy, including greater grey matter volume and better white matter integrity in some populations.

These studies are mostly correlational, so caution is needed. They cannot prove that nature alone caused those differences. But they do raise the possibility that small restorative effects, repeated over months and years, may accumulate in ways that support cognition and resilience.

So when time outdoors makes you feel lighter, clearer or less caught in your own head, know this feeling is worth trusting. Your brain seems to be changing state.

And perhaps understanding a little more about how nature works on us, and how we in turn relate to it, can also help us protect it. Caring for nature is also a way of caring for ourselves and for each other.

The Conversation

Mar Estarellas does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

ref. What happens to your brain in nature? The neuroscience explained – https://theconversation.com/what-happens-to-your-brain-in-nature-the-neuroscience-explained-277332

How plant populations keep a genetic memory of the past

Source: The Conversation – Canada – By Daniel J Schoen, W.C. Macdonald Professor of Botany, McGill University

Jewelweed is found throughout eastern parts of North America. By studying jewelweed, researchers can understand how environmental changes affect plants over time. (Liz West), CC BY

Plants are usually seen as stationary life forms, quietly supporting environments. But plant communities and populations are far from static. They are constantly being shaped by the world around them.

One way is through local extinction — the loss of a local population from a specific patch of landscape. Another is through local colonization — the spreading or returning of plants to a landscape patch. In fact, many plant species are thought to be composed of metapopulations, which are sets of local populations connected by colonization, local extinction and population growth across a landscape.

If we were able to observe a metapopulation on the landscape over a time-lapse film covering several hundred years, we might see how the metapopulation changes and evolves as the film unfolds.

Of course, no such film exists, and time machines have not yet been invented, so understanding the forces that determine the history of metapopulations remains a challenge.

So, how can researchers understand the history of plant metapopulations? To do this, my colleague Rachel Toczydlowski and I turned to DNA sequence data with the plant jewelweed, or as botanists call it, Impatiens capensis.




Read more:
Tracking wildlife using DNA: A scientific breakthrough made with an Indigenous community


What is jewelweed?

Jewelweed is found throughout eastern parts of North America, including southern Québec and Ontario, and into the midwestern United States. This annual plant can form seeds through both cross-fertilization in normal flowers and through self-fertilization via a special type of closed flower that botanists refer to as a cleistogamous flower. Self-fertilization allows a single dispersing individual plant to establish a new population because it does not require a mate.

Plants such as jewelweed also produce a type of flower that is closed and does not require pollinators.
(Rachel H. Toczydlowski)

Because jewelweed can found new populations from one or a few individuals via self-fertilization, metapopulation theory predicts that the species should be composed of a highly dynamic metapopulation, in which some patches have been colonized only recently while others have persisted for longer periods.

Jewelweed is typically found today in the forest fragments left over after agricultural and urban development. These fragments were once part of the more continuous forest cover that blanketed much of eastern and central North America prior to European colonization. This is especially true of the area in Wisconsin where we conducted our study, which is mostly farmland today but retains island-like patches of more natural habitat.

Our research

We analyzed DNA sequence information obtained by sequencing the genomes of individual members of the population, and focused on how the individual genomes differ from one another.

Lining up the sequences and comparing them reveals differences among the DNA sequences of a sample of plants, called single nucleotide polymorphisms (SNPs). These are positions where individual plants carry different nucleotides, the chemical building blocks of the sequence.

We then looked at the site frequency spectrum of each population, which shows how many SNPs are rare, how many are common and how many fall in between. Population founding events can change how common different SNPs are.

When a new population is founded by only a few individuals, many SNPs end up occurring at moderate frequencies. If the population then grows rapidly, new SNPs appear by mutation but remain rare, which changes the overall pattern of genetic variation.

And so, the form of the site frequency spectrum provides a kind of “genetic memory” of the demographic events that gave rise to the individual plant populations making up the metapopulation we see today.
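To make the idea concrete, here is a toy sketch, not code from our study, of how a site frequency spectrum can be tallied from a table of SNP genotypes, with each site coded 0 or 1 for its two variants:

```python
# Toy illustration: tallying a site frequency spectrum (SFS) from SNP
# genotypes. Each row is one plant; each column is one SNP site.

def site_frequency_spectrum(genotypes):
    n = len(genotypes)                  # number of sampled plants
    sfs = [0] * (n + 1)                 # sfs[k] = SNPs carried by exactly k plants
    for site in range(len(genotypes[0])):
        count = sum(row[site] for row in genotypes)
        sfs[count] += 1
    return sfs[1:n]                     # drop sites invariant in this sample

# Five plants, six SNP sites
sample = [
    [1, 0, 0, 1, 0, 0],
    [1, 0, 0, 1, 1, 0],
    [0, 1, 0, 1, 1, 0],
    [0, 0, 0, 1, 1, 1],
    [0, 0, 1, 1, 1, 0],
]
print(site_frequency_spectrum(sample))  # → [3, 1, 0, 1]
```

A spectrum dominated by rare variants (like the three singletons here) is the signature of a rapidly growing population, while an excess of intermediate-frequency variants points to a founding event by just a few individuals.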

By applying this technique to DNA sequence data, we were able to detect significant differences in the demographic histories of the populations sampled. Some populations appeared to have only recently been founded and from a few individuals, whereas others appeared to be larger and more stable.

This pattern fits what is predicted by metapopulation theory for a species whose plants are capable of self-fertilization. The younger populations also exhibited loss of genetic diversity and higher levels of inbreeding compared to the older, more stable populations, which contained more diversity and were less inbred.

The higher inbreeding in younger populations suggests that they were founded by very few individuals and have not yet had enough time or gene flow from neighbouring populations to rebuild genetic diversity.

Importance for conservation

Genetic variability is the foundation of adaptation. High levels of inbreeding can lead to weak or damaging traits being passed on to new generations.
(Unsplash/Jonathan Lim)

Genetic variability is the foundation of adaptation. High levels of inbreeding can lead to weak or damaging traits being passed on to new generations, which reduces the health of populations.

From a conservation standpoint, higher levels of diversity and lower levels of inbreeding are often desirable attributes.

They can enhance the adaptability and stability of populations, something that is becoming increasingly important as the climate changes.

Complicating our understanding of metapopulations, however, is the fact that not all landscapes are created equal. Some are more prone than others to disturbance and recovery, and when it comes to colonization of landscapes, not all plant species are created equal.

For instance, plants that can self-pollinate are more capable of founding a population from a few individuals and may be especially good colonizers compared with plants that require mates for seed production.

Understanding metapopulation history provides conservation managers with an additional perspective, especially when it comes to selecting the healthiest populations to conserve.

We may not have a time machine, but by analyzing the DNA of living populations, we can uncover the echoes of the past and understand their genetic implications for conservation.

The Conversation

Daniel J Schoen receives funding from the Natural Sciences and Engineering Research Council of Canada.

ref. How plant populations keep a genetic memory of the past – https://theconversation.com/how-plant-populations-keep-a-genetic-memory-of-the-past-276748

Canada’s immigration system is going digital, and accountability must keep pace

Source: The Conversation – Canada – By Marika Jeziorek, PhD Candidate in Global Governance, Balsillie School of International Affairs

Canada’s immigration system has long played a central role in the country’s economic and social development. Immigration accounts for most of Canada’s population growth and helps address labour market shortages across sectors. Settlement services support newcomers as they build lives and communities across the country.

As the number of people seeking to visit and immigrate to Canada grows, the way applications are handled is becoming more digital. This shift is reshaping how applicants interact with Immigration, Refugees and Citizenship Canada (IRCC).

Through its Digital Platform Modernization initiative, the department has been rolling out new online client accounts, automated processing tools and digital visas as part of a broader multi-year transformation.

However, as processes become more automated, it can be harder to see how decisions are shaped or how to challenge them. The growing use of automated tools has long been linked to changes in accountability and institutional practice.

In immigration administration, these changes are becoming visible in everyday interactions with digital systems.

Operational pressures

Canada processes millions of temporary and permanent immigration applications each year, placing significant pressure on administrative systems and processing capacity.

Much of this work is currently done through the Global Case Management System (GCMS). This system was introduced 20 years ago, when immigration processing relied heavily on paper records and centralized operations.

However, the system was designed for a different era of immigration administration.

Over the past two decades, both the scale and complexity of Canada’s immigration system have expanded significantly. As a result, IRCC has begun developing a new case-management platform intended to replace the GCMS as part of the department’s broader digitization initiative.

A digitized process

Immigration administration involves some of the most consequential decisions the federal government makes about a person’s legal status, mobility and protection. Today, most people applying for Canadian visas or residency begin the process online rather than through direct interaction with an immigration official.

Applicants typically interact first with online portals, automated messages and document-verification systems before their files reach a decision-maker.

These changes are institutional as well as administrative. Canadian immigration law now allows electronic systems to assist officers in processing applications and making decisions.

As the number of people seeking to visit and immigrate to Canada grows, the way applications are handled is becoming more digital.
(Unsplash/Brooke Cagle)

Advanced data analytics help identify routine applications and speed up processing. Across the federal public service, similar technologies are increasingly used to support administrative decision-making.

Client portals also shape how applicants interact with the state by organizing how documents are submitted, how additional information is requested and how applicants receive updates about their cases.

Migration files are increasingly managed as digital case records that move across government systems. This means applications may be evaluated at several stages of processing rather than only when an officer makes the final decision.

For example, automated triage systems can classify applications as routine before an officer reviews them, while online client portals structure how applicants submit documents and receive updates throughout processing.

Automation and the applicant experience

While these reforms are designed to improve efficiency, they are also reshaping how applicants experience the immigration system.

For many migrants, immigration now involves prolonged interaction with digital systems, document verification procedures and automated communication channels. Applicants may need to repeatedly upload documents, respond to automated requests for additional information or monitor online portals for updates over months or even years.

Limited visibility into timelines or decision pathways can make it difficult to understand how cases are being assessed, resulting in prolonged uncertainty and new administrative burdens.

These experiences may appear to be technical issues, but they also reflect deeper changes in how immigration administration now operates.

The shift toward digital and automated administration also affects how immigration officers work. Automation and triage tools have been introduced to manage workload and improve productivity, while also reshaping how responsibilities are distributed across technical systems and administrative workflows.

Caseworkers are increasingly operating within infrastructures that pre-classify applications and structure decision processes. But instead of addressing the source of administrative strain, these tools often simply reorganize it.

Keeping automation accountable

Canada already has several oversight mechanisms in place, including algorithmic impact assessments required under the federal Directive on Automated Decision-Making.

These measures represent meaningful progress toward responsible digital governance. However, as immigration administration becomes increasingly automated and platform-based, additional safeguards are needed to ensure accountability keeps pace.

Possible measures include expanding public documentation about automated triage systems, introducing independent review processes and ensuring clear pathways for human review. Such steps would better align digital modernization with Canada’s existing oversight frameworks for automated decision-making.

Canada’s immigration system is often described as rights-based and grounded in equity, fairness and inclusion. Maintaining public trust in that system depends on ensuring administrative decision systems remain transparent, contestable and accountable.

Automation and platform-based administration are reshaping Canada’s immigration system. Efficiency alone cannot sustain public trust. As Canada modernizes immigration administration, accountability must be built into digital systems as deliberately as the technologies themselves.

The Conversation

Marika Jeziorek does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

ref. Canada’s immigration system is going digital, and accountability must keep pace – https://theconversation.com/canadas-immigration-system-is-going-digital-and-accountability-must-keep-pace-276741