Federal power meets local resistance in Minneapolis – a case study in how federalism staves off authoritarianism

Source: The Conversation – USA – By Nicholas Jacobs, Goldfarb Family Distinguished Chair in American Government, Colby College; Institute for Humane Studies

Protesters against Immigration and Customs Enforcement march through Minneapolis, Minn., on Jan. 25, 2026. Roberto Schmidt/AFP via Getty Images

An unusually large majority of Americans agree that the recent scenes of Immigration and Customs Enforcement operations in Minneapolis are disturbing.

Federal immigration agents have deployed with weapons and tactics more commonly associated with military operations than with civilian law enforcement. The federal government has sidelined state and local officials, and it has cut them out of investigations into whether state and local law has been violated.

It’s understandable to look at what’s happening and reach a familiar conclusion: This looks like a slide into authoritarianism.

There is no question that the threat of democratic backsliding is real. President Donald Trump has long treated federal authority not as a shared constitutional set of rules and obligations but as a personal instrument of control.

In my research on the presidency and state power, including my latest book with Sidney Milkis, “Subverting the Republic,” I have argued that the Trump administration has systematically weakened the norms and practices that once constrained executive power – often by turning federalism itself into a weapon of national administrative power.

But there is another possibility worth taking seriously, one that cuts against Americans’ instincts at moments like this. What if what America is seeing is not institutional collapse but institutional friction: the system doing what it was designed to do, even if it looks ugly when it does it?

For many Americans, federalism is little more than a civics term – something about states’ rights or decentralization.

In practice, however, federalism functions less as a clean division of authority and more as a system for managing conflict among multiple governments with overlapping jurisdiction. Federalism does not block national authority. It ensures that national decisions are subject to challenge, delay and revision by other levels of government.

Dividing up authority

At its core, federalism works through a small number of institutional mechanics – concrete ways of keeping authority divided, exposed and contestable. Minneapolis shows each of them in action.

First, there’s what’s called “jurisdictional overlap.”

State, local and federal authorities all claim the right to govern the same people and places. In Minneapolis, that overlap is unavoidable: Federal immigration agents, state law enforcement, city officials and county prosecutors all assert authority over the same streets, residents and incidents. And they disagree sharply about how that authority should be exercised.

Second, there’s institutional rivalry.

Because authority is divided, no single level of government can fully monopolize legitimacy. And that creates tension. That rivalry is visible in the refusal of state and local officials across the country to simply defer to federal enforcement.

Instead, governors, mayors and attorneys general have turned to courts, demanded access to evidence and challenged efforts to exclude them from investigations. That’s evident in Minneapolis and also in states that have witnessed the administration’s deployment of National Guard troops against the will of their Democratic governors.

It’s easy to imagine a world where state and local prosecutors would not have to jump through so many procedural hoops to get access to evidence about the deaths of citizens within their jurisdiction. But consider the alternative.

If state and local officials could not seek evidence without federal consent – the absence of federalism – or if local institutions had no standing to contest how national power is exercised in their communities, federal authority would operate not just forcefully but without meaningful political constraint.

Protesters fight with law enforcement as tear gas fills the air.
Protesters clash with law enforcement after a federal agent shot and killed a man on Jan. 24, 2026, in Minneapolis, Minn.
Arthur Maiorella/Anadolu via Getty Images

Third, confrontation is local and place-specific.

Federalism pushes conflict into the open. Power struggles become visible, noisy and politically costly. What is easy to miss is why this matters.

Federalism was necessary at the time of the Constitution’s creation because Americans did not share a single political identity. They could not decide whether they were members of one big community or many small communities.

In maintaining their state governments and creating a new federal government, they chose to be both at the same time. And although American politics nationalized to a remarkable degree over the 20th century, federal authority is still exercised in concrete places. Federal authority still must contend with communities that have civic identities and whose moral expectations may differ sharply from those assumed by national actors.

In Minneapolis it has collided with a political community that does not experience federal immigration enforcement as ordinary law enforcement.

The chaos of federalism

Federalism is not designed to keep things calm. It is designed to keep power unsettled – so that authority cannot move smoothly, silently or all at once.

By dividing responsibility and encouraging overlap, federalism ensures that power has to push, explain and defend itself at every step.

“A little chaos,” the scholar Daniel Elazar once said, “is a good thing!”

As chaos goes, though, federalism is more often credited with enabling Trump’s ascent. He won the presidency through the Electoral College – a federalist institution that allocates power by state rather than by national popular vote, rewarding geographically concentrated support even without a national majority.

Partisan redistricting, which takes place in the states, further amplifies that advantage by insulating Republicans in Congress from electoral backlash. And decentralized election administration – in which local officials control voter registration, ballot access and certification – can produce vulnerabilities that Trump has exploited in contesting state certification processes and pressuring local election officials after close losses.

Forceful but accountable

It’s also helpful to understand how Minneapolis differs from the best-known instances of aggressive federal power imposed on unwilling states: the civil rights era.

Hundreds of students protest the arrival of a Black student to their school.
Hundreds of Ole Miss students call for continued segregation on Sept. 20, 1962, as James Meredith prepares to become the first Black man to attend the university.
AP Photo

Then, too, national authority was asserted forcefully. Federal marshals escorted the Black student James Meredith into the University of Mississippi in 1962 over the objections of state officials and local crowds. In Little Rock in 1957, President Dwight D. Eisenhower federalized the Arkansas National Guard and sent in U.S. Army troops after Gov. Orval Faubus attempted to block the racial integration of Central High School.

Violence accompanied these interventions. Riots broke out in Oxford, Mississippi. Protesters and bystanders were killed in clashes with police and federal authorities in Birmingham and Selma, Alabama.

What mattered during the civil rights era was not widespread agreement at the outset – nationwide resistance to integration was fierce and sustained. Rather, it was the way federal authority was exercised through existing constitutional channels.

Presidents acted through courts, statutes and recognizable chains of command. State resistance triggered formal responses. Federal power was forceful, but it remained legible, bounded and institutionally accountable.

Those interventions eventually gained public acceptance. But in that process, federalism was tarnished by its association with Southern racism and recast as an obstacle to progress rather than the institutional framework through which progress was contested and enforced.

After the civil rights era, many Americans came to assume that national power would normally be aligned with progressive moral aims – and that when it was, federalism was a problem to be overcome.

Minneapolis exposes the fragility of that assumption. Federalism does not distinguish between good and bad causes. It does not certify power because history is “on the right side.” It simply keeps power contestable.

When national authority is exercised without broad moral agreement, federalism does not stop it. It only prevents it from settling quietly.

Why talk about federalism now, at a time of widespread public indignation?

Because in the long arc of federalism’s development, it has routinely proven to be the last point in our constitutional system where power runs into opposition. And when authority no longer encounters rival institutions and politically independent officials, authoritarianism stops being an abstraction.

The Conversation

Nicholas Jacobs does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

ref. Federal power meets local resistance in Minneapolis – a case study in how federalism staves off authoritarianism – https://theconversation.com/federal-power-meets-local-resistance-in-minneapolis-a-case-study-in-how-federalism-staves-off-authoritarianism-274685

The Supreme Court may soon diminish Black political power, undoing generations of gains

Source: The Conversation – USA – By Robert D. Bland, Assistant Professor of History and Africana Studies, University of Tennessee

U.S. Rep. Cleo Fields, a Democrat who represents portions of central Louisiana in the House, could lose his seat if the Supreme Court invalidates Louisiana’s congressional map. AP Photo/Gerald Herbert

Back in 2013, the Supreme Court tossed out a key provision of the Voting Rights Act regarding federal oversight of elections. It appears poised to abolish another pillar of the law.

In a case known as Louisiana v. Callais, the court appears ready to rule against Louisiana and its Black voters. In doing so, the court may well abolish Section 2 of the Voting Rights Act, a provision that prohibits any discriminatory voting practice or election rule that leaves minority groups with less opportunity to exert political influence.

The dismantling of Section 2 would open the floodgates for widespread vote dilution by allowing primarily Southern state legislatures to redraw political districts, weakening the voting power of racial minorities.

The case was brought by a group of Louisiana citizens who declared that the federal mandate under Section 2 to draw a second majority-Black district violated the equal protection clause of the 14th Amendment and thus served as an unconstitutional act of racial gerrymandering.

There would be considerable historical irony if the court decides to use the 14th Amendment to provide the legal cover for reversing a generation of Black political progress in the South. Initially designed to enshrine federal civil rights protections for freed people facing a battery of discriminatory “Black Codes” in the postbellum South, the 14th Amendment’s equal protection clause has been the foundation of the nation’s modern rights-based legal order, ensuring that all U.S. citizens are treated fairly and preventing the government from engaging in explicit discrimination.

The cornerstone of the nation’s “second founding,” the Reconstruction-era amendments to the Constitution, including the 14th Amendment, created the first cohort of Black elected officials.

I am a historian who studies race and memory during the Civil War era. As I highlight in my new book “Requiem for Reconstruction,” the struggle over the nation’s second founding not only highlights how generational political progress can be reversed but also provides a lens into the specific historical origins of racial gerrymandering in the United States.

Without understanding this history – and the forces that unraveled Reconstruction’s initial promise of greater racial justice – we cannot fully comprehend the roots of those forces that are reshaping our contemporary political landscape in a way that I believe subverts the true intentions of the Constitution.

The long history of gerrymandering

Political gerrymandering, or shaping political boundaries to benefit a particular party, has been considered constitutional since the nation’s 18th-century founding, but racial gerrymandering is a practice with roots in the post-Civil War era.

Late 19th-century Democratic state legislatures expanded the routine practice of redrawing district lines after each decennial census, using it to create a litany of so-called Black districts across the postbellum South.

The nation’s first wave of racial gerrymandering emerged as a response to the political gains Southern Black voters made during the administration of President Ulysses S. Grant in the 1870s. Georgia, Alabama, Florida, Mississippi, North Carolina and Louisiana all elected Black congressmen during that decade. During the 42nd Congress, which met from 1871 to 1873, South Carolina sent Black men to the House from three of its four districts.

A group portrait depicts the first Black senator and a half-dozen Black representatives.
The first Black senator and representatives were elected in the 1870s, as shown in this historic print.
Library of Congress

Initially, the white Democrats who ruled the South responded to the rise of Black political power by crafting racist narratives that insinuated that the emergence of Black voters and Black officeholders was a corruption of the proper political order. These attacks often provided a larger cultural pretext for the campaigns of extralegal political violence that terrorized Black voters in the South, assassinated political leaders, and marred the integrity of several of the region’s major elections.

Election changes

Following these pogroms during the 1870s, Southern legislatures began seeking legal remedies to make permanent the counterrevolution of “Redemption,” which sought to undo Reconstruction’s advancement of political equality. A generation before the Jim Crow legal order of segregation and discrimination was established, Southern political leaders began to disfranchise Black voters through racial gerrymandering.

These newly created Black districts gained notoriety for their cartographic absurdity. In Mississippi, a shoestring-shaped district was created to snake and swerve alongside the state’s famous river. North Carolina created the “Black Second” to concentrate its African American voters in a single district. Alabama’s “Black Fourth” did similar work, leaving African American voters only one district – in the state’s central Black Belt – in which they could affect the outcome.

South Carolina’s “Black Seventh” was perhaps the most notorious of these acts of Reconstruction-era gerrymandering. The district “sliced through county lines and ducked around Charleston back alleys” – anticipating the current trend of sophisticated, computer-targeted political redistricting.

Possessing 30,000 more voters than the next largest congressional district in the state, South Carolina’s Seventh District radically transformed the state’s political landscape by making it impossible for the state’s Black majority to exercise any influence on national politics outside that single racially gerrymandered district.

A map showing South Carolina's congressional districts in the 1880s.
South Carolina’s House map was gerrymandered in 1882 to minimize Black representation, heavily concentrating Black voters in the 7th District.
Library of Congress, Geography and Map Division

Although federal courts during the late 19th century remained painfully silent on the constitutionality of these antidemocratic measures, contemporary observers saw these redistricting efforts as more than a simple act of seeking partisan advantage.

“It was the high-water mark of political ingenuity coupled with rascality, and the merits of its appellation,” observed one Black congressman who represented South Carolina’s 7th District.

Racial gerrymandering in recent times

The political gains of the Civil Rights Movement of the 1950s and 1960s, sometimes called the “Second Reconstruction,” were made tangible by the 1965 Voting Rights Act. The law revived the postbellum 15th Amendment, which prevented states from creating voting restrictions based on race. That amendment had been made a dead letter by Jim Crow state legislatures and an acquiescent Supreme Court.

In contrast to the post-Civil War struggle, the Second Reconstruction had the firm support of the federal courts. The Supreme Court affirmed the principle of “one person, one vote” in its 1962 Baker v. Carr and 1964 Reynolds v. Sims decisions – upending the Solid South’s electoral map, which had long been marked by sparsely populated Democratic districts controlled by rural elites.

The Voting Rights Act gave the federal government oversight over any changes in voting policy that might affect historically marginalized groups. Since passage of the 1965 law and its subsequent revisions, racial gerrymandering has largely served the purpose of creating districts that preserve and amplify the political representation of historically marginalized groups.

This generational work may soon be undone by the current Supreme Court. The court, which heard oral arguments in the Louisiana case in October 2025, will release its decision by the end of June 2026.

The Conversation

Robert D. Bland does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

ref. The Supreme Court may soon diminish Black political power, undoing generations of gains – https://theconversation.com/the-supreme-court-may-soon-diminish-black-political-power-undoing-generations-of-gains-274179

Confused by the new dietary guidelines? Focus on these simple, evidence-based shifts to lower your chronic disease risk

Source: The Conversation – USA (3) – By Michael I Goran, Professor of Pediatrics and Vice Chair for Research, University of Southern California

Consuming less highly processed foods and sugary drinks and more whole grains can meaningfully improve your health. fizkes/iStock via Getty Images Plus

The Dietary Guidelines for Americans aim to translate the most up-to-date nutrition science into practical advice for the public as well as to guide federal policy for programs such as school lunches.

But the newest version of the guidelines, released on Jan. 7, 2026, seems to be spurring more confusion than clarity about what people should be eating.

I’ve been studying nutrition and chronic disease for over 35 years, and in 2020 I wrote “Sugarproof,” a book about reducing consumption of added sugars to improve health. I served as a scientific adviser for the new guidelines.

I chose to participate in this process, despite its accelerated and sometimes controversial nature, for two reasons. First, I wanted to help ensure the review was conducted with scientific rigor. And second, federal health officials prioritized examining areas where the evidence has become especially strong – particularly food processing, added sugars and sugary beverages, which closely aligns with my research.

My role, along with colleagues, was to review and synthesize that evidence and help clarify where the science is strongest and most consistent.

The latest dietary guidelines, published on Jan. 7, 2026, have received mixed reviews from nutrition experts.

What’s different in the new dietary guidelines?

The dietary guidelines, first published in 1980, are updated every five years. The newest version differs from the previous versions in a few key ways.

For one thing, the new report is shorter, at nine pages rather than 400. It offers simpler advice directly to the public, whereas previous guidelines were more directed at policymakers and nutrition experts.

Also, the new guidelines reflect an important paradigm shift in defining a healthy diet. For the past half-century, dietary advice has been shaped by a focus on general dietary patterns and targets for individual nutrients, such as protein, fat and carbohydrate. The new guidelines instead emphasize overall diet quality.

Some health and nutrition experts have criticized specific aspects of the guidelines, such as how the current administration developed them, or how they address saturated fat, beef, dairy, protein and alcohol intake. These points have dominated the public discourse. But while some of them are valid, they risk overshadowing the strongest, least controversial and most actionable conclusions from the scientific evidence.

What we found in our scientific assessment was that just a few straightforward changes to your diet – specifically, reducing highly processed foods and sugary drinks, and increasing whole grains – can meaningfully improve your health.

What the evidence actually shows

My research assistants and I evaluated the conclusions of studies on consuming sugar, highly processed foods and whole grains, and assessed how well they were conducted and how likely they were to be biased. We graded the overall quality of the findings as low, moderate or high based on standardized criteria such as their consistency and plausibility.

We found moderate to high quality evidence that people who eat higher amounts of processed foods have a higher risk of developing Type 2 diabetes, cardiovascular disease, dementia and death from any cause.

Similarly, we found moderately solid evidence that people who drink more sugar-sweetened beverages have a higher risk of obesity and Type 2 diabetes, as well as quite conclusive evidence that children who drink fruit juice have a higher risk of obesity. And consuming more beverages containing artificial sweeteners raises the risk of death from any cause and Alzheimer’s disease, based on moderately good evidence.

Whole grains, on the other hand, have a protective effect on health. We found high-quality evidence that people who eat more whole grains have a lower risk of cardiovascular disease and death from any cause. People who consume more dietary fiber, which is abundant in whole grains, have a lower risk of Type 2 diabetes and death from any cause, based on moderate-quality research.

According to the research we evaluated, it’s these aspects – too much highly processed food and too many sweetened beverages, and too few whole grain foods – that are significantly contributing to the epidemic of chronic diseases such as obesity, Type 2 diabetes and heart disease in this country – and not protein, beef or dairy intake.

Different types of food on rustic wooden table
Evidence suggests that people who eat higher amounts of processed foods have a higher risk of developing Type 2 diabetes, cardiovascular disease, dementia and death from any cause.
fcafotodigital/E+ via Getty Images

From scientific evidence to guidelines

Our report was the first one to recommend that the guidelines explicitly mention decreasing consumption of highly processed foods. Overall, though, research on the negative health effects of sugar and processed foods and the beneficial effects of whole grains has been building for many years and has been noted in previous reports.

On the other hand, research on how strongly protein, red meat, saturated fat and dairy are linked with chronic disease risk is much less conclusive. Yet the 2025 guidelines encourage increasing consumption of those foods – a change from previous versions.

The inverted pyramid imagery used to represent the 2025 guidelines also emphasizes protein – specifically, meat and dairy – by putting these foods in a highly prominent spot in the top left corner of the image. Whole grains sit at the very bottom, and except for milk, beverages are not represented.

Scientific advisers were not involved in designing the image.

Making small changes that can improve your health

An important point we encountered repeatedly in reviewing the research was that even small dietary changes could meaningfully lower people’s chronic disease risks.

For example, consuming just 10% fewer calories per day from highly processed foods could lower the risk of diabetes by 14%, according to one of the lead studies we relied on for the evidence review. Another study showed that eating one less serving of highly processed foods per day lowers the risk of heart disease by 4%.

You can achieve that simply by switching from a highly processed packaged bread to one with fewer ingredients or replacing one fast-food meal per week with a simple home-cooked meal. Or, switch your preferred brands of daily staples such as tomato sauce, yogurt, salad dressing, crackers and nut butter to ones that have fewer ingredients like added sugars, sweeteners, emulsifiers and preservatives.

Cutting down on sugary beverages – for example, soda, sweet teas, juices and energy drinks – had an equally dramatic effect. Simply drinking the equivalent of one can less per day lowers the risk of diabetes by 26% and the risk of heart disease by 14%.

And eating just one additional serving of whole grains per day – say, replacing packaged bread with whole grain bread – results in an 18% lower risk of diabetes and a 13% lower risk of death from all causes combined.

How to adopt ‘kitchen processing’

Another way to make these improvements is to take basic elements of food processing back from manufacturers and return them to your own kitchen – what I call “kitchen processing.” Humans have always processed food by chopping, cooking, fermenting, drying or freezing. The problem with highly processed foods isn’t just the industrial processing that transforms the chemical structure of natural ingredients, but also the chemicals added to improve taste and shelf life.

Kitchen processing, though, can instead be optimized for health and for your household’s flavor preferences – and you can easily do it without cooking from scratch. Here are some simple examples:

  • Instead of flavored yogurts, buy plain yogurt and add your favorite fruit or some homemade simple fruit compote.

  • Instead of sugary or diet beverages, use a squeeze of citrus or even a splash of juice to flavor plain sparkling water.

  • Start with a plain whole grain breakfast cereal and add your own favorite source of fiber and/or fruit.

  • Instead of packaged “energy bars,” make your own preferred mixture of nuts, seeds and dried fruit.

  • Instead of bottled salad dressing, make a simple one at home with olive oil, vinegar or lemon juice, a dab of mustard and other flavorings of choice, such as garlic, herbs, or honey.

You can adapt this way of thinking to the foods you eat most often by making similar types of swaps. They may seem small, but they will build over time and have an outsized effect on your health.

The Conversation

Michael I Goran receives funding from the National Institutes of Health and the Dr Robert C and Veronica Atkins Foundation. He is a scientific advisor to Eat Real (non-profit promoting better school meals) and has previously served as a scientific advisor to Bobbi (infant formula) and Begin Health (infant probiotics).

ref. Confused by the new dietary guidelines? Focus on these simple, evidence-based shifts to lower your chronic disease risk – https://theconversation.com/confused-by-the-new-dietary-guidelines-focus-on-these-simple-evidence-based-shifts-to-lower-your-chronic-disease-risk-273701

Private credit rating agencies shape Africa’s access to debt. Better oversight is needed

Source: The Conversation – Africa – By Daniel Cash, Senior Fellow, United Nations University; Aston University

Africa’s development finance challenge has reached a critical point. Mounting debt pressure is squeezing fiscal space. And essential needs in infrastructure, health and education remain unmet. The continent’s governments urgently need affordable access to international capital markets. Yet many continue to face borrowing costs that make development finance unviable.

Sovereign credit ratings – the assessments that determine how financial markets price a country’s risk – play a central role in this dynamic. These judgements about a government’s ability and willingness to repay debt are made by just three main agencies – S&P Global, Moody’s and Fitch. The grades they assign, ranging from investment grade to speculative or default, directly influence the interest rates governments pay when they borrow.

Within this system, the stakes for African economies are extremely high. Borrowing costs rise sharply once countries fall below investment grade. And when debt service consumes large shares of budgets, less remains for schools, hospitals or climate adaptation. Many institutional investors also operate under mandates restricting them to investment-grade bonds.

Read more: Africa’s development banks are being undermined: the continent will pay the price

Countries rated below this threshold are excluded from large pools of capital. In practice it means that credit ratings shape the cost of borrowing, as well as whether borrowing is possible at all.

I am a researcher who has examined how sovereign credit ratings operate within the international financial system. And I’ve followed debates about their role in development finance. Much of the criticism directed at the agencies has focused on: their distance from the countries they assess; the suitability of some analytical approaches; and the challenges of applying standardised models across different economic contexts.

Less attention has been paid to the position ratings now occupy within the global financial architecture. Credit rating agencies are private companies that assess the likelihood that governments and firms will repay their debts. They sell these assessments to investors, banks and financial institutions, rather than working for governments or international organisations. But their assessments have become embedded in regulation, investment mandates and policy processes in ways that shape public outcomes.

This has given ratings a governance-like influence over access to finance, borrowing costs and fiscal space. In practice, ratings help determine how expensive it is for governments to borrow. This determines how much room they have to spend on public priorities like health, education, and infrastructure. Yet, credit rating agencies were not created to play this role. They emerged as private firms in the early 1900s to provide information to investors. The frameworks for coordinating and overseeing their wider public impact – which grew long after they were established – developed gradually and unevenly over time.

The question isn’t whether ratings should be replaced. Rather, it’s how this influence is understood and managed.

Beyond the bias versus capacity debate

Discussions about Africa’s sovereign ratings often focus on two explanations. One is that African economies are systemically underrated, with critics pointing to rapid downgrades and assessments that appear harsher than those applied to comparable countries elsewhere.

Factors often cited include the location of analytical teams in advanced economies, limited exposure to domestic policy processes in the global south, and incentive structures shaped by closer engagement with regulators and market actors in major financial centres.

The other explanation emphasises macroeconomic fundamentals, the basic economic conditions that shape a government’s ability to service debt, such as growth prospects, export earnings, institutional strength and fiscal buffers. When these are weaker or more volatile, borrowing costs tend to be more sensitive to global shocks.

Both perspectives have merit. Yet neither fully explains a persistent pattern: governments often undertake significant reforms, sometimes at high political and social costs, but changes in ratings can lag well behind those efforts. During that period, borrowing costs remain high and market access constrained. It is this gap between reform and recognition that points to a deeper structural issue in how credit ratings operate within the global financial system.

Design by default

Credit ratings began as a commercial information service for investors. Over several decades, from the 1970s to the 2000s, they became embedded in financial regulation. United States regulators first incorporated ratings into capital rules in 1975 as benchmarks for determining risk charges. The European Union followed in the late 1980s and 1990s. Key international bodies followed.

This process was incremental, not the result of deliberate public design. Ratings were adopted because they were available, standardised and widely recognised. It’s argued that private sector reliance on ratings typically followed their incorporation into public regulation. But in fact markets relied informally on credit rating assessments long before regulators formalised their use.

By the late 1990s, ratings had become deeply woven into how financial markets function. The result was that formal regulatory reliance increased until ratings became essential for distinguishing creditworthiness. This, some have argued, may have encouraged reliance on ratings at the expense of independent risk assessment.

Today, sovereign credit ratings influence which countries can access development finance, at what cost, and on what terms. They shape the fiscal options available to governments, and therefore the policy space for pursuing development goals.

Yet ratings agencies remain private firms, operating under commercial incentives. They developed outside the multilateral system and were not originally designed for a governance role. The power they wield is real. But the mechanisms for coordinating that power over public development objectives emerged later and separately. This created a governance function without dedicated coordination or oversight structures.

Designing the missing layer

African countries have initiated reform efforts to address their development finance challenge. For instance, some work with credit rating agencies to improve data quality and strengthen institutions. But these efforts don’t always translate into timely changes in assessments.

Part of the difficulty lies in shared information constraints. The link between fiscal policy actions and market perception remains complex. Governments need ways to credibly signal reform. Agencies need reliable mechanisms to verify change. And investors need confidence that assessments reflect current conditions rather than outdated assumptions.

Read more: Africa’s new credit rating agency could change the rules of the game. Here’s how

While greater transparency can help, public debt data remains fragmented across databases and institutions.

A critical missing element in past reform efforts has been coordination infrastructure: dialogue platforms and credibility mechanisms that allow complex information to flow reliably between governments, agencies, investors and multilateral institutions.

Evidence suggests that external validation can help reforms gain market recognition. In practice, this points to the need for more structured interaction between governments, rating agencies, development partners and regional credit rating agencies around data, policy commitments and reform trajectories.

One option is the Financing for Development process. This is a multistakeholder forum coordinated by the United Nations that negotiates how the global financial system should support sustainable development. Addressing how credit ratings function within the financial system is a natural extension of this process.

Building a coordination layer need not mean replacing ratings or shifting them into the public sector. It means creating the transparency, dialogue and accountability structures that help any system function more effectively.

Recognising this reality helps explain how development finance actually works. As debt pressures rise and climate adaptation costs grow, putting this governance layer in place is now critical to safeguarding development outcomes in Africa.

The Conversation

Daniel Cash is affiliated with UN University Centre for Policy Research.

ref. Private credit rating agencies shape Africa’s access to debt. Better oversight is needed – https://theconversation.com/private-credit-rating-agencies-shape-africas-access-to-debt-better-oversight-is-needed-274858

Data centers told to pitch in as storms and cold weather boost power demand

Source: The Conversation – USA (2) – By Nikki Luke, Assistant Professor of Human Geography, University of Tennessee

During winter storms, physical damage to wires and high demand for heating put pressure on the electrical grid. Brett Carlsen/Getty Images

As Winter Storm Fern swept across the United States in late January 2026, bringing ice, snow and freezing temperatures, it left more than a million people without power, mostly in the Southeast.

Scrambling to meet higher than average demand, PJM, the nonprofit company that operates the grid serving much of the mid-Atlantic U.S., asked for federal permission to generate more power, even if it caused high levels of air pollution from burning relatively dirty fuels.

Energy Secretary Chris Wright agreed and took another step, too. He authorized PJM and ERCOT – the company that manages the Texas power grid – as well as Duke Energy, a major electricity supplier in the Southeast, to tell data centers and other large power-consuming businesses to turn on their backup generators.

The goal was to make sure there was enough power available to serve customers as the storm hit. Generally, these facilities power themselves and do not send power back to the grid. But Wright explained that their “industrial diesel generators” could “generate 35 gigawatts of power, or enough electricity to power many millions of homes.”

We are scholars of the electricity industry who live and work in the Southeast. In the wake of Winter Storm Fern, we see opportunities to power data centers with less pollution while helping communities prepare for, get through and recover from winter storms.

A close-up of a rack of electronics.
The electronics in data centers consume large amounts of electricity.
RJ Sangosti/MediaNews Group/The Denver Post via Getty Images

Data centers use enormous quantities of energy

Before Wright’s order, it was hard to say whether data centers would reduce the amount of electricity they take from the grid during storms or other emergencies.

This is a pressing question, because data centers’ power demands to support generative artificial intelligence are already driving up electricity prices in congested grids like PJM’s.

And data centers are expected to need even more power. Estimates vary widely, but the Lawrence Berkeley National Lab anticipates that the share of electricity production in the U.S. used by data centers could spike from 4.4% in 2023 to between 6.7% and 12% by 2028. PJM expects peak load growth of 32 gigawatts by 2030 – enough power to supply 30 million new homes, but nearly all going to new data centers. PJM’s job is to coordinate that energy – and figure out how much the public, or others, should pay to supply it.

The race to build new data centers and find the electricity to power them has sparked enormous public backlash about how data centers will inflate household energy costs. Other concerns are that power-hungry data centers fed by natural gas generators can hurt air quality, consume water and intensify climate damage. Many data centers are located, or proposed, in communities already burdened by high levels of pollution.

Local ordinances, regulations created by state utility commissions and proposed federal laws have tried to protect ratepayers from price hikes and require data centers to pay for the transmission and generation infrastructure they need.

Always-on connections?

In addition to placing an increasing burden on the grid, many data centers have asked utility companies for power connections that are active 99.999% of the time.

But since the 1970s, utilities have encouraged “demand response” programs, in which large power users agree to reduce their demand during peak times like Winter Storm Fern. In return, utilities offer financial incentives such as bill credits for participation.

Over the years, demand response programs have helped utility companies and power grid managers lower electricity demand at peak times in summer and winter. The proliferation of smart meters allows residential customers and smaller businesses to participate in these efforts as well. When aggregated with rooftop solar, batteries and electric vehicles, these distributed energy resources can be dispatched as “virtual power plants.”

A different approach

The terms of data center agreements with local governments and utilities often aren’t available to the public. That makes it hard to determine whether data centers could or would temporarily reduce their power use.

In some cases, uninterrupted access to power is necessary to maintain critical data systems, such as medical records, bank accounts and airline reservation systems.

Yet, data center demand has spiked with the AI boom, and developers have increasingly been willing to consider demand response. In August 2025, Google announced new agreements with Indiana Michigan Power and the Tennessee Valley Authority to provide “data center demand response by targeting machine learning workloads,” shifting “non-urgent compute tasks” away from times when the grid is strained. Several new companies have also been founded specifically to help AI data centers shift workloads and even use in-house battery storage to temporarily move data centers’ power use off the grid during power shortages.

An aerial view of metal equipment and wires with a city skyline in the background.
Large amounts of power move through parts of the U.S. electricity grid.
Joe Raedle/Getty Images

Flexibility for the future

One study has found that if data centers committed to using power flexibly, an additional 100 gigawatts of new load – enough to power around 70 million households – could be added to the grid without building new generation and transmission.

In another instance, researchers demonstrated how data centers could invest in offsite generation through virtual power plants to meet their energy needs. Installing solar panels with battery storage at businesses and homes can boost available electricity more quickly and cheaply than building a new full-size power plant. Virtual power plants also provide flexibility, as grid operators can tap into batteries, shift thermostats or shut down appliances in periods of peak demand. These projects can also benefit the buildings where they are hosted.

Distributed energy generation and storage, alongside winterizing power lines and using renewables, are key ways to help keep the lights on during and after winter storms.

Those efforts can make a big difference in places like Nashville, Tennessee, where more than 230,000 customers were without power at the peak of outages during Fern, not because there wasn’t enough electricity for their homes but because their power lines were down.

The future of AI is uncertain. Analysts caution that the AI industry may prove to be a speculative bubble: If demand flatlines, they say, electricity customers may end up paying for grid improvements and new generation built to meet needs that would not actually exist.

Onsite diesel generators are an emergency solution for large users such as data centers to reduce strain on the grid. Yet, this is not a long-term solution to winter storms. Instead, if data centers, utilities, regulators and grid operators are willing to also consider offsite distributed energy to meet electricity demand, then their investments could help keep energy prices down, reduce air pollution and harm to the climate, and help everyone stay powered up during summer heat and winter cold.

The Conversation

Nikki Luke is a fellow at the Climate and Community Institute. She receives funding from the Alfred P. Sloan Foundation. She previously worked at the U.S. Department of Energy.

Conor Harrison receives funding from Alfred P. Sloan Foundation and has previously received funding from the U.S. National Science Foundation.

ref. Data centers told to pitch in as storms and cold weather boost power demand – https://theconversation.com/data-centers-told-to-pitch-in-as-storms-and-cold-weather-boost-power-demand-274604

Climate change threatens the Winter Olympics’ future – and even snowmaking has limits for saving the Games

Source: The Conversation – USA (2) – By Steven R. Fassnacht, Professor of Snow Hydrology, Colorado State University

Italy’s Predazzo Ski Jumping Stadium, which is hosting events for the 2026 Winter Olympics, needed snowmaking machines for the Italian National Championship Open on Dec. 23, 2025. Mattia Ozbot/Getty Images

Watching the Winter Olympics is an adrenaline rush as athletes fly down snow-covered ski slopes, luge tracks and over the ice at breakneck speeds and with grace.

When the first Olympic Winter Games were held in Chamonix, France, in 1924, all 16 events took place outdoors. The athletes relied on natural snow for ski runs and freezing temperatures for ice rinks.

Two skaters on ice outside with mountains in the background. They are posing as if gliding together.
Sonja Henie, left, and Gilles Grafstrom at the Olympic Winter Games in Chamonix, France, in 1924.
The Associated Press

Nearly a century later, in 2022, the world watched skiers race down runs of 100% human-made snow near Beijing. Luge tracks and ski jumps have their own refrigeration, and four of the original events are now held indoors: figure skaters, speed skaters, curlers and hockey teams all compete in climate-controlled buildings.

Innovation made the 2022 Winter Games possible in Beijing. Ahead of the 2026 Winter Olympics in northern Italy, where snowfall was below average for the start of the season, officials had large lakes built near major venues to provide enough water for snowmaking. But snowmaking can go only so far in a warming climate.

As global temperatures rise, what will the Winter Games look like in another century? Will they be possible, even with innovations?

Former host cities that would be too warm

The average daytime temperature of Winter Games host cities in February has increased steadily since those first events in Chamonix, rising from 33 degrees Fahrenheit (0.4 Celsius) in the 1920s-1950s to 46 F (7.8 C) in the early 21st century.

In a recent study, scientists looked at the venues of 19 past Winter Olympics to see how each might hold up under future climate change.

A cross-country skier falls in front of another during a race. The second skier has his mouth open as if shouting.
Human-made snow was used to augment trails at the Sochi Games in Russia in 2014. Some athletes complained that it made the trails icier and more dangerous.
AP Photo/Dmitry Lovetsky

They found that by midcentury, four former host cities – Chamonix; Sochi, Russia; Grenoble, France; and Garmisch-Partenkirchen, Germany – would no longer have a reliable climate for hosting the Games, even under the United Nations’ best-case scenario for climate change, which assumes the world quickly cuts its greenhouse gas emissions. If the world continues burning fossil fuels at high rates, Squaw Valley, California, and Vancouver, British Columbia, would join the list of venues whose climates are no longer reliable for hosting the Winter Games.

By the 2080s, the scientists found, the climates in 12 of 22 former venues would be too unreliable to host the Winter Olympics’ outdoor events; among them were Turin, Italy; Nagano, Japan; and Innsbruck, Austria.

In 2026, five weeks separate the start of the Winter Olympics from the end of the Paralympics in mid-March. Host countries are responsible for both events, and some venues may increasingly find it difficult to keep enough snow on the ground, even with snowmaking capabilities, as snow seasons shorten.

Ideal snowmaking conditions today require a dewpoint temperature – a combined measure of coldness and humidity – of around 28 F (-2 C) or less. When there is more moisture in the air, snow and ice melt at colder temperatures, which affects snow on ski slopes and ice on bobsled, skeleton and luge tracks.

Stark white lines etched on a swath of brown mountains delineate ski routes and bobsled course.
A satellite view clearly shows the absence of natural snow during the 2022 Winter Olympics. Beijing’s bid to host the Winter Games had explained how extensively it would rely on snowmaking.
Joshua Stevens/NASA Earth Observatory
A gondola passes by with dark ground below and white ski slopes behind it.
The finish area of the Alpine ski venue at the 2022 Winter Olympics was white because of machine-made snow.
AP Photo/Robert F. Bukaty

As Colorado snow and sustainability scientists and also avid skiers, we’ve been watching the developments and studying the climate impact on the mountains and winter sports we love.

Conditions vary by location and year to year

The Earth’s climate will be warmer overall in the coming decades. Warmer air can mean more winter rain, particularly at lower elevations. Around the globe, snow has been covering less area. Low snowfall and warm temperatures made the start of the 2025-26 winter season particularly poor for Colorado’s ski resorts.

However, local changes vary. For example, in northern Colorado, the amount of snow has decreased since the 1970s, but the decline has mostly been at higher elevations.

Several machines pump out sprays of snow across a slope.
Snow cannons spray machine-made snow on a ski slope ahead of the 2026 Winter Olympics.
Mattia Ozbot/Getty Images

A future climate may also be more humid, which affects snowmaking and could affect bobsled, luge and skeleton tracks.

Of the 16 Winter Games sports today, half are affected by temperature and snow: Alpine skiing, biathlon, cross-country skiing, freestyle skiing, Nordic combined, ski jumping, ski mountaineering and snowboarding. And three are affected by temperature and humidity: bobsled, luge and skeleton.

Technology also changes

Developments in technology have helped the Winter Games adapt to some changes over the past century.

Hockey moved indoors, followed by skating. Luge and bobsled tracks were refrigerated in the 1960s. The Lake Placid Winter Games in 1980 in New York used snowmaking to augment natural snow on the ski slopes.

Today, indoor skiing facilities make skiing possible year-round. Ski Dubai, open since 2005, has five ski runs on a hill the height of a 25-story building inside a resort attached to a shopping mall.

Resorts are also using snowfarming to collect and store snow. The method is not new, but due to decreased snowfall and increased problems with snowmaking, more ski resorts are keeping leftover snow to be prepared for the next winter.

Two workers pack snow on an indoor ski slope with a sloped ceiling overhead.
Dubai has an indoor ski slope with multiple runs and a chairlift, all part of a shopping mall complex.
AP Photo/Jon Gambrell

But making snow and keeping it cold requires energy and water – and both become issues in a warming world. Water is becoming scarcer in some areas. And energy, if it means more fossil fuel use, further contributes to climate change.

The International Olympic Committee recognizes that the future climate will have a big impact on the Olympics, both winter and summer. It also recognizes the importance of ensuring that the adaptations are sustainable.

The Winter Olympics could become limited to more northerly locations, like Calgary, Alberta, or be pushed to higher elevations.

Summer Games are feeling climate pressure, too

The Summer Games also face challenges. Hot temperatures and high humidity can make competing in the summer difficult, but these sports have more flexibility than winter sports.

For example, changing the timing of typical summer events to another season can help alleviate excessive temperatures. The 2022 World Cup, normally a summer event, was held in November so Qatar could host it.

What makes adaptation more difficult for the Winter Games is the necessity of snow or ice for all of the events.

A snowboarder with 'USA' on her gloves puts her arms out for balance on a run.
Climate change threatens the ideal environments for snowboarders, like U.S. Olympian Hailey Langland, competing here during the women’s snowboard big air final in Beijing in 2022.
AP Photo/Jae C. Hong

Future depends on responses to climate change

In uncertain times, the Olympics offer a way for the world to come together.

People are thrilled by the athletic feats, like Jean-Claude Killy winning all three Alpine skiing events in 1968, and stories of perseverance, like the 1988 Jamaican bobsled team competing beyond all expectations.

The Winter Games’ outdoor sports may look very different in the future. How different will depend heavily on how countries respond to climate change.

This updates an article originally published on Feb. 19, 2022, to include the 2026 Winter Games.

The Conversation

The authors do not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and have disclosed no relevant affiliations beyond their academic appointment.

ref. Climate change threatens the Winter Olympics’ future – and even snowmaking has limits for saving the Games – https://theconversation.com/climate-change-threatens-the-winter-olympics-future-and-even-snowmaking-has-limits-for-saving-the-games-274800

Clergy protests against ICE turned to a classic – and powerful – American playlist

Source: The Conversation – USA (3) – By David W. Stowe, Professor of Religious Studies, Michigan State University

Clergy and community leaders demonstrate outside Minneapolis-St. Paul International Airport on Jan. 23, 2026, amid a surge by federal immigration agents. Brandon Bell/Getty Images

On Jan. 28, 2026, Bruce Springsteen released “Streets of Minneapolis,” a hard-hitting protest against the immigration enforcement surge in the city, including the killings of Renee Good and Alex Pretti. The song is all over social media, and the official video has already been streamed more than 5 million times. It’s hard to remember a time when a major artist has released a song in the midst of a specific political crisis.

Yet some of the most powerful music coming out of Minneapolis is of a much older vintage. Hundreds of clergy from around the country converged on the city in late January to take part in faith-based protests. Many were arrested while blocking a road near the airport. And they have been singing easily recognizable religious songs used during the Civil Rights Movement of the 1950s and ’60s, like “Amazing Grace,” “We Shall Overcome” and “This Little Light of Mine.”

I have been studying the politics of music and religion for more than 25 years, and I wrote about songs I called “secular spirituals” in my 2004 book, “How Sweet the Sound: Music in the Spiritual Lives of Americans.” Sometimes called “freedom songs,” they were galvanizing more than 60 years ago, and are still in use today.

But why these older songs, and why do they usually come out of the church? There have been many protest movements since the mid-20th century, and they have all produced new music. The freedom songs, though, have a unique staying power in American culture – partly because of their historical associations and partly because of the songs themselves.

‘We Shall Overcome’ was one of several songs at the 1963 March on Washington.

Stronger together

Some of protest music’s power has to do with singing itself. Making music in a group creates a tangible sense of community and collective purpose. Singing is a physical activity; it comes out of our core and helps foster solidarity with fellow singers.

Young activists working in the Deep South during the most violent years of the Civil Rights Movement spoke of the courage that came from singing freedom songs like “We Shall Overcome” in moments of physical danger. In addition to helping quell fear, the songs were unnerving to authorities trying to maintain segregation. “If you have to sing, do you have to sing so loud?” one activist recalled an armed deputy saying.

And when locked up for days in a foul jail, there wasn’t much else to do but sing. When a Birmingham, Alabama, police commissioner released young demonstrators he’d arrested, they recalled him complaining that their singing “made him sick.”

Test of time

Sometimes I ask students if they can think of more recent protest songs that occupy the same place as the freedom songs of the 1960s. There are some well-known candidates: Bob Marley’s “Get Up, Stand Up,” Green Day’s “American Idiot” and Public Enemy’s “Fight the Power,” to name a few. The Black Lives Matter movement alone helped produce several notable songs, including Beyoncé’s “Freedom,” Kendrick Lamar’s “Alright” and Childish Gambino’s “This Is America.”

But the older religious songs have advantages for on-the-ground protests. They have been around for so long that more people have had more chances to learn them, so protesters rarely struggle to pick up or remember a tune. As iconic church songs that have crossed over into secular spirituals, they were written to be memorable and singable, crowd-tested for at least a couple of generations. They are also easily adaptable, letting protesters craft new verses for their cause – as when civil rights activists added “We are not afraid” to the lyrics of “We Shall Overcome.”

A black-and-white photo shows a row of seated women inside a van or small space clapping as they sing.
Protesters sing at a civil rights demonstration in New York in 1963.
Bettmann Archive/Getty Images

And freedom songs link the current protesters to one of the best-known – and by some measures, most successful – protest movements of the past century. They create bonds of solidarity not just among those singing them in Minneapolis, but with protesters and activists of generations past.

These religious songs are associated with nonviolence, an important value in a citizen movement protesting violence committed by federal law enforcement. And for many activists, including the clergy who poured into Minneapolis, religious values are central to their willingness to stand up for citizens targeted by ICE.

Deep roots

The best-known secular spirituals actually predate the Civil Rights Movement. “We Shall Overcome” first appeared in written form in 1900 as “I’ll Overcome Some Day,” by the Methodist minister Charles Tindley, though the words and tunes are different. It was sung by striking Black tobacco workers in South Carolina in 1945 and made its way to the Highlander Folk School in Tennessee, an integrated training center for labor organizers and social justice activists.

It then came to the attention of iconic folk singer Pete Seeger, who changed some words and gave it wide exposure. “We Shall Overcome” has been sung everywhere from the 1963 March on Washington and anti-apartheid rallies in South Africa to South Korea, Lebanon and Northern Ireland.

“Amazing Grace” has an even longer history, dating back to a hymn written by John Newton, an 18th-century ship captain in the slave trade who later became an Anglican clergyman and penned an essay against slavery. Pioneering American gospel singer Mahalia Jackson recorded it in 1947 and sang it regularly during the 1960s.

Mahalia Jackson sings the Gospel hymn ‘How I Got Over’ at the March on Washington.

Firmly rooted in Protestant Christian theology, the song crossed over to a more secular audience through a 1970 cover version by folk singer Judy Collins, which reached No. 15 on the Billboard charts. During the Mississippi Freedom Summer of 1964, an initiative to register Black voters, Collins had heard the legendary organizer Fannie Lou Hamer singing “Amazing Grace,” a song she remembered from her Methodist childhood.

Opera star Jessye Norman sang it at Nelson Mandela’s 70th birthday tribute in London, and bagpipers played it at a 2002 interfaith service near Ground Zero to commemorate victims of 9/11.

‘This little light’

Another gospel song used in protests against ICE – “This little light of mine, I’m gonna let it shine” – has similarly murky historical origins and also passed through the Highlander Folk School into the Civil Rights Movement.

It expresses the impulse to be seen and heard, standing up for human rights and contributing to a movement much larger than each individual. But it could also mean letting a light shine on the truth – for example, demonstrators’ phones documenting what happened in the two killings in Minneapolis, contradicting some officials’ claims.

Like the Civil Rights Movement, the protests in Minneapolis involve protecting people of color from violence – as well as, more broadly, protecting immigrants’ and refugees’ legal right to due process. A big difference is that in the 1950s and 1960s, the federal government sometimes intervened to protect people subjected to violence by states and localities. Now, many Minnesotans are trying to protect people in their communities from agents of the federal government.

The Conversation

David W. Stowe does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

ref. Clergy protests against ICE turned to a classic – and powerful – American playlist – https://theconversation.com/clergy-protests-against-ice-turned-to-a-classic-and-powerful-american-playlist-274585

When everyday medicines make us sick: misuse and adverse effects

Source: The Conversation – in French – By Clément Delage, Associate Professor of Pharmacology (Faculté de Pharmacie de Paris) – Inserm Unit UMR-S 1144 "Optimisation Thérapeutique en Neuropsychopharmacologie" – Hospital Pharmacist (Hôpital Lariboisière, AP-HP), Université Paris Cité

Between the trivialization of self-medication and widespread ignorance of the risks, the most commonly used medicines – such as paracetamol – can cause adverse effects that are sometimes severe. Understanding why and how a remedy can become a poison lays the foundation for the proper use of medicines.

Every year, the misuse of medicines causes roughly 2,760 deaths and 210,000 hospitalizations in France, according to a study by the French network of pharmacovigilance centers. That represents 8.5% of hospitalizations and 1.5 times more deaths than road accidents.

While the exact share attributable to self-medication is hard to pin down, these figures point to an often overlooked reality: adverse effects are not confined to rare or complex treatments. They also involve everyday medicines – paracetamol, ibuprofen, antihistamines (drugs that counter histamine, which is responsible for the swelling, redness, itching and sneezing of allergic reactions), sleeping pills, cold syrups and so on.

When the remedy becomes a poison

“Everything is poison, nothing is poison: only the dose makes the poison.”
– Paracelsus (1493–1541)

Portrait of Paracelsus, after Quentin Metsys (1466–1530) – a copy, held at the Musée du Louvre, of the lost original.
CC BY-NC

This founding adage of pharmacy, taught to future pharmacists from their first year, remains just as relevant today. As early as the 16th century, Paracelsus understood that a substance could be beneficial or toxic depending on the dose, the duration and the context of exposure. The very word pharmacy derives from the Greek phármakon, which means both “remedy” and “poison.”

Paracetamol, a widely used analgesic and antipyretic (a drug that relieves pain and fever), is perceived as harmless. Yet it causes acute drug-induced hepatitis, particularly in accidental overdoses or when several products that contain it are unknowingly combined. In France, misuse of paracetamol is the leading drug-related cause of liver transplantation, warns the French medicines safety agency (Agence nationale de sécurité du médicament et des produits de santé, ANSM).

Ibuprofen, also widely used to relieve pain and fever, can for its part cause gastric ulcers, gastrointestinal bleeding or kidney failure when taken at high doses, over long periods, or together with other treatments that act on the kidney. Combined with angiotensin-converting enzyme inhibitors (among the first drugs prescribed for high blood pressure), for example, it can trigger functional renal failure.

Aspirin, still found in many medicine cabinets, thins the blood and can promote bleeding and hemorrhage, particularly in the digestive tract. In a very large overdose, it can even disturb the blood's acid-base balance and lead to coma or death without prompt treatment.

These examples illustrate a fundamental principle: there is no such thing as a risk-free medicine. All of them can, under certain conditions, cause harmful effects. The question, then, is not whether a medicine is dangerous, but under what conditions it becomes so.

Why do all medicines have adverse effects?

Understanding where adverse effects come from requires a detour through pharmacology, the science that studies the fate and action of medicines in the body.

Every drug works by binding to a specific molecular target – most often a receptor, an enzyme or an ion channel – in order to modify a biological function. But these active substances, foreign to the body, are never perfectly selective: they can interact with other targets, causing adverse effects – formerly known as side effects.

Moreover, most effects, whether beneficial or harmful, are dose-dependent. The relationship between a drug's concentration in the body and the intensity of its effect is described by a sigmoid dose-effect curve.

Each effect (therapeutic or adverse) has its own curve, and the optimal therapeutic zone (captured by the therapeutic index) is where benefit is greatest and toxicity lowest. This search for a balance between efficacy and safety underpins the benefit/risk balance, a central notion in every therapeutic decision.

Dose-effect curve of a drug

Thus, even with familiar molecules, a departure from the recommended dosage can tip a treatment over into toxicity.
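
The sigmoid dose-effect relationship described above is commonly written with the Hill (Emax) model; the formula below is a standard textbook form, shown here purely as an illustration:

    E(C) = \frac{E_{\max}\, C^{\,n}}{EC_{50}^{\,n} + C^{\,n}}, \qquad TI = \frac{TD_{50}}{ED_{50}}

Here C is the drug concentration, E_max the maximal effect, EC_50 the concentration giving half the maximal effect and n the Hill coefficient, which sets the steepness of the curve. The therapeutic index TI compares the dose that is toxic in 50% of cases (TD_50) with the dose that is effective in 50% of cases (ED_50): the wider that gap, the larger the margin of safety.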

Contraindications and interactions: when other factors come into play

Adverse effects do not depend on dose alone. Individual susceptibility, drug-drug interactions and physiological or pathological factors can all make them more likely.

In people with hepatic insufficiency (impaired liver function), for example, the breakdown of paracetamol normally carried out by the liver is slowed, so the drug accumulates and the risk of liver damage – hepatotoxicity – rises.

Alcohol, which acts on the same brain receptors as benzodiazepines (a family that includes anxiolytics such as bromazepam/Lexomil and alprazolam/Xanax), potentiates their sedative effects and respiratory depression (a drop in breathing rate that can become too low to keep the body supplied with oxygen). The dose-effect curves of the two compounds then add together, so adverse effects appear sooner and more strongly.

Likewise, some drugs interact with one another by modifying each other's metabolism, absorption or elimination. In that case, the dose-effect curve of the first compound is shifted to the right or to the left by the second.
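
A minimal Python sketch, with invented numbers, illustrates the idea (a toy model, not pharmacokinetic software): an interaction that halves a drug's clearance doubles its exposure, so the same dose lands higher up the sigmoid dose-effect curve.

    # Toy model only: invented parameters, illustrative of the principle above.
    def hill_effect(concentration, e_max=100.0, ec50=10.0, n=1.5):
        """Sigmoid (Hill/Emax) dose-effect relationship, in % of maximal effect."""
        return e_max * concentration**n / (ec50**n + concentration**n)

    def steady_exposure(dose, clearance_factor=1.0):
        """Hypothetical assumption: steady-state exposure scales inversely with clearance."""
        return dose / clearance_factor

    dose = 8.0  # arbitrary units
    for label, clearance in [("drug alone", 1.0), ("with a metabolism inhibitor", 0.5)]:
        c = steady_exposure(dose, clearance)
        print(f"{label}: exposure {c:.1f}, effect {hill_effect(c):.1f}% of maximum")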

These mechanisms explain the need for contraindications, precautions for use and strict dosage limits, all specified for each medicine in its marketing authorization. Before taking any medicine, users should read the package leaflet, which summarizes this essential information.

How are medication risks managed?

Before it reaches the market, every medicine undergoes a rigorous assessment of its benefit/risk balance. In France, this task falls to the ANSM.

Marketing authorization is granted after analysis of preclinical and clinical data, which determine in particular:

  • the therapeutic indications;

  • the recommended doses and durations of treatment;

  • the known contraindications and interactions.

But the evaluation does not stop once marketing authorization is granted. As soon as a medicine is used in real life, it enters a phase of pharmacovigilance: continuous monitoring of adverse effects reported by health professionals or by patients themselves.

Since 2020, France's portal for reporting adverse health events has allowed anyone to easily report a suspected effect, contributing to the early detection of safety signals.

The riskiest medicines are available only on prescription, because their benefit/risk balance must be weighed patient by patient, by a doctor. The rest, available without a prescription, are still dispensed exclusively in pharmacies, where the pharmacist plays a decisive role in assessment and advice. This human mediation is an essential link in the medication safety system.

Preventing drug toxicity: a collective challenge

Preventing medication-related accidents relies on several levels of vigilance.

At the individual level, people need to become familiar with the proper use of medicines.

A few simple habits considerably reduce the risk of overdose or drug interactions:

  • Read the package leaflet before taking a medicine.
  • Do not keep prescription medicines once the treatment is finished, and do not reuse them without medical advice.
  • Do not share your medicines with others.
  • Do not treat information found on the internet as medical advice.
  • Do not combine several medicines containing the same molecule.

But the responsibility cannot rest on the patient alone: doctors obviously have a key role to play in education and guidance, but so do pharmacists. As the most accessible front-line health professionals, pharmacists are best placed to detect and prevent misuse.

Promoting the proper use of medicines is also the job of health authorities: spreading prevention messages, simplifying package leaflets and being transparent about safety signals help strengthen public trust without denying the risks. Improving pharmacovigilance is another major public health lever, and it has been considerably strengthened since the Mediator scandal in 2009.

Finally, this vigilance should extend to herbal medicine – plant-based treatments (capsules, essential oils, herbal teas) – as well as dietary supplements and even certain foods, whose interactions with conventional treatments are often underestimated.

Like medicines, herbal products can cause adverse effects at high doses and can interact with drugs. St John's wort (Hypericum perforatum), for example, found in herbal teas reputed to ease anxiety, speeds up the metabolism and elimination of certain medicines and can render them ineffective.

Rebuilding the balance between trust and caution

A medicine is neither a consumer product like any other nor a poison to be shunned. It is a powerful therapeutic tool that demands discernment and respect. Its safety rests on a relationship of informed trust between patients, caregivers and institutions. Faced with the rise of self-medication and the flood of sometimes contradictory information, the challenge is not to demonize medicines but to restore a rational understanding of them.

Used well, a medicine heals; used badly, it harms. That is the whole point of Paracelsus' message, five centuries on.

The Conversation

Clément Delage does not work for, consult, own shares in or receive funding from any organization that would benefit from this article, and has declared no affiliations other than his research institution.

ref. Quand les médicaments du quotidien nous rendent malades : mésusages et effets indésirables – https://theconversation.com/quand-les-medicaments-du-quotidien-nous-rendent-malades-mesusages-et-effets-indesirables-273656

Africa's critical minerals: the G20 framework sets out how to benefit from them

Source: The Conversation – in French – By Glen Nwaila, Director of the Mining Institute and the African Research Centre for Ore Systems Science; Associate Professor of Geometallurgy and Machine Learning, University of the Witwatersrand

As the world turns to clean energy, minerals such as lithium, cobalt and manganese have become as important as oil once was. Africa holds major reserves of these critical minerals. Yet they are mostly exported as raw materials, only to return as expensive green technologies made in factories abroad. South Africa's G20 presidency has put in place a new framework for critical minerals, designed to help mineral-rich African countries capture more of the benefits of local processing and manufacturing. Geoscientists Glen Nwaila and Grant Bybee explain what it will take to mine these minerals safely and turn underground wealth into economic value in Africa.

What are critical minerals, and where do they fit into Africa's resources?

Cobalt, manganese, natural graphite, copper, nickel, lithium and iron ore are all essential for making solar panels, wind turbines, electric vehicle batteries and other green energy equipment.

Africa is home to major reserves of critical minerals. The continent holds 55% of the world's cobalt deposits, 47.65% of its manganese and 21.6% of its natural graphite.

Read more: Critical minerals: ambitions for Africa

About 5.9% of the world's copper, 5.6% of its nickel, 1% of its lithium and 0.6% of its iron ore are found in Africa.

South Africa holds between 80% and 90% of the world's platinum group metals and more than 70% of global chromium and manganese resources. These metals are essential for making components for clean energy technologies and electronics.

In 2025, the International Energy Agency projected that demand for lithium would grow fivefold between 2025 and 2040, and that demand for graphite and nickel would double. Between 50% and 60% more cobalt and rare earth elements will be needed by 2040, and copper demand will rise by 30% over the same period.

What are the main challenges facing these valuable resources?

In many African economies, critical minerals are exported raw or semi-processed, to be used in making various green energy technologies. African countries then import those technologies, missing out on the jobs and industries that could be created if they made green energy components themselves.

Processing critical minerals and elements in Africa would create around 2.3 million jobs on the continent and lift continental GDP by about 12%. It would also help address chronic unemployment: South Africa's unemployment rate stands at 31.9%, rising to 43.7% among young people aged 15 to 34.

What solutions are being proposed?

The G20's new critical minerals framework sets out clear rules and standards to ensure that more value is added locally – for example, by processing minerals where they are mined rather than shipping them out raw. This is known as promoting “local beneficiation at source,” or value creation and value retention.

The framework supports spreading mining, transport, processing and sales across different countries.

That will reduce dependence on any single country or company. It will also encourage more reliable supply chains that are less vulnerable to disruption.

Read more: Between China and the United States, Africa must establish itself as the arbiter of critical minerals

The framework also proposes that critical minerals mining be governed by strict, fair rules that protect people, economies and the environment, in line with African countries' own laws and policies.

It further aims to create a clear map (or inventory) of where all the continent's critical minerals are located, so that exploration – particularly in under-explored areas – can go ahead without harming communities or the environment.

Read more: The scramble for Africa's essential minerals such as lithium: how the continent should respond to demand

It encourages new ideas, new technologies and training so that people can gain the skills needed to work in green energy industries.

Although the framework is a voluntary, non-binding document, it is essential as a guide to best practice.

Why is the role of geoscientists essential?

Geoscience shapes daily life in ways most people never notice. Hydrogeologists help ensure that cities, farms and mines have reliable, clean water without harming the environment. Geophysicists can “see” underground with specialized tools to find minerals; they also determine where it is safe to build roads, tunnels and power stations, and they monitor natural hazards such as earthquakes.

Geoscience spans many fields. Geometallurgists work out how to process mined rock more efficiently, using less energy and water and producing less waste. Geodata scientists turn satellite images and ground measurements into maps used to plan cities and adapt to climate change. Resource geologists estimate how much valuable mineral or metal can be extracted, and the risks involved.

Read more: African minerals are being traded for security: why that's a bad idea

Engineering geologists help keep buildings, tunnels, dams and mine waste facilities safe. Environmental geologists monitor soil, water and air to make sure development does not harm people or the environment.

Africa's vast reserves of critical minerals can create jobs, economic growth and sustainable development only if countries have enough well-trained geoscientists to find, extract and process them. Their expertise is what turns underground resources into real economic opportunities.

Africa continues to train many talented geoscientists, who work across critical mineral value chains and make valuable contributions. But more advanced skills in geodata science, geometallurgy, predictive modelling and leadership are needed, and significant gaps remain.

To close those gaps, African governments, universities, industry partners and international collaborators must urgently invest in targeted education and training programmes focused on advanced geodata science, geometallurgy, predictive modelling, mineral systems science and leadership development. Partnerships with private companies need to be built, and students should take part in international knowledge exchanges.

Mining companies should be given incentives to share their knowledge, so that African professionals are trained to do high-value geoscience and mining work themselves.

That would allow Africa not only to extract its underground mineral wealth but to make full use of it, supporting inclusive economic growth, job creation and a just energy transition.

The Conversation

Glen Nwaila receives research funding from the Open Society Foundations, in collaboration with the Southern Centre for Inequality Studies at Wits University, to support his work on critical minerals in Africa.

Grant Bybee does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

ref. Minéraux critiques d’Afrique : le cadre du G20 définit les moyens d’en profiter – https://theconversation.com/mineraux-critiques-dafrique-le-cadre-du-g20-definit-les-moyens-den-profiter-274747

Ransomware: what it is and why it concerns you

Source: The Conversation – in French – By Thembekile Olivia Mayayise, Senior Lecturer, University of the Witwatersrand

Ransomware is a type of malicious software that makes a victim's data, system or device inaccessible, locking and encrypting it (rendering it unreadable) until a ransom is paid to the attackers.

It is one of the most widespread and destructive forms of cyberattack affecting organisations worldwide. An Interpol report identified ransomware as one of the most prevalent cyberthreats in Africa in 2024, with South Africa reporting 12,281 detections and Egypt 17,849.

Despite global efforts to curb it, ransomware continues to thrive, driven by cybercriminals chasing quick financial gain. In its first-quarter 2025 report, the global cybersecurity firm Sophos found that 71% of South African organisations hit by ransomware paid a ransom and recovered their data. But the total cost of a ransomware attack is hard to quantify: it goes beyond the ransom payment to include revenue lost while systems are down and potential damage to reputation.

Cybercriminals often pick organisations where a service outage has major public or operational consequences, which increases the pressure to pay. Power grids, health systems, transport networks and financial systems are typical examples. When victims refuse to pay, attackers often threaten to leak sensitive or confidential information.

One reason ransomware has become so widespread in Africa is the continent's lag in cybersecurity. Many organisations have no dedicated cybersecurity resources or skills, and they lack the awareness, tools and infrastructure needed to defend against cyberattacks.

In this environment, hackers can operate with relative ease. Every business leader – especially those overseeing information and communication technologies (ICT) or managing sensitive data – should be asking one crucial question: could our organisation survive a ransomware attack?

This is not just a technical question but a governance one. Board members and executive teams are increasingly accountable for risk management and cyber resilience.

As a researcher and expert in IT governance and cybersecurity, I see the African region becoming a major hotspot for cyberattacks. Organisations need to be aware of the risks and take steps to mitigate them.

Ransomware attacks can be extremely costly, and an organisation may struggle to recover from an incident – or not survive it at all.

The weaknesses that increase ransomware risk

The telecommunications company Verizon's 2025 report on data breaches found that the number of organisations affected by ransomware attacks had risen by 37% on the previous year. That shows how poorly prepared many organisations are to prevent an attack.

A business continuity plan sets out how a company keeps operating through a disruption; an IT disaster recovery plan is one part of it. These plans are essential for keeping the business running after an attack, because affected companies often face prolonged downtime, loss of access to systems and data, and severe operational disruption.

Professional hackers now sell ready-made ransomware tools, making it easier and more profitable for cybercriminals to launch attacks without worrying about the consequences.

Attackers can get into systems in several ways:

  • weak security controls, such as insecure passwords or authentication mechanisms

  • unmonitored networks, with no intrusion detection systems capable of flagging suspicious network activity

  • human error, when employees mistakenly click on links in emails that carry ransomware.

Insufficient network monitoring can let attackers remain undetected long enough to gather information on vulnerabilities and identify the key systems to target. In many cases, employees unknowingly let malware in through links or attachments in phishing emails. Phishing is a social engineering attack that uses manipulation techniques to trick users into revealing sensitive information, such as banking details or login credentials, or into clicking on malicious links.
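
As a crude illustration of the kind of behavioural signal monitoring can pick up, here is a minimal Python sketch – a toy, nothing like a real intrusion detection system – that flags an unusually high rate of recent file changes in a shared folder, one pattern seen when ransomware encrypts files in bulk. The path and threshold are invented examples.

    # Toy indicator only: real intrusion detection inspects network traffic and
    # endpoint telemetry; this just counts recently modified files.
    import time
    from pathlib import Path

    def recently_modified(folder: Path, window_seconds: int) -> int:
        cutoff = time.time() - window_seconds
        return sum(1 for p in folder.rglob("*")
                   if p.is_file() and p.stat().st_mtime >= cutoff)

    def watch(folder: Path, threshold: int = 200, interval: int = 60) -> None:
        while True:
            changed = recently_modified(folder, interval)
            if changed >= threshold:
                print(f"ALERT: {changed} files changed in {interval}s under {folder}")
            time.sleep(interval)

    # Example call (hypothetical path): watch(Path("/srv/shared-documents"))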

Paying the ransom

Attackers usually demand payment in bitcoin or other cryptocurrencies, because such payments are very hard to trace. Paying the ransom offers no guarantee of recovering all the data, nor any protection against future attacks. According to the global cybersecurity firm Check Point, notorious ransomware groups such as Medusa have popularised double-extortion tactics.

These groups demand payment and threaten to publish the stolen data online. They often use social media and the dark web (a part of the internet accessible only with special software), which lets them stay anonymous and untraceable. Their aim is to publicly humiliate victims or leak sensitive information in order to pressure organisations into meeting their demands.

Such breaches also feed phishing scams, as exposed email addresses and credentials circulate online and lead to further data breaches. Websites such as Have I Been Pwned can help you check whether your email address has been compromised in a previous breach.
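
For passwords specifically, Have I Been Pwned also offers a free “Pwned Passwords” range API that can be queried without ever sending the password itself. The sketch below is illustrative only; the email-breach lookup mentioned above uses a separate, authenticated API and is easiest to do on the website.

    # Checks how many times a password appears in known breaches, using the
    # k-anonymity range API: only the first 5 characters of the SHA-1 hash
    # leave your machine.
    import hashlib
    import urllib.request

    def times_password_pwned(password: str) -> int:
        sha1 = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
        prefix, suffix = sha1[:5], sha1[5:]
        req = urllib.request.Request(
            f"https://api.pwnedpasswords.com/range/{prefix}",
            headers={"User-Agent": "pwned-passwords-example"},
        )
        with urllib.request.urlopen(req) as resp:
            body = resp.read().decode("utf-8")
        for line in body.splitlines():  # each line: HASH_SUFFIX:COUNT
            candidate, _, count = line.partition(":")
            if candidate == suffix:
                return int(count)
        return 0

    print(times_password_pwned("password123"))  # breached passwords return a large count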

Building organisational resilience

Organisations can strengthen their cybersecurity in several ways.

  • Put in place strong technical and administrative measures to keep data secure: effective access controls, network monitoring tools and regular backups of systems and data (a minimal backup sketch follows this list).

  • Use tools that block malware attacks at an early stage and raise alerts on suspicious activity, including strong endpoint protection so that every device connected to the network is covered by intrusion detection that helps spot unusual network activity.

  • Equip staff with the knowledge and vigilance needed to detect and prevent potential threats.

  • Develop, document and communicate a clear incident response plan.

  • Bring in external cybersecurity experts or managed security services when the organisation lacks the skills or capacity to handle security alone.

  • Develop, maintain and regularly test business continuity and ICT disaster recovery plans.

  • Take out cyber insurance to cover risks that cannot be entirely eliminated.
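
As one concrete illustration of the backup point above, here is a minimal Python sketch – not a backup product – that copies a folder to a timestamped location and records SHA-256 checksums so the copy can be verified later. The paths are hypothetical, and a real deployment would add offline or off-site copies, encryption and regular restore tests.

    # Minimal backup-with-manifest sketch: copy a directory and store checksums.
    import hashlib
    import json
    import shutil
    from datetime import datetime
    from pathlib import Path

    def sha256_of(path: Path) -> str:
        digest = hashlib.sha256()
        with path.open("rb") as fh:
            for chunk in iter(lambda: fh.read(1 << 20), b""):
                digest.update(chunk)
        return digest.hexdigest()

    def backup_with_manifest(source: Path, backup_root: Path) -> Path:
        stamp = datetime.now().strftime("%Y%m%d-%H%M%S")
        target = backup_root / f"{source.name}-{stamp}"
        shutil.copytree(source, target)  # full copy of the data
        manifest = {str(p.relative_to(target)): sha256_of(p)
                    for p in target.rglob("*") if p.is_file()}
        (target / "MANIFEST.json").write_text(json.dumps(manifest, indent=2))
        return target

    # Example (hypothetical paths):
    # backup_with_manifest(Path("/srv/finance-data"), Path("/mnt/offline-backups"))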

Ransomware attacks are a serious and growing threat to individuals and organisations. They can cause data loss, financial losses, operational disruption and reputational damage. No security measure can guarantee total protection against such attacks, but the measures described here can help.

The Conversation

Thembekile Olivia Mayayise does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

ref. Ransomware : qu’est-ce que c’est et pourquoi cela vous concerne ? – https://theconversation.com/ransomware-quest-ce-que-cest-et-pourquoi-cela-vous-concerne-274459