Source: The Conversation – Canada – By Philip Mai, Co-director and Senior Researcher, Social Media Lab, Toronto Metropolitan University
The Canadian government has reached an agreement with the social media platform TikTok after years of debate over the app’s data practices, particularly those affecting young users. The deal allows TikTok to continue operating in Canada under tighter oversight rather than facing a shutdown.
As social media researchers at the Social Media Lab at Toronto Metropolitan University, we’ve always paid close attention to the state of social media in Canada. We have followed the TikTok ban saga closely since early 2020, when United States President Donald Trump first tried to ban the platform, long before he later came out in favour of keeping it.
While the new agreement does move towards greater oversight of TikTok, major concerns remain. TikTok’s parent company, ByteDance, is based in China and Chinese national security laws can compel companies to co-operate with state authorities. This underlying risk sits beyond the reach of Canada’s safeguards.
The agreement follows a new national security review that reversed an earlier conclusion pointing toward closure of TikTok’s Canadian operations. Instead of a ban, the federal government has chosen a regulatory approach, one that keeps the app available while imposing legally binding conditions. The deal reduces some risks, but it does not resolve deeper questions about ownership, data flows and national security.
So what has TikTok agreed to? And what will the millions of Canadian users, creators, advertisers and cultural groups that rely on the platform notice?
Stronger protections for youth and minors
Under the new rules, TikTok must strengthen its protection of Canadian user data. This includes creating a security “gateway” to control access to that data, adopting privacy-enhancing technologies and allowing independent third-party monitoring to verify how data is handled.
For everyday users, the focus on youth protection is likely to be the most visible change. Stricter age limits could affect livestreaming. Gift features may be more restricted for younger users. Content involving minors is likely to face stricter moderation.
Canadian creators will also feel the impact. Those with audiences largely made up of teenagers may face tighter moderation or additional eligibility checks for certain features and monetization tools. Sponsors may also ask more detailed questions about audience demographics as brands become more cautious about youth-focused content.
Many changes will happen behind the scenes. As TikTok Canada adjusts to the new requirements, its verification processes, advertising tools and moderation systems are expected to become more demanding.
As the government now requires stronger protection of Canadian user data, people who earn money on the platform may encounter extra steps. These may include stricter identity checks, added requirements for business accounts or ad payments and clearer information about where Canadian user data is stored.
Does this make TikTok safer? Compared to what existed before, the agreement does move toward greater oversight. Independent monitoring, if carried out properly, gives the government some visibility into TikTok’s data practices and the commitments are legally binding rather than voluntary.
Canadian data can still leave Canada
Enforcement details are still unclear. The government has said it will appoint an independent monitor, but has not named the monitor, explained how audits will work or detailed what penalties TikTok would face for failing to comply. Without clear consequences, oversight could prove weaker in practice than it appears on paper.
The agreement also stops short of requiring full data localization. Canadian user data does not have to stay entirely within the country. Although technical controls may limit access, data can still move through systems outside Canada. This leaves some exposure to unauthorized access or foreign influence.
Overall, the agreement reflects a compromise. Canada avoided a disruptive ban; TikTok accepted tighter rules to keep operating in a key market. The deal reduces some risks, but it does not resolve deeper questions about ownership, data flows and national security.
Those tensions are likely to resurface as Canada continues to grapple with how to regulate global platforms that play an outsized role in everyday life.
Anatoliy Gruzd receives funding from the Canada Research Chair program (SSHRC).
Philip Mai does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.
We associate New Year with deep mid-winter and the tidy date of January 1, but for nearly 600 years, from 1155 until 1752, the new year in England and Wales began on March 25. This day was one of the quarter days that historically divided the year and on which rents and debts were settled; March 25 was the quarter day on which annual accounts were finalised. So, around about now, we’d have been preparing to welcome in a new year alongside the warmer weather and spring blooms.
Celebrations were doubled, as the legal and ecclesiastical calendars worked in harmony: March 25 is also Lady Day, or the Feast of the Annunciation. Falling exactly nine months before Christmas Day, for Christians it marks when the archangel Gabriel informed Mary that she was shortly to bear a son.
Feast days are normally days of indulgence and merrymaking, but Lady Day normally falls in Lent, a time of abstinence. This meant, for some, Lady Day was a temporary lightening of Lenten restrictions.
Also known as Annunciation Day, Lady Day has sometimes fallen on Good Friday, as it did in 1608. This day is the opposite of a feast day, marking the crucifixion and death of Christ, which is observed through fasting and abstinence. The poet John Donne reflected on this crossover in 1608 in Upon the Annunciation where he saw it as an opportunity to be extra pious:
“Tamely, frail body, abstain today;
today My soul eats twice,
Christ hither and away”
So for Donne, this was a day of fasting and reflection to commemorate both the coming of Christ and his death.
Superstitions
Lady Day has many associated superstitions. An anonymous pamphlet printed in 1721 called When my Lord Falls in my Lady’s Lap, England Beware of a Great Mishap takes its title from an old saying that means that it is unlucky when Lady Day falls on or near Easter Sunday. The author proceeds to run through the many calamities that have happened on such inauspicious occasions.
For instance, it tells of how in 1117 the heir to Henry I, William Adelin, was drowned in the sinking of the White Ship along with 300 other souls. The author hasn’t got their facts quite straight here, as this disaster actually happened in November 1120. By Victorian times, this superstition about Lady Day falling near Easter Sunday was considered old fashioned, with The Hampshire Advertiser describing it as a “former ill omen” in its 21 March 1846 edition.
Customs
Lady Day is still celebrated in some parts of the UK. In Hampshire, the Tichborne Dole on Lady Day dates back to around 1150. Mabella (or Isabella), Lady de Tichborne of Alresford, made a deathbed request that an annual donation of bread, baked with grains from her lands, be made in her memory to the parish poor.
Her rather less charitably minded husband, Sir Roger, agreed on condition that his benevolence was limited to crops from just the land that she could walk around while carrying a single burning log from the fire. According to the legend, the dying Mabella crawled her way around some 23 acres before the flame went out. This area is still known as “The Crawls”.
The Tichborne Dole (1671) by Gillis van Tilborgh. Wikimedia
It’s said Mabella left a curse on the house: if ever the dole was stopped, the family line would die out. Specifically, she vowed that a generation of seven sons would be followed by a generation of seven daughters. The dole continued uninterrupted until 1794, when it was halted, and it would seem that Mabella’s curse then came to pass: the last male Tichborne had a family of seven daughters. And so the custom was reinstated.
A film, The Tichborne Curse, was released in 1947. The reinstated Dole is still taking place today. Adults from the parishes of Tichborne and Cheriton are entitled to claim one gallon (2kg) of flour, and children half a gallon each.
Always in April
The dating system in the US, Britain and Ireland changed in 1752 when these countries adopted the Gregorian calendar. Then the legal New Year in these countries became the same as it had been in Scotland for the last century and a half: January 1.
However, it wasn’t just the year start that needed adjusting, as the new calendar was now out by several days. This meant that in England, 11 days were “lost” as Wednesday September 2 1752 was followed by Thursday September 14 1752 in order to right things. The jump must have been very disconcerting if we consider how much the clocks going forward an hour throws us out for a while.
In Britain, the legacy of the old-style dating system lives on in our tax system. The new tax year had begun on March 25 (the old New Year); it was moved to April 5, and later to April 6, to account for the days lost in the 1752 calendar change.
This day became known as Old Lady Day. April 6 now stood in for the financial aspects of the quarter day, which meant it was the date on which new leases on farms and land began; farm labouring families often moved into new tied housing on that day as they signed new year-long contracts. Thomas Hardy includes this in his 1891 novel Tess of the d’Urbervilles: Tess is hired on a farm upon “her agreeing to remain till Old Lady-Day”.
So March 25 may be a day that for most goes by with little notice now but it was once a major holiday that marked the beginning of the new year.
Sara Read does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.
Source: The Conversation – UK – By Costas Velis, Lecturer in Resource Efficiency Systems, School of Civil Engineering, University of Leeds; Imperial College London
The world is struggling to deal with ever-growing quantities of waste.
A new World Bank Group report, What a Waste 3.0, shows that more than 2.6 billion tonnes of municipal solid waste (which includes rubbish from households, businesses and street cleaning) were generated in 2022. That figure is projected to rise to 3.9 billion tonnes by 2050. The good news is that the share of waste that is mismanaged is expected to fall over that period, from around 30% to around 20%.
That sounds like progress. But percentages can be misleading. The quantity of mismanaged waste, including plastics, is projected to remain almost unchanged, at around 760 million tonnes. This means that by 2050, enormous quantities of waste will still be openly dumped, burned or otherwise unmanaged, with many households and communities left to deal with it themselves.
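The arithmetic behind that point is easy to check. Here is a rough back-of-the-envelope calculation in Python using the rounded headline figures above (the report’s own estimate is around 760 million tonnes, so these rounded inputs land slightly high):

```python
# Why a falling mismanagement *share* does not mean less mismanaged waste:
# total waste generation grows enough to cancel out the improvement.
waste_2022 = 2.6e9   # tonnes of municipal solid waste generated, 2022
waste_2050 = 3.9e9   # projected tonnes, 2050
share_2022 = 0.30    # share mismanaged, 2022 (~30%)
share_2050 = 0.20    # projected share mismanaged, 2050 (~20%)

mismanaged_2022 = waste_2022 * share_2022
mismanaged_2050 = waste_2050 * share_2050

print(f"2022: ~{mismanaged_2022 / 1e6:.0f} million tonnes mismanaged")
print(f"2050: ~{mismanaged_2050 / 1e6:.0f} million tonnes mismanaged")
# Both come out at roughly 780 million tonnes: the share falls by a third,
# but the total grows by half, so the absolute amount barely moves.
```

The share falling from 30% to 20% sounds like a one-third improvement, yet the growth in total waste almost exactly offsets it.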
This new report, which we contributed to, brings together the most recent publicly accessible municipal waste data from 217 countries and economies (such as the Channel Islands) and 262 cities. It highlights that although waste systems are improving in many places, those gains are being undermined by the growth in the amount of waste generated.
This matters because when waste is not managed properly, the consequences affect human health, the environment and the economy. Poor waste management contributes to air and water pollution, damages ecosystems, increases greenhouse gas emissions and makes cities harder and less pleasant to live in.
One of the clearest examples is open burning. In many developing countries, where formal waste collection remains incomplete or absent, open burning is one of the main ways households and communities “self-manage” their waste. These fires burn at low and uneven temperatures. Combined with a mixed waste stream that can include plastics, organics and other materials, they release a complex cocktail of pollutants that can threaten the health of people living and working nearby.
With new data on self-management, this report shows how waste is actually managed across large parts of the world, especially where formal systems remain weak. Forms of self-management of waste include open dumping, open burning, burying waste in informal pits, dumping into waterways and coastal waters, and some forms of informal recovery such as recycling or composting.
So if the harms of poor waste management are well known, why does the problem persist?
One reason is cost. Municipal waste management is resource intensive. Many countries are still spending far less than is needed to provide universal and reliable services. Our analysis suggests that even basic systems involving collection, transport and disposal tend to cost at least US$40 (£30) to US$45 per tonne in low-income countries. In middle-income countries, basic systems cost roughly US$70 to US$80 per tonne, while in high-income countries costs can exceed US$200 per tonne.
At those cost levels, low-income countries would have needed around 0.78% of their combined GDP in 2022 to achieve universal waste management coverage. Middle-income countries would have needed roughly 0.31% to 0.46% of GDP. Yet reported public spending on solid waste management is less than 0.15% of GDP in about three-quarters of low- and middle-income countries, and 0.31% in high-income countries.
That financing gap helps explain why waste collection is not comprehensively provided, why open dumping is still common and why so many people are left to manage waste themselves.
Around 2 billion people do not have access to solid waste collection, meaning they have to manage it themselves, often through dumping and open burning, as in Nizamat Fort Campus, West Bengal in India. Biswarup Ganguly, CC BY
The total financial costs are also rising fast. Globally, municipal waste management cost more than US$250 billion in 2022. Under a business-as-usual scenario, that annual cost is projected to reach US$426 billion by 2050.
Shifting the costs
The cost of inaction is higher than these service costs alone suggest. Poor waste management brings wider economic losses, for example through ill health, reduced land values, damaged ecosystems, lost materials and harm to sectors such as tourism, agriculture and fisheries.
The world may not be saving money by underinvesting in waste management. It is shifting the costs elsewhere – onto public health, the environment and future generations.
This is especially important in low- and lower-middle-income countries, where waste generation is rising rapidly, but service coverage and infrastructure are often far below sufficient levels. This report estimates that these countries will require hundreds of billions of dollars in investment over the next 25 years just to expand and improve municipal waste systems. Without faster investment, existing service gaps will widen and the costs of inaction will grow.
The world’s waste crisis cannot be understood only as an environmental problem. It is also a financing, public health, governance and development problem. Better data helps us see that more clearly.
Waste management is improving, but not fast enough. Unless investment and performance accelerate, the amount of mismanaged waste worldwide is unlikely to change, causing harm to public health.
Costas Velis has consulted for UNEP – International Environmental Technology Centre, the Organisation for Economic Co-operation and Development (OECD), EMG, the Resources and Waste Advisory Group (with funds from GIZ), the ICF (with funds from The Pew Charitable Trusts), and MARS Inc. (via Imperial Consultants). He receives funding from UK Research and Innovation and the Global Challenges Research Fund, Grid-Arendal, The World Bank Group via UN Operations and the International Union for Conservation of Nature, and the EU via a UK Research and Innovation grant agreement. He serves on the steering committee for Project STOP by SYSTEMIQ Indonesia; was chair of the International Solid Waste Association Marine Litter Task Force; sits on the policy and innovation forum of the Chartered Institution of Wastes Management; is a member of, and has served on the steering committee of, the Scientists’ Coalition for an Effective Plastics Treaty; and is the owner and director of Fuelogy, a small research consultancy registered in the UK that offers scientifically impartial services in solid waste management, resource recovery and the circular economy to sustainability-focused consultancies, non-governmental organisations and international organisations.
Ed Cook has consulted for: Women in Informal Employment, Globalizing and Organizing (WIEGO), World Bank Group, Julie’s Bicycle, Vision 2025, ICF (funded by Pew Charitable Trust), OHE, WasteAware (funded by GIZ), IUCN (funded by World Bank via UNOPs). He has worked on research projects funded by: Grid Arendal (funded by NORAD), Mars, Eunomia Research and Consulting (funded by The World Bank Group), and ICF (funded by the Pew Charitable Trust). He is a Chartered Waste Manager with the Chartered Institute of Wastes Management in the UK, a member of The Scientists’ Coalition for an Effective Plastics Treaty, and a member of the International Solid Waste Association.
Source: The Conversation – UK – By Antonios Kelarakis, Reader in Polymers and Nanomaterials, School of Pharmacy and Biomedical Sciences, University of Lancashire
On November 14 1985, a letter announcing the discovery of a superstable species of carbon appeared in the science journal Nature. Even the letter’s title, C₆₀: Buckminsterfullerene, caused a stir among the journal’s scholarly readers.
Molecules are usually named with sterile precision. This one was named after the American architect and futurist Richard Buckminster Fuller (Bucky to his friends), whose geodesic domes had become icons of modern design in the 1950s and 60s.
Fuller’s spherical domes were designed to be lightweight yet strong, with each triangular element distributing stress evenly across a curved framework. C₆₀ was the atomic analogue of these domes, built not from steel struts but carbon atoms – each joined by strong bonds with three of its neighbours to create a tiny spherical cage.
This new allotrope of carbon was so stable and symmetric that it redrew the map of molecular architecture. It kicked off a scientific sprint that led, barely a decade later, to the 1996 Nobel prize in chemistry for English scientist Harold Kroto and his American colleagues Robert Curl and Richard Smalley for their discovery.
Fullerenes (now nicknamed Buckyballs) had always existed on Earth – in candle soot, volcanic emissions and ancient minerals. But their scientific discovery emerged from an attempt to simulate the chemistry of carbon-rich red giant stars.
The discovery opened the era of nanotechnology – the manufacture and manipulation of materials at previously impossibly small scales. But this is not the only way Fuller’s name is remembered in science.
Buckminster Fuller holding a geodesic sphere, the structure he pioneered. Wikimedia, CC BY-NC-ND
Who was Buckminster Fuller?
Few 20th-century figures are as hard to classify as Fuller. He was, at the least, an inventor, designer, engineer, writer, philosopher and futurist. Born in Massachusetts in 1895, his formal education was brief and rather turbulent – he was expelled twice from Harvard University. Yet this did not lessen his ambition to redesign the world.
Fuller could be eccentric and sometimes controversial. His early enterprises frequently failed, yet his charisma and boundless optimism made him a compelling public figure. The result was a remarkable portfolio of inventions and concepts, showcasing bold prototypes and radical ideas.
His earliest geodesic domes were built from lightweight materials, typically steel tubular struts connected in a triangular lattice and clad with acrylic panels. They capitalised on the structural advantage of symmetry: enclosing a vast space with relatively little material and remaining exceptionally strong.
Fuller patented the design in 1951. Despite initial scepticism from some in the architectural establishment, geodesic domes soon found practical applications. The US Marine Corps used them for rapidly deployable radar stations in Arctic conditions.
One of the most famous examples is the giant dome built for the Expo 67 international exposition in the Canadian city of Montreal. Known today as the Montreal Biosphere, the structure became one of the most recognisable symbols of futuristic architecture in the 1960s.
Alongside his designs, Fuller spent much of his life developing Synergetics, a philosophical-geometric framework exploring how structures and energies interact in nature. At the heart of this work was “ephemeralisation” — a term Fuller coined to describe the process of achieving ever greater results with fewer materials and less energy.
In later life, he became a global intellectual celebrity, delivering thousands of lectures around the world. Fuller captivated audiences with a unique vision of design, technology and planetary stewardship — once delivering a marathon series of lectures entitled “Everything I know”. It ran for 42 hours.
The power of symmetry
Symmetry is among science’s most powerful unifying codes and one of its most versatile interpretive tools. It reveals surprising equivalences between forms that differ in size but not in structure.
In the 1960s, footballs adopted a geometry similar to Fuller’s geodesic dome: a combination of 12 pentagons and 20 hexagons stitched into a resilient mesh that absorbs force and rolls with minimal deformation. Indeed, a diagram of a football was used to illustrate the announcement of C₆₀: Buckminsterfullerene.
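That shared geometry can be verified with a few lines of arithmetic. The sketch below (in Python, purely for illustration) counts the carbon atoms in the C₆₀ cage from its 12 pentagonal and 20 hexagonal faces: each atom sits at a vertex shared by exactly three faces, and each bond is an edge shared by two.

```python
# Count vertices (atoms) and edges (bonds) of the truncated icosahedron
# that the football and the C60 molecule share.
pentagons, hexagons = 12, 20
face_edge_total = pentagons * 5 + hexagons * 6  # edge slots summed over all faces

vertices = face_edge_total // 3  # each vertex is shared by 3 faces
edges = face_edge_total // 2     # each edge is shared by 2 faces
faces = pentagons + hexagons

# Sanity check with Euler's formula for a convex polyhedron: V - E + F = 2
assert vertices - edges + faces == 2

print(vertices, edges, faces)  # 60 atoms, 90 bonds, 32 faces
```

The count of 60 vertices is exactly why the molecule is C₆₀ and no other size: this is the smallest fullerene in which no two pentagons share an edge.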
A growing family of atom-thin, superstrong materials has emerged since that 1985 Nature letter. These include the tiny-in-diameter but much longer carbon nanotubes in 1991, and the one-atom thick graphene in 2004 – both of which are now widely used in electronics, sensors, composites and energy devices.
When added to polymer composites or metal alloys, these tiny carbon cages strengthen and lighten materials, enhancing performance in everything from aircraft components and solar panels to medical tools including MRI scanners.
Doing more with less
The structure of fullerenes naturally realises Fuller’s principle of ephemeralisation – the ability to do more and more with less and less.
Fuller imagined technological progress as a path toward efficiency, elegance, sustainability and abundance. He applied ephemeralisation across his designs, harnessing science and geometry to achieve maximum performance with minimal resources.
Beyond geodesic domes, his innovations included the Dymaxion House – a prefabricated, environmentally efficient home designed for easy mass production and transport – and the Dymaxion Car, patented in 1933, whose streamlined aerodynamic bodywork was designed to carry more passengers while improving both fuel efficiency and top speed.
Fuller also imagined radical solutions for extreme environments. These included the Undersea Island – a submerged base anchored by crisscrossing cables to stay rock steady in storms – and the suspension building system, which inverted the idea of a suspension bridge into an arched dome that created vast interior space with minimal material.
Fuller died in 1983 after a lifetime spent redesigning the world – and reimagining how humanity might live. Two years later, chemistry paid him an unexpected tribute: a perfectly symmetrical carbon molecule was named after him, recognising his lifelong dedication to geometrical efficiency.
In the nanosized Buckyball, Fuller’s aspirational social ideas are encapsulated in a molecule that embodies minimalism, efficiency and intelligent design.
Antonios Kelarakis does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.
Imagine a man wants to buy a new shirt for work that he plans to wear once a week for at least the next five years. When browsing for options, he finds one shirt from a lower-quality brand priced at £20 and one shirt from a high-quality brand for £50. Which one should he buy?
From his previous experience with the two brands, he knows that if he plans to wear the shirt once a week (so roughly 50 times per year) the lower-quality shirt will last him about a year. After that, he will need to replace it. The high-quality shirt will be good for at least four years. But clearly, the high-quality shirt is also more expensive.
Our theoretical shopper would probably conclude that the high-quality shirt makes more economic sense. Taking into account the purchase price relative to how many times he can wear the shirt, it would cost him only 25 pence for each time he would wear it, compared to the lower-quality shirt at 40 pence.
This is the logic of “cost per wear”. Some fashion blogs and small businesses have started using this concept to make the case for high-quality clothing. The rationale is simple – higher-quality clothing should last longer, making a higher purchase price worthwhile in the long run. Cost per wear is calculated by dividing the garment price by the number of times the consumer expects to wear it.
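The calculation is simple to spell out. As a minimal sketch (the function name and figures simply restate the shirt example above), cost per wear divides the purchase price by the expected number of wears:

```python
def cost_per_wear(price_pounds: float, expected_wears: int) -> float:
    """Cost in pence for each time the garment is expected to be worn."""
    return price_pounds * 100 / expected_wears

wears_per_year = 50  # worn roughly once a week

# The £20 shirt lasts about a year; the £50 shirt at least four.
cheap = cost_per_wear(20, 1 * wears_per_year)
quality = cost_per_wear(50, 4 * wears_per_year)

print(f"lower-quality shirt: {cheap:.0f}p per wear")  # 40p
print(f"high-quality shirt: {quality:.0f}p per wear")  # 25p
```

On this measure the pricier shirt is the cheaper purchase, which is exactly the reframing the labelling idea relies on.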
Essentially, cost per wear works much like unit pricing in supermarkets. These measures already help consumers compare things like the price of a product per 100g, per chocolate bar in a multipack, or per laundry load. But this same logic isn’t yet applied to clothes in a retail setting.
The fashion industry is one of the largest contributors to environmental harm, accounting for up to 8% of the world’s carbon emissions, causing immense water pollution due to textile treatments such as dyeing, and producing millions of tonnes of textile waste annually.
Using cost per wear in shops or online retail spaces could reduce the environmental impact of fashion – the more frequently a garment can be worn, the more efficiently the consumed resources are used. And of course the longer that garment remains in use, the less often it needs to be replaced.
The problem is that most shoppers don’t know how long a garment will last. Without a prompt in stores or online, many consumers do not even consider clothing longevity when buying.
But standardised fabric-testing methods exist already. These methods assess the durability of fabric according to the number of abrasion cycles (that is, the number of rubs against an abrasive surface) it can tolerate before showing signs of wear. This could easily be applied to clothing, allowing retailers to include cost per wear labels alongside the purchase price.
In research I carried out with Lucia Reisch from Cambridge Judge Business School, we tested this idea. In a number of experiments, we showed participants from online panels a lower-quality, cheaper garment (a sweater, for example) and a higher-quality, more expensive version. We then asked which they would prefer.
Fast fashion suddenly wasn’t so affordable
When we included information on the cost per wear for both options – or even just for the high-quality option (showing a lower cost per wear compared with a poorer-quality option or a reference value) – participants were more likely to choose the more expensive, high-quality option.
The effect was stronger when participants were shopping for everyday wear over occasion wear, when they could compare the cost per wear between options, and when the cost per wear information was said to be certified by an independent third party. Participants then trusted the information more, and we found that this could outperform a general durability claim made by a brand.
Our studies showed that cost per wear can make cheap fashion suddenly appear more expensive to shoppers – the high-quality options were viewed as better financial investments. And by choosing the more economical, high-quality option, participants were also choosing the greener option.
Cost per wear can increase the perception of affordability for more expensive, high-quality clothing. But of course many shoppers will still not be able to afford the higher purchase price, even though they know it would make more long-term economic sense.
And cost per wear only reflects the durability of an item as one dimension of sustainability. It does not reflect ethical considerations, such as the conditions workers face in the production process, or ecological aspects such as the use of natural or synthetic fibres.
Brands and retailers must also be willing to display cost per wear labels without regulation. High-quality brands may arguably have a greater incentive to do so than fast fashion brands.
However, the concept of cost per wear is still worth pursuing. It can prompt shoppers at the point of purchase to consider a garment’s durability and how often they might wear it. And ideally, it would motivate them to ditch fast fashion and choose greener options – even if just to save money in the long run.
Lisa Eckmann does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.
Denim is present in practically every country in the world and is widely adopted as one of the most common forms of everyday attire. Its appeal spans generations and social groups: jeans are worn worldwide by those who follow fashion and those who do not, by people seeking to stand out and by those who prefer to blend in. However, many of us have never found the perfect pair.
Although denim has been produced since the 16th century, its association with American culture and durable workwear emerged during the Californian gold rush of the 1850s. It was during this time that Levi’s – now arguably the most recognisable denim brand – was established.
Levi Strauss, an immigrant entrepreneur who arrived in California from Bavaria in the 1850s, opened a dry goods business catering to miners. One of his customers, the tailor Jacob Davis, developed the innovative use of metal rivets to reinforce stress points in work trousers, making them more durable. Strauss and Davis jointly patented this technique, and the Levi’s brand was born.
Blue jeans were originally seen as a symbol of labourers (like the miners), and they also gained a strong association with cowboys. In the decades that followed, denim jeans evolved from practical workwear into one of the most iconic and enduring symbols of global fashion and culture. Film stars such as Marlon Brando and James Dean popularised the jeans-and-t-shirt look for a young generation in the 1950s. Their roles personified motorcycle-loving nonconformists, and 1950s Hollywood embraced denim as the garment of rebellion.
Today, the cultural significance of denim jeans has moved beyond early associations with workwear, the cowboy and the teenage rebel, to become a staple worn by people of all ages and backgrounds.
Finding the perfect pair
Denim jeans are often seen as a problematic fashion product in terms of sustainability, because their production leaves a considerable environmental footprint.
Cheap prices on the high street can encourage consumers to treat denim products as short-term items, reducing their lifespan. Cotton, which is commonly the main fabric for denim, is incredibly water intensive; the production of one pair of jeans uses approximately 7,500 litres of water.
Different components involved in the making of a single pair of jeans, such as denim, thread, cotton and buttons, can originate from countries all over the world. This raises questions about the environmental costs involved in the production process. A further issue is that jeans are often not made from a single fibre and therefore cannot easily be recycled.
Adding to sustainability concerns, at the consumer level, the perfect pair of jeans remains an elusive concept. But in a recently published book chapter, I explain that the perfect pair of jeans is elusive for a reason. Jeans have to be correct for the individual wearer in terms of comfort, social and personal identity, and also the complexity of fit.
Previous reports have focused on women’s struggle to find jeans that fit and are flattering. The inability to find the perfect pair of jeans may encourage overconsumption, due to repeated purchasing based on poor fit.
My research shows that this is an issue which applies to all genders. The men I spoke to noted how they resented paying a higher price for brands like Levi’s, so spent less by purchasing cheap, high street alternatives. This attitude can lead to overconsumption, as low price points achieved through low-quality production often compromise product longevity.
This demonstrates the self-perpetuating cycle of fast fashion, driven by cheap, low-quality production, and contradicts the original purpose of jeans: durability and longevity. The combination of highly environmentally damaging production processes with overconsumption results in even greater environmental harm.
Retailers can make efforts to reduce the trend of overconsumption with better fitting garments. However, fit is a complex issue for retailers as well as consumers. For the retailer, producing jeans in a wide range of sizes and styles is often not cost effective, and complex sizing systems can also confuse the consumer.
Technology could provide future solutions to improving the accuracy of fit. Personalised virtual fitting, made possible through improvements in 3D human shape recognition, could ensure improved fit for the consumer. This would benefit online shoppers, although the technology does remain in its infancy, and is yet to be adopted by major online fashion retailers. Virtual fitting rooms also cannot replicate the feeling of denim next to the skin, so although the fit may be perfect, comfort could be compromised.
Ultimately, the enduring challenge of finding the “perfect pair” of jeans highlights not only the garment’s cultural significance but also the opportunity for the fashion industry – and consumers – to move toward more sustainable, better-fitting and more thoughtfully designed denim for the future.
Rose Marroncelli does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.
Source: The Conversation – UK – By Tony Kushner, James Parkes Professor of Jewish/non-Jewish Relations, University of Southampton
The arson attack on four ambulances in Golders Green early on March 23 has been called “a horrific antisemitic attack” by the prime minister, Keir Starmer.
These ambulances were run for the benefit of both the local Jewish and non-Jewish communities in this district of north London by a charity called Hatzola – meaning “rescue” in Hebrew. As these ambulances played a key supportive role in enabling access to health provisions for the good of all, it is especially shocking – and has further heightened the anxieties of British Jews.
This is a community still reeling after the attack on the Heaton Park Synagogue in Manchester in October 2025 on Yom Kippur, the holiest day of the Jewish religious calendar, which killed two people. And the arson attack is part of a wider international wave of antisemitism, which has included Norway, the US and the Netherlands in the past few weeks. This is not an easy time to be a diaspora Jew.
Those who have carried out the attacks have come from different backgrounds. Many have been influenced by online hate emanating from Isis, and potentially individuals or groups supportive of the hardline Iranian regime. Counter-terror police are investigating whether an Iran-linked group is responsible for the arson. The terror group Harakat Ashab al-Yamin al-Islamiya (The Islamic Movement of the People of the Right Hand) has claimed responsibility for the attack, as well as others in Europe.
These attacks reflect the complex pattern of hostility towards Jews in the UK, which has drawn on a mixture of domestic and foreign-inspired hatred. In terms of the latter, there are several examples going back a century which can be highlighted.
The most well-known is the Jew hatred spread by Oswald Mosley and his British Union of Fascists (BUF), formed in 1932, which was at least partly stimulated by German Nazism.
Overall, however, antisemitism in Britain has deep domestic roots, dating back to the readmission of the Jews in the 17th century after a 300-year expulsion. But it has rarely resulted in violent attacks – even if it has made life uncomfortable for the Jewish minority in moments of crisis.
Golders Green’s rich history
These roots can be seen in relation to Golders Green, which started to develop as a place of Jewish settlement from the first world war onward. While there were some Jews in this then-small suburb in the 19th century, there was not much in the way of a formal community.
Pam Fox, the social historian of Golders Green’s Jewish community, states that “Before 1910 there was just a handful of Jews living in the community, but by 1915 … there were 300 households”. Growth continued after the first world war, and in 1922 the first synagogue, Dunstan Road, was opened. Today, the Jewish population is around 8,000 and represents some 40% of the suburb’s population.
Such crude statistics do not reflect the diversity of the Jewish population both past and present. As early as the 1930s, more orthodox Jews, some of them refugees from Nazism, were establishing different forms of worship from Dunstan Road, which was more in the form of mainstream orthodox religiosity. By the second world war, there were at least 14,000 Jewish refugees in north-west London (including Golders Green), who varied from the totally secular, to reform, to the very orthodox.
After the war, there were more influxes of Jewish refugees, including from Egypt, Hungary and later South Africa, as well as second- and third-generation Jews whose origins were from eastern Europe and then the East End of London. While the very orthodox are currently the growing group in Golders Green, it still has an incredibly heterogeneous Jewish population.
For most Jews, the vibrant cultural, social and religious life of Golders Green has made it a very comfortable place to call home. Even so, there has been antisemitism – organised in the form of the BUF and more commonly in the form of more casual prejudice.
In late 1945, the Hampstead Petition Movement aimed to remove all foreign Jews from the wider area, and it had some local support. In the Nazi era, local newspapers, including the Golders Green Times, objected to the alleged bad behaviour of the Jewish refugees who were falsely accused of being unpatriotic and selfish.
Today, the idea of Golders Green as a Jewish suburb ignores the reality that most of its population is not of that background. It also overlooks the many types of Jewishness articulated there. Such nuances are lost on those who attacked the ambulances, vehicles that served the whole community.
It says much about the times that such distinctions are not made – many people hold all Jews responsible for the actions of a particular Israeli government. Yet in Golders Green, as elsewhere, Jews hold a range of attitudes towards the problems of the Middle East, for political, cultural and religious reasons. Ultimately, such attacks are, as local Jewish resident Sam Adler put it, “cynical and cowardly”. If nothing else, as with Manchester, they have also brought communities together in solidarity and resistance to the ugliness of antisemitism.
Tony Kushner does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.
Even before the US-Israel war on Iran, people in the UK were unusually vulnerable to sudden swings in the cost of energy. Depending on how you count it, either 11% or 30% of households are officially energy poor, already struggling to afford basic needs in times of relative peace.
The government’s fuel poverty strategy for England, published in January 2026, focuses on long-term measures such as home insulation upgrades. But it says little about how to protect vulnerable households quickly in this crisis or in future price shocks.
To reduce the immediate harm, ministers need tools that can be deployed now, not just reforms that may take years to deliver.
Here are three measures that could be deployed right now.
A social tariff
The most effective step would be to discount energy bills for lower-income or vulnerable households – a so-called “social tariff”.
This is often seen as difficult or politically risky. But energy remains one of the few essential services without targeted affordability support. Water and telecoms already have such schemes, and energy should be no different.
In a policy brief we published late last year, we showed that the UK electricity system hits lower-income households hardest and produces “uneven bills”. This means that two households using the same amount of electricity can face differences in bills of up to 15% depending on where they live, and another 22% depending on payment method or contract type.
Laundry costs more – or less – depending on where you live. Carlos G. Lopez / shutterstock
A social tariff would be fairer. Through a lower unit rate or a bill discount it would protect households with the least room to cut energy use – such as older people, low-income households, those with medical-related electricity needs and renters in inefficient homes.
These policies can also encourage energy efficiency. For instance, in California, the state’s Care programme discounts electricity and gas bills for low-income households up to a set level of use. Beyond that point, rates revert to normal.
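The tiered mechanism described above can be sketched in a few lines. The rates and threshold below are hypothetical placeholders, not the real Care (or any UK) figures; the point is only to show how a discounted unit rate below a usage cap lowers bills while leaving the normal marginal price above the cap.

```python
# Illustrative two-tier social tariff, loosely modelled on schemes such
# as California's Care programme: a discounted unit rate applies up to
# a usage threshold, and the standard rate applies beyond it.
# All rates (pence per kWh) and the threshold are hypothetical.

def tiered_bill_pence(kwh_used, discount_rate_p=20, standard_rate_p=30,
                      threshold_kwh=300):
    """Bill for one period, in pence, under a two-tier tariff."""
    discounted_kwh = min(kwh_used, threshold_kwh)      # billed at the low rate
    full_price_kwh = max(kwh_used - threshold_kwh, 0)  # billed at the normal rate
    return discounted_kwh * discount_rate_p + full_price_kwh * standard_rate_p

# A household using 250 kWh stays inside the discounted band; one using
# 400 kWh pays the standard rate on the last 100 kWh, so cutting usage
# still lowers the bill.
print(tiered_bill_pence(250))  # 5000 pence (£50)
print(tiered_bill_pence(400))  # 9000 pence (£90)
```

Because the discount stops at the threshold, every extra unit above it costs the full rate, which is what preserves the efficiency incentive.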
This is not unrealistic administratively. Portugal introduced automatic eligibility for its social energy tariff in 2016. This used existing tax and social security data to expand the number of households receiving support by 400%.
The UK already has the data infrastructure to do something similar through its benefits and tax system – energy companies wouldn’t have to find out household incomes themselves; they could just ask the government. The near-term step here is straightforward – ministers could ask the industry regulator Ofgem and energy companies to design an automatic, income-linked tariff for winter 2026, instead of waiting for another crisis response.
Emergency support
The second priority is to reduce immediate exposure to the most volatile and expensive fuels.
Government has traditionally responded to shocks like the Ukraine war with emergency bill support. However, these ill-targeted policies are impractical and do not reduce reliance on volatile fossil fuels. Unlike a social tariff, which provides continuous, means-tested support, emergency support is usually a one-off, flat payment to all households, meaning those on lower incomes benefit less in relative terms, although it can also be targeted at vulnerable households.
Transport is one immediate opportunity. Rather than (yet again) freezing fuel duty, the government could redirect this money into cheaper public transport for low-income and car-dependent households.
Germany’s €9 (£8) public transport ticket, introduced in 2022 during the energy and cost-of-living crisis, shows that governments really can act quickly when necessary.
Subsidised public transport could help out people struggling with expensive energy. PintoArt / shutterstock
Households that are off the gas grid and reliant on heating oil are especially exposed when global prices rise. Alongside short-term support, like the welcome £50 million announced last week, the government should consider targeted support to switch from oil to heat pumps. The economic case for heat pumps is especially strong for households relying on heating oil. This switch would immediately reduce their exposure to oil prices.
Help households access existing savings
The third priority is to ensure vulnerable households can benefit from money-saving features that are already available in the electricity system.
The near-term priority is not new schemes, but making existing ones usable. The government could require suppliers, local authorities and landlords to prioritise smart meters and other low-carbon technologies in social housing and private rentals, where people face the greatest barriers to accessing these savings. It could also fund trusted community organisations to help households choose suitable tariffs, avoid poor deals and access support if they fall into arrears.
This may sound less dramatic than a new subsidy scheme, but clarity matters in a price shock. Households cannot benefit from cheaper tariffs or smart systems they do not know about or cannot use, so financial support often flows most to those already best placed to respond.
The UK cannot prevent global energy price shocks, but it can choose who bears the greatest burden. What is missing is political will. If the government is serious about protecting vulnerable households, it needs a strategic short-term response that matches the scale of urgency.
The authors do not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and have disclosed no relevant affiliations beyond their academic appointment.
Approximately one month into the Iran war, public opinion on both sides of the Atlantic is decidedly opposed to this conflict. A recent CBS/YouGov poll shows that 60% of the public oppose military action against Iran, as do a similar percentage in the UK: 59%.
As a political scientist who studies public attitudes about foreign policy and the use of force, my research addresses an important question: under what conditions do people support military action? Based on this research, the widespread opposition to American military action against Iran is completely understandable, as the action lacks the usual foundations for support from domestic as well as international audiences.
Decades of research in political science show that broad support for use of the military rests on three key pillars: purpose, likelihood of success and legitimacy. When these elements are present, support can be high. It can even be maintained in the face of significant costs, both financial and in terms of lives lost. When they are absent, support tends to be weak, polarised and prone to erosion.
At present, these key ingredients are missing.
What’s the objective?
First and foremost, the Trump administration’s strategic rationale remains poorly articulated. Public support for military action is strongly tied to policy goals. When citizens believe force is being used to prevent a clear and immediate danger, they are far more likely to support it. But the US has not made the case that Iran was close to achieving a nuclear weapon – or posed other imminent threats for that matter. The CBS/YouGov poll confirmed that the public does not believe the rationale for war has been convincingly articulated – by a count of 68% to 32%.
In the first few days of the bombing, US president Donald Trump strongly advocated regime change as a reason for the war. But among voters there is little appetite for changing another country’s domestic politics. A majority thinks this is not important, although now that the war has started, a small majority of respondents (53%) feel it would be a mistake to leave the regime in power. It is a big political risk, though: American voters do not have to cast their memories back far to recall unsuccessful regime-change missions.
What does winning look like?
The ambiguity surrounding mission goals complicates the second key element: what constitutes success? Airstrikes can damage nuclear facilities or disrupt Iran’s ballistic weapons programme. But they cannot eliminate the scientific knowledge or technical know-how that will enable the regime to rebuild. And, clearly, if previous strikes were as decisive as Trump has claimed, the current action would be unnecessary. The rest of the world knows that too.
The same question about what qualifies as success also applies to regime change. Killing the leadership is one thing; creating a stable government that breaks from the Islamic revolution and protects American interests is quite another. The essential nature of politics is that there are competing factions, each seeking to build or maintain governmental structures that advantage its own interests. The type of government Iran might adopt under a regime-change scenario – and which faction(s) would control the levers of domestic power – are two dramatic unknowns.
Any plan to completely disempower the Islamic Revolutionary Guard Corps (IRGC) would risk a re-run of the disastrous de-Baathification strategy after the 2003 invasion of Iraq. Leaving the IRGC even partially in power leaves the civilian population at continued risk and would hardly make it easier to achieve American aims – whatever they may be. As we’re still seeing in Libya, a power struggle between factions is unlikely to produce the sort of result the region – and the wider world – want to see.
Is this a legitimate war?
Finally, there are severe concerns regarding the legitimacy of the war. Citizens rely on cues from their political leaders and institutions to inform their views about the use of force. The Trump administration did not make a sustained case for military action before the war, nor has it secured Congressional authorisation or bipartisan support. There is no clear domestic consensus supporting the use of force.
Not only is there no clear signal of legitimacy domestically, the same is true internationally. Multilateral backing – especially through institutions such as the United Nations security council – has historically played an important legitimising role (especially in reassuring domestic audiences who want a second opinion). It is absent here – in fact, key US allies have expressed their opposition. The UK’s prime minister, Keir Starmer, has declared that military action against Iran is “not our war”, language remarkably similar to that of Germany’s defence minister, Boris Pistorius. Having forgone building international support prior to the use of force, the US is now struggling for support from allies – particularly when it comes to protecting shipping through the Strait of Hormuz.
None of this means the operation will be uniformly unpopular. Partisan attachment is also important: those who back the administration are likely to view the operation more favourably. Accordingly, a majority of Republicans (84%) support the action, though there is a strong divide between Maga (92%) and non-Maga (70%) Republicans.
Meanwhile, Democrats (92%) and independents (69%) overwhelmingly disapprove of the conflict, so domestic support for the conflict is extremely narrow. The factors that sustain backing beyond a president’s core supporters — perceived necessity with clear strategic goals, confidence in eventual success of the mission, and legitimacy conferred by domestic or international institutions — are conspicuously absent.
Over time, events on the ground may change how the public views the conflict. Iranian efforts to expand the scope of conflict – particularly when directed at US allies – could swing support towards the American action. Or, a unified Iranian opposition could quickly coalesce on who and what replaces the Islamic Republic government. These are just two possibilities seen through rose-tinted spectacles – frankly, developments that complicate America’s position seem just as likely.
Without significant changes in clarity of goal, verifiable indicators of success, or signals of legitimacy from persuasive actors outside the administration, support will diminish. But the consequences are graver than the domestic popularity of an American military operation. Sidelining institutional constraints – such as Congressional authorisation and international institutions – erodes limits on the use of force.
When the US ignores these constraints, it invites other countries to do the same, resulting in a more unstable and insecure world.
Jason Reifler has received funding for research examining public opinion about foreign policy and the use of military force from the Economic and Social Research Council (UK), Volkswagen Stiftung, and National Science Foundation (US).
*Furor teutonicus* depicts the Battle of the Teutoburg Forest, in 9 CE. In a forest in present-day Germany, three Roman legions fell into an ambush laid by a coalition of Germanic tribes and were massacred. Paja Jovanović, 1888
Asymmetric warfare, a phrase that has become ubiquitous over recent decades in analyses of contemporary conflicts, is in fact a phenomenon as old as war itself. Everywhere and always, belligerents weaker than their adversaries have sought to use the most varied means to defeat enemies who were more numerous and better equipped.
Asymmetry is now part of the vocabulary of general staffs and of political authorities engaged in external theatres. Publications on the subject have also multiplied sharply in recent years, and the war waged by the United States and Israel against Iran has, since February 2026, confirmed the importance of the phenomenon.
The term is now so widely used that one could almost forget that, only a few years ago, it was entirely unknown to the general public and barely mentioned in expert circles. The situation changed dramatically with what many observers describe as the post-cold-war period, born on the ruins of the World Trade Center in September 2001, and alongside the war on terror and the low-intensity conflicts pitting great powers against much weaker actors, whether state or non-state.
Yet this type of conflict is far older than the interventions of US forces and their allies in Afghanistan in 2001 and Iraq in 2003.
Defining asymmetric warfare
Etymologically rooted in negation, asymmetry is inseparable from symmetry, but also from dissymmetry, which is less often discussed yet quite distinct from it. Symmetry denotes “right proportion”, notably in architecture. Symmetry thus presupposes at least two elements that can be compared. Asymmetry is the deliberate absence of symmetry, while dissymmetry is a defect of symmetry – usually accidental, though sometimes deliberate. On this view, asymmetry is more categorical than dissymmetry, because “right proportion” is absent and cannot be restored.
At the strategic level, symmetry is understood as combat on equal terms; dissymmetry is the pursuit by one combatant of qualitative and/or quantitative superiority (a “strong against weak” strategy); and asymmetry is the opposite approach, which consists of exploiting all of an adversary’s weaknesses in order to do as much damage as possible.
Building on the observation of a capability imbalance, asymmetry is therefore a strategy of the weak against the strong, one that refuses the rules of combat imposed by the adversary and bypasses its strengths, making all operations wholly unpredictable.
This entails the use of forces not intended for that purpose and, above all, unsuspected ones (such as civilians); of weapons against which defences are not always adapted (most recently, drones); of methods outside the framework of conventional warfare (guerrilla tactics, terrorism); of unpredictable battlegrounds (city centres, public places); and of the element of surprise – this last feature being arguably the most important, since it narrows the imbalance between the belligerents.
Relying on technically simple means, asymmetry can thus be likened to the “weapon of the poor”, insofar as it allows many actors with only very limited resources to exert a wholly disproportionate capacity for harm.
Powerful actors may also deliberately opt for an asymmetric strategy, even conflating the concept with “military genius”, as if the latter ultimately meant triumphing beyond all expectations, the forces committed being very small in comparison with the results obtained. Since it can be favoured by the weak and the strong alike, asymmetric warfare is thus a ruse deployed at varying scales.
As an alternative, by default or by choice, to so-called traditional frontal confrontation, and as a response to the pursuit of dissymmetry by the powerful, asymmetric warfare is becoming widespread. Given the improbability of wars between the great powers, and their near-systematic involvement in confrontations between weaker actors, the question of whether all contemporary conflicts are by nature asymmetric wars at the very least deserves to be asked.
The strategy of the weak against the strong
History offers many examples of the notion of asymmetric warfare put into practice, at both the strategic and tactical levels.
On every continent, numerous cases allow us to observe at full scale the results obtained by choosing asymmetry in armed conflict. An important detail: asymmetric means have been used by states and by non-state groups alike, whatever their size. One thing is certain: asymmetry is nothing new.
Empires could not escape it – the barbarians who sacked Rome and the rebels of several eras of Chinese history had means clearly inferior to those of their adversaries – and some great battles even gave the weak the chance to defeat the strong where the balance of forces seemingly left them no chance at all. The crushing English victory over the French chivalry at Agincourt in 1415 is perhaps the most telling example, but it is not an isolated one.
Repeatedly and in very different theatres, the same equation appears: where empires and the richest and most powerful kingdoms sought to exploit their superiority to impose themselves durably, their adversaries developed, by default, stratagems enabling them to bypass the instruments of that power. Thus, throughout history, asymmetric wars took shape, the adversaries in the end rarely being on the same level.
Only the battles of the 19th century, inaugurated during the Napoleonic campaigns and organised along lines defined by Carl von Clausewitz (introducing the concept of crushing victory, in contrast to the “lace wars” of the 17th and 18th centuries), and even more so the first world war, offered genuinely symmetric wars, in which the belligerents were of almost equal strength and owed their victory only to particular circumstances and/or the tactical genius of their generals. At times, these battles dragged on, neither combatant being able to gain the upper hand, the tactics and means employed being roughly the same on both sides.
The capability imbalances inherited from the industrial revolution, the wars of colonisation and decolonisation marked the return of strong-against-weak opposition (dissymmetric warfare), and the weaker side’s use of circumvention strategies in response, with sometimes surprising results (asymmetric warfare).
The wars in Algeria and Vietnam, resistance to the Soviet occupation of Afghanistan, the operation in Somalia, the war in Chechnya, and the Kosovo campaign – or, more precisely, the camouflage and decoy tactics observed on the ground among Serbian forces – are more recent examples of asymmetric warfare. This way of waging war ran counter to the rules of chivalry in the Middle Ages, to the social conventions of the early modern centuries, and to a certain idea of ethics and the law of war in more recent periods. In short, asymmetric warfare was long demonised and equated in the west with practices unworthy of states.
De la Bible aux Mongols, en passant par Sun Tzu
Dans la tradition occidentale, l’origine mythologique de l’asymétrie est cependant plus glorifiante, et peut être attribuée à l’épisode biblique du jeune David, triomphant du Philistin Goliath aux abords de Jérusalem. Face à un géant, disposant par ailleurs d’armes puissantes, le jeune berger s’est servi de son génie pour éviter le combat, utilisant une simple fronde et frappant mortellement son adversaire à la tête. Le rapport de force était totalement déséquilibré, et c’est pourtant le plus faible qui a triomphé. La Bible mentionne que, « ainsi, avec une fronde et une pierre, David fut plus fort que le Philistin ; il le terrassa et lui ôta la vie, sans avoir d’épée à la main ».
Ce combat symbolise la victoire de la bravoure face aux moyens, et de l’intelligence face à la force physique. Dès lors, les fidèles comprennent que, si la cause qu’ils défendent est juste, peu importe les moyens dont ils disposent, ils pourront parvenir à leurs fins pour vaincre leurs adversaires. Pour devenir roi, plus besoin d’être puissant, du moins au vu des critères traditionnels. Seuls comptent le génie et l’aptitude à vaincre n’importe quel type d’adversaire. L’asymétrie est ainsi perçue comme un moyen de récompenser les mérites quand la force brute ne le permet pas, mais elle n’est pas considérée comme un choix stratégique.
Tandis que l’asymétrie correspondait, dans la civilisation occidentale, à une intervention divine offrant la ruse au jeune David, s’est développée en Asie orientale une véritable pensée stratégique proposant l’asymétrie comme moyen de guerre. Au VIᵉ siècle avant notre ère, une époque où la Chine traversait la période chaotique dite des « royaumes combattants », Sun Tzu s’est penché sur les meilleurs stratagèmes permettant de limiter ses propres dégâts, tout en multipliant ceux de l’adversaire, même si celui-ci est plus fort. Sa pensée – dont l’objectif est de faire croire à l’adversaire qu’il maîtrise la situation de manière à pouvoir le duper plus facilement – s’est répandue en Asie orientale, puis progressivement dans le reste du monde.
Après l’œuvre de Sun Tzu, de nombreux autres théoriciens chinois se lancèrent dans la rédaction d’études sur la guerre. Shang Yang, contemporain de Sun Tzu, et sa guerre défensive, ou Sima Qian (fin du IIᵉ siècle avant notre ère) et ses biographies des généraux marquèrent ainsi l’histoire de la guerre dans la civilisation chinoise, avec la nécessité de miser sur les stratégies de contournement quand les conditions de la victoire ne sont pas remplies.
For Chinese theorists of war, while victory remains the ultimate objective, as in the West, the means of achieving it are many, resting in particular on patience and a rigorous analysis of the adversary's strengths and weaknesses. Hence even the weak stand a chance against the strong, provided they know how to refuse battle when it is lost in advance, and how to strike at the right time and in the right place.
Sun Tzu was also one of the first strategists to reflect on "what must be planned before battle," making preparation and intelligence among the keys to victory. In his view, a general must know five things before engaging in battle: 1) whether he can fight, and when he must stop; 2) whether to commit few forces or many; 3) how to show gratitude to common soldiers as much as to officers; 4) how to take advantage of every circumstance; 5) that the sovereign approves all that is done in his service and for his glory.
These recommendations resonate particularly with asymmetric actors, who understand that they absolutely must be met, first for the sake of survival and then, where possible, in order to win the fight.
Early Islam was likewise characterized by the indirect strategy of a people with rudimentary means who nonetheless quickly managed to defeat their adversaries and extend their influence. The Mongols, facing a Chinese empire infinitely more populous and considerably more advanced, but also the Ottomans and the peoples of Africa, notably against Western conquerors, also developed asymmetric strategies with spectacular results.
From guerrilla warfare to contemporary conflicts
It is with guerrilla warfare and the theories associated with it, and more recently with the transnational terrorism that the great powers have suffered, that asymmetric warfare gradually came back into vogue in the West.
From Spain under Napoleonic domination to Che Guevara, by way of Lawrence of Arabia and Mao Zedong, the means of war proposed within the framework of guerrilla warfare are entirely asymmetric. It is by infiltrating enemy territory itself that guerrillas achieve success, not by frontally attacking armed forces superior in numbers and equipment.
Guerrilla warfare immediately established itself as the weapon of the weak, even of the militarily untrained, against the professional soldier, well armed, well trained and led by an educated general. Guerrilla warfare was also, and above all, theorized, drawing on the experiences and testaments of these actors. The most famous of these "new testaments" of asymmetry is unquestionably Guerrilla Warfare, written by Che Guevara in 1959, which argues that a popular army can defeat a regular army, whatever the means available to the "freedom fighters." Guevara believed it was not necessary to rely on a broad base: a small foco, a small group of idealists in arms established far from the cities, could rally all the discontented and the revolutionaries to its cause.
Can terrorism, for its part, be considered a manifestation of asymmetric warfare? Unquestionably, the invisibility and unpredictability of terrorist attacks are asymmetric, since they are characterized by the modest means deployed. Transnational terrorism thus appears as the ultimate degree of asymmetric warfare, because it embeds itself within the very societies it fights, which makes it all the more difficult to detect and prevent.
Transnational terrorism, and the risk it poses to security in contemporary societies, lay at the origin of the renewed interest in asymmetric wars, and it is no surprise that, after 2001, this concept made a dramatic entrance into the thinking of military staffs, to the point of inspiring strategic innovations such as the counter-insurgency deployed in Iraq, with mixed results that nonetheless confirm the strong party's need to adapt to the practices of the weak.
A type of war that is not about to disappear
The end of bipolarity opened the field to forms of conflict that had remained relatively quiet throughout the twentieth century, especially in its second half, pitting against each other adversaries with limited means, whether weak states or non-state actors, and thereby consecrating what some analysts have called an "overturning of the world." This resurgence of violence pushed Washington, the only superpower to survive the Cold War, to question the threats that the United States (and by extension the "world" as a whole) might henceforth face.
In a context marked by an increasingly pronounced challenge to American power, in both its political-diplomatic and military dimensions (the experiences of Iraq and Afghanistan have reinforced this phenomenon), asymmetric warfare seems to have a bright future ahead of it, and it weighs heavily on contemporary conflicts. The operation carried out by Israel and the United States in Iran confirms this.
Barthélémy Courmont does not work for, consult, own shares in or receive funding from any organization that would benefit from this article, and has disclosed no affiliations other than his research institution.