How does the Louvre case call museum security into question?

Source: The Conversation – France (in French) – By Fabrice Lollia, PhD in Information and Communication Sciences, associate researcher at the DICEN Île-de-France laboratory, Université Gustave Eiffel

Museums are torn between protecting heritage and opening up to the public, objectives that are difficult to reconcile. Wilfredor/Wikimédia Commons

The 2025 Louvre burglary was not merely spectacular. It is also a reminder that, despite the growing sophistication of digital systems, museum security remains above all a human affair. So how can works of art be secured while remaining accessible to as many people as possible?


In October 2025, the Louvre fell victim to a spectacular burglary. Under cover of night, the thieves entered the museum via a simple freight lift, defeating a highly technological security system, and made off with the equivalent of 88 million euros' worth of jewellery.

This contrast illustrates a contemporary paradox: as security becomes technologically stronger, its vulnerabilities become increasingly human and organisational. The Louvre is merely the symbol of a broader issue: how can culture be protected without altering its essence or its accessibility?

Museums, overlooked players in global security

The Louvre burglary merely exposed a deeper problem. A preliminary 2025 report by the Cour des comptes, the French court of audit, points to a worrying lag in securing the museum: 60% of the rooms in the Sully wing and 75% of those in the Richelieu wing are not covered by video surveillance. Moreover, over fifteen years the Louvre has lost more than 200 security posts, even though attendance has risen by half. The budget devoted to security, barely 2 million euros out of the 17 million earmarked for maintenance, reflects a structural erosion of human resources.

According to the guidelines of the International Council of Museums, museum security rests on three pillars. First, prevention, which relies in particular on access control, crowd management and risk assessment. Second, protection, implemented through video surveillance, intrusion detection and emergency protocols. Finally, preservation, which aims to ensure continuity of operations and the safeguarding of collections in the event of a crisis.

In practice, however, these principles run up against budgetary constraints and modern museum architecture, designed as open, transparent and highly accessible spaces that are structurally difficult to secure.

French museums have already experienced several spectacular burglaries. In 2010, five master paintings (by Picasso, Matisse, Modigliani, Braque and Léger) were stolen from the Musée d'Art moderne de la Ville de Paris. In 2024, the Musée Cognacq-Jay was the target of an exceptionally violent daylight robbery, with the haul estimated at one million euros. These cases are a reminder that museums, far from being fortresses, are inherently vulnerable spaces, caught between accessibility, visibility and protection. The Louvre embodies a broader organisational crisis in which security struggles to keep pace with the evolution of contemporary risk.

The museum, a new link in the security system

For a long time, museum security was conceived vertically, centred on a few managers and strict protocols. But this hierarchical model no longer matches the complexity of today's threats.

Museum security now relies on a horizontal circulation of information, shared among all actors and mobilising curators, guards, educators and visitors in a collective vigilance. This takes the form of a museum where everyone has a clear role in prevention, where information circulates quickly, where teams cooperate, and where security rests as much on people as on technology.

The risks, meanwhile, extend far beyond national borders: theft of works destined for the black market, cyberattacks paralysing heritage databases and, to a lesser extent, climate activism targeting cultural symbols. Protecting heritage is thus becoming a global issue involving states, companies and cultural institutions.

In the United Kingdom, museums are now integrated into counterterrorism policies, illustrating a process of securitisation of the sector. In Sweden, research shows that a shortage of resources for museum protection leads to a loss of effectiveness, insofar as the posture adopted is more defensive than proactive.

Protecting heritage, a way of making society

But this logic of suspicion transforms the very nature of the museum. From a space of freedom and transmission, it tends to become a place of control and traceability. Yet in a world beset by crises, the museum's role is expanding. It is no longer simply a matter of conserving works, but of preserving the memory and cohesion of societies.

As Marie Elisabeth Christensen, a researcher specialising in heritage protection in crisis contexts and the securitisation of cultural heritage, points out, heritage protection falls within the field of human security. Her work shows how, in conflict zones such as Palmyra in Syria, safeguarding a site or a work becomes an act of collective resilience: a way for a community struck by violence and rupture to preserve its bearings, maintain symbolic continuity and rebuild social ties, thereby contributing to the stabilisation of societies.

This transformation, however, remains deeply unequal. The major European and American museums have the resources and visibility needed to take on this role, while in the Global South many institutions remain fragmented, marginalised and hampered by a lack of international coordination. This disparity reveals a heritage governance that is still unfinished, dependent on political agendas rather than on a global strategy of cultural solidarity.

Heritage protection should be fully integrated into international humanitarian policies, on a par with health or education. For protecting a work of art also means protecting a society's memory, values and future.

The trap of technosolutionism

Faced with the threats to cultural venues, the temptation to respond with technological escalation is strong. After every incident, the same conclusion is drawn: there should have been more cameras, sensors or surveillance tools. Facial recognition, behavioural analysis, biometrics… so many devices routinely presented as self-evident answers. The systems multiply, feeding the idea that risk could be entirely mastered by calculation.

This reflex, known as technosolutionism, nonetheless rests on an illusion: that of a technology capable of neutralising uncertainty. Yet, as work in the social sciences has shown, technology does not merely make things "work better": it changes how people trust one another, how power is exercised and how responsibilities are distributed. In other words, even with the most sophisticated tools, risk remains profoundly human. Museum security is therefore above all a social system of coordination, human skills and trust, far more than a mere stack of technologies.

The UN Special Rapporteur in the field of cultural rights has already warned on this point: seeking to protect works at all costs can end up undermining cultural freedom itself. Heritage security cannot be limited to objects. It must encompass the people, uses and cultural practices that give them meaning.

Protecting without confining

Against technological fascination, an approach of complementarity is needed. Tools can help, but they replace neither human attention nor human judgement. The camera detects, but it is the trained eye that interprets and qualifies the threat. Museum security officers today are mediators of trust. They embody a discreet but essential presence that connects the public to the institution. In a world saturated with devices, it is this human dimension that guarantees coherence between security and culture.

The Norwegian researcher Siv Rebekka Runhovde, writing about the thefts of works by the painter Edvard Munch, highlights the permanent dilemma between accessibility and security. Too much openness endangers heritage, but too much closure stifles culture. Over-securitisation degrades the quality of the visitor experience and the public's trust. The most effective security is that which protects without confining, making the encounter between works and viewers possible.

Museum security is not only a set of devices; it is also an act of communication. It expresses how a society chooses to manage and protect what it deems essential, and to negotiate the boundaries between freedom and control. Protecting culture is not just about preventing theft. It is also about defending the possibility of human encounter in the digital age.

The Conversation

Fabrice Lollia does not work for, consult for, own shares in or receive funding from any organisation that would benefit from this article, and has disclosed no affiliations other than his research institution.

ref. En quoi le cas du Louvre questionne-t-il la sécurité des musées ? – https://theconversation.com/en-quoi-le-cas-du-louvre-questionne-t-il-la-securite-des-musees-272835

Are we in an AI bubble? Ponzi schemes and financial bubbles: lessons from history

Source: The Conversation – France – By Paul David Richard Griffiths, Professor of Finance (Banking, Fintech, Corporate Governance, Intangible Assets), EM Normandie

Charles Ponzi (March 3, 1882 – January 18, 1949) was an Italian-born businessman who became known as a swindler for his money scheme. Wikimedia Commons

Many investors are asking themselves whether we are living in an AI bubble; others have moved past that question and are simply asking how long it can last. Yet the bubble keeps growing, fuelled by the perilous sentiment of "fear of missing out". History and recent experience show that financial bubbles are often created by investor overenthusiasm for new "world-changing" technologies, and that when they burst, they reveal surreal fraud schemes that developed under the cover of the bubble.

A Ponzi scheme pays existing investors with money from new investors rather than actual profits, requiring continuous recruitment until it inevitably collapses. A characteristic of these schemes is that they are hard to detect before the bubble bursts, but amazingly simple to understand in retrospect.
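The mechanics described above can be made concrete with a toy cash-flow model (all figures are hypothetical and drawn from neither case): the scheme survives as long as new deposits cover the promised payouts, and collapses as soon as inflows slow.

```python
# Illustrative only: a toy cash-flow model of a Ponzi scheme.
# The promised 10% per period and the inflow pattern are hypothetical.

def simulate_ponzi(promised_rate, inflows):
    """Return the period at which the scheme runs out of cash, or None."""
    cash = 0.0         # money actually held by the operator
    liabilities = 0.0  # what investors believe their accounts are worth
    for period, new_money in enumerate(inflows, start=1):
        cash += new_money
        liabilities += new_money
        liabilities *= 1 + promised_rate      # fictitious "profits" credited
        payout = liabilities * promised_rate  # dividends paid in real cash
        if payout > cash:
            return period                     # cannot meet payouts: collapse
        cash -= payout
    return None

# Inflows grow while the frenzy lasts, then dry up.
inflows = [100, 150, 225, 340, 100, 30, 10, 0, 0, 0]
print(simulate_ponzi(0.10, inflows))  # collapses once inflows stop (period 10)
```

Note that the collapse is driven entirely by recruitment: the fictitious account balances compound regardless, so the gap between liabilities and cash can only widen.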

In this article we address the question: what footprints do Ponzi schemes leave in technology-driven financial bubbles that might help us anticipate the next one to emerge under cover of the AI frenzy? We do this by comparing the "Railway King" George Hudson's Ponzi of the 1840s with Bernie Madoff's Ponzi, enabled by the ICT (information and communications technology) and dotcom bubble of the 1990s-2000s and sustained by the subsequent US housing bubble.

Macroeconomic climate, regulations and investor expectations

The railway mania in the UK started in 1829 as a result of investors' expectations for the growth of this new technology and the lack of alternative investment vehicles caused by the government's halting of bond issuance. The promise of railway technology created an influx of railway companies, illustrated by the registration of over fifty in just the first four months of 1845. Cost projections for railway development were understated by over 50 percent, and revenues were projected at between £2,000 and £3,000 per mile, despite actual revenues closer to £1,000 to £1,500 per mile. Accounting standards were rudimentary, creating opportunities for reporting discretion such as delaying expense recognition, and director accountability rested with shareholders rather than being delegated to external auditors or state representatives. Hudson, who was also a member of parliament, promoted the deregulation of the railway sector.
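A back-of-the-envelope calculation on the per-mile figures above shows the scale of the distortion (using range midpoints, a simplification of my own):

```python
# Back-of-the-envelope: how inflated were the railway prospectus numbers?
# Midpoints of the ranges quoted above; combining the two factors is my own inference.

projected_revenue = (2000 + 3000) / 2  # £ per mile, as promised
actual_revenue = (1000 + 1500) / 2     # £ per mile, as realised
revenue_factor = projected_revenue / actual_revenue  # revenues overstated ~2x

cost_understatement = 0.50             # costs understated by over 50 percent
cost_factor = 1 / (1 - cost_understatement)  # true costs at least ~2x projections

print(revenue_factor, cost_factor)
```

Taken together, roughly double the promised revenue on roughly half the admitted cost means prospectus profitability could have been overstated by a factor of four or more.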

George Hudson’s Ponzi and Bernie Madoff’s Ponzi

Madoff's reputation was built on his success in the 1970s with computerization and technological innovation in trading. The dotcom bubble was fuelled by the rapid expansion of technology companies, with over 1,900 ICT companies listing on US exchanges between 1996 and 2000; on the back of this momentum, his BLMIS fund held $300 million in assets by the year 2000. Madoff's scheme also rode the rapid growth of derivatives such as credit default swaps (CDS) and collateralized debt obligations (CDO), which grew 452 percent from 2001 to 2007.

Significant market-wide volatility created a norm of outsized returns that hid the infeasibility of Madoff's promised returns. Investors considered these returns moderate and failed to detect how implausible their long-term consistency was, which allowed the scheme to continue undetected.

Madoff's operations were further facilitated by the regulatory environment. Before the Dodd-Frank Act of 2010, SEC registration for hedge funds was voluntary; and the re-prioritization of government security resources after 9/11 led to a reduction of more than 25 percent in white-collar crime investigations opened between 2000 and 2003. The SEC overlooked the infeasibility of Madoff's returns even though whistleblower reports instigated an SEC investigation, reflecting the SEC's and other regulatory bodies' lack of hedge-fund trading knowledge. It may also have been influenced by Madoff's close relationship with the regulatory agencies, given his previous roles as chairman of Nasdaq and an SEC market structure adviser.

At the time of the railway bubble-bust, Bank of England interest rates were at a near-century low, and similarly the FED’s lowering of interest rates in the 2000s reduced the cost of mortgages, which boosted demand and thus helped inflate housing prices. In both cases the markets were flush with cheap money and when everyone is making money (or thinking so), uncomfortable questions are not asked.

The perpetrators’ style and their downfall

Both Hudson and Madoff provided scarce information about their operations to fellow directors and shareholders. The former notoriously raised £2.5 million in funds without providing investment plans. Madoff employed and overcompensated under-skilled workers to deter operational questions, and avoided hosting "capital introduction" meetings and roadshows so as not to face questions from well-informed investment professionals; he instead found new investors through philanthropic relationships and network ties. There is evidence that shareholders were partly aware of Hudson's corrupt business conduct, but they did not initially object.

When their respective bubbles burst, their obscure business methods were unveiled in both cases, and it became evident that, in typical Ponzi style, they were using fresh capital, not investment profits, to pay dividends to investors. It was also revealed that they were using investor funds to finance their luxurious lifestyles. Hudson embezzled an estimated £750,000 (approximately £74 million in today's money) from his railway companies, while Madoff's fraud reached $65 billion in claimed losses, with actual investor losses of around $18 billion. Both ended in disgrace, Hudson fleeing to France and Madoff dying in jail.

On the trail of the fox

Beware when you see AI companies of ever-increasing market value headed by charismatic and well-connected leaders; it is worrying how close the heads of the AI giants are to the White House. In those cases, it is imperative to analyse the quality of communications with shareholders and prospective investors, particularly on the allocation of capital and the disclosure of detailed cash flows. It is not enough to rely on audited financial statements; the analysis must go much deeper into the investment strategy. Obviously, this will require auditors to up their game considerably.

When investors are in a frenzy,
Around the corner waits a Ponzi.


Geneva Walman-Randall contributed to this article as a research assistant, drawing on her research into the conditions surrounding the Bernie Madoff and George Hudson Ponzi schemes, completed as a visiting student at St. Catherine's College, Oxford.

The Conversation

Paul David Richard Griffiths does not work for, consult for, own shares in or receive funding from any organisation that would benefit from this article, and has disclosed no affiliations other than his research institution.

ref. Are we in an AI bubble? Ponzi schemes and financial bubbles: lessons from history – https://theconversation.com/are-we-in-an-ai-bubble-ponzi-schemes-and-financial-bubbles-lessons-from-history-272188

No, your brain does not suddenly mature at 25: what neuroscience really shows

Source: The Conversation – in French – By Taylor Snowden, Post-Doctoral Fellow, Neuroscience, Université de Montréal

If you scroll through TikTok or Instagram, you will inevitably come across the line: "Your frontal lobe isn't fully developed yet." It has become neuroscience's ready-made excuse for our bad decisions, like ordering one drink too many at the bar or texting an ex after promising yourself you wouldn't.

The frontal lobe plays a central role in higher functions such as planning, decision-making and judgement.

It is reassuring to think that a biological reason sometimes explains our impulsiveness or our hesitations. Life in your twenties is unpredictable, and the idea that your brain simply hasn't finished developing can be unexpectedly comforting.

But the idea that the brain, and the frontal lobe in particular, stops developing at 25 is a widespread misconception in psychology and neuroscience. Like many myths, the "age 25" idea is rooted in real scientific findings, but it drastically oversimplifies a much longer and more complex process.

New research suggests that this development actually continues into the thirties. This new understanding changes how we view adulthood and suggests that 25 was never really the finish line.


25-35: Your Challenges is a series produced by La Conversation/The Conversation.

Everyone experiences their twenties and thirties in their own way. Some are saving for a mortgage while others are scrambling to pay the rent. Some spend all their time on dating apps while others are trying to figure out how to raise a child. Our series on 25-to-35-year-olds tackles your everyday challenges and concerns.

Where does the "age 25" myth come from?

This magic number comes from brain-imaging studies conducted in the late 1990s and early 2000s. In a 1999 study, researchers tracked brain changes in children and adolescents using repeated scans. They analysed grey matter, which is made up of cell bodies and can be thought of as the "thinking" part of the brain.

The researchers found that during adolescence, grey matter undergoes a process called "pruning". Early in life, the brain forms a very large number of neural connections; with age, it gradually eliminates those that are used less, thereby strengthening the most-used ones.

This early work highlighted the central role of changes in grey-matter volume in brain development.

In influential follow-up work led by neuroscientist Nitin Gogtay, participants underwent a brain scan every two years, starting at age four. The researchers showed that, within the frontal lobe, regions mature from back to front.

The most primitive regions, notably those responsible for voluntary muscle movements, develop first, while more advanced regions, involved in decision-making, emotional regulation and social behaviour, were still not fully mature at the final scans, carried out around age 20.

Because the data stopped at that age, the researchers could not pinpoint when development ends. The age of 25 then took hold as a rough estimate of that supposed endpoint, before becoming firmly lodged in the cultural consciousness.




Read more:
How to survive early adulthood? Here are some coping strategies


What more recent research shows

Since those early studies, neuroscience has advanced considerably. Rather than examining each region in isolation, researchers now study how efficiently the brain's different parts communicate with one another.

A major recent study assessed the efficiency of brain networks, essentially how the brain's different areas are interconnected, through the topology of white matter. White matter is made up of long nerve fibres that link different parts of the brain and spinal cord, allowing electrical signals to travel in both directions.

The researchers analysed scans from more than 4,200 people, from early childhood to age 90, and identified several key developmental periods, including one between ages 9 and 32, which they dubbed the "adolescent period".

For anyone who has reached adulthood, it may be unsettling to learn that their brain is still "adolescent", but the term simply means that your brain is undergoing significant changes.

According to this study, during brain adolescence the brain balances two key processes: segregation, which groups related thoughts into "neighbourhoods", and integration, which builds "highways" to connect those neighbourhoods. According to the researchers, this complex architecture does not settle into an "adult" pattern until the early thirties.

The study also found that a measure of network efficiency called "small-worldness" was the best indicator of brain age in this group. Think of it as a public transit system: some routes require stops and transfers. Increasing small-worldness is like adding express lines, allowing more complex thoughts to travel through the brain more efficiently.
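The "express line" intuition behind small-worldness can be sketched with a toy Watts-Strogatz-style network (my own illustrative construction, not the study's actual method): a ring network has tightly clustered "neighbourhoods" but long routes between distant nodes, and rewiring a few connections into random shortcuts slashes the average path length while clustering stays high.

```python
# Toy small-world demo: a clustered ring lattice plus a few random shortcuts.
import random
from collections import deque

def ring_lattice(n, k):
    """Each node links to its k/2 nearest neighbours on each side."""
    adj = {i: set() for i in range(n)}
    for i in range(n):
        for d in range(1, k // 2 + 1):
            j = (i + d) % n
            adj[i].add(j); adj[j].add(i)
    return adj

def rewire(adj, p, rng):
    """Watts-Strogatz-style rewiring: each edge becomes a random shortcut with probability p."""
    n = len(adj)
    for i, j in [(i, j) for i in adj for j in adj[i] if i < j]:
        if rng.random() < p:
            choices = [m for m in range(n) if m != i and m not in adj[i]]
            if choices:
                m = rng.choice(choices)
                adj[i].discard(j); adj[j].discard(i)
                adj[i].add(m); adj[m].add(i)
    return adj

def avg_path_length(adj):
    """Mean shortest-path length over all node pairs, via BFS from each node."""
    n = len(adj); total = 0
    for s in adj:
        dist = {s: 0}; q = deque([s])
        while q:
            u = q.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1; q.append(v)
        total += sum(dist.values())
    return total / (n * (n - 1))

def clustering(adj):
    """Average fraction of each node's neighbours that are themselves linked."""
    cs = []
    for i, nbrs in adj.items():
        k = len(nbrs)
        if k < 2:
            cs.append(0.0); continue
        links = sum(1 for u in nbrs for v in nbrs if u < v and v in adj[u])
        cs.append(2 * links / (k * (k - 1)))
    return sum(cs) / len(cs)

lattice = ring_lattice(60, 4)
small_world = rewire(ring_lattice(60, 4), 0.1, random.Random(1))
print(clustering(lattice), avg_path_length(lattice))          # clustered, but long routes
print(clustering(small_world), avg_path_length(small_world))  # shortcuts typically cut path length
```

The few rewired "express lines" are what a small-world measure rewards: they shorten routes across the whole network without dismantling local neighbourhoods.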

This construction does not last forever, however. After around age 32 there is a real turning point, when these developmental trends change direction. The brain stops prioritising those "express lines" and returns to segregation, locking in the pathways it uses most.

In other words, adolescence and the twenties are about wiring the brain, while the thirties are about stabilising and maintaining the most-used pathways.




Read more:
One part of the human brain grows with age – here is what that means for you


Making the most of a brain under construction

If our brains are still under construction throughout our twenties, how can we make sure we are building the best possible structure? One answer lies in strengthening neuroplasticity, that is, the brain's capacity to rewire itself.


While the brain remains malleable throughout life, the period from 9 to 32 is a unique window for structural growth. Research suggests there are many ways to promote neuroplasticity.

High-intensity aerobic exercise, learning new languages and cognitively demanding hobbies such as chess can strengthen your brain's neuroplastic capacity, while factors such as chronic stress can hinder it. If you want a high-performing brain at 30, it helps to challenge it at 20, but it is never too late to start.

There is no magic switch that flips at 25, or even at 32. Like your brain, you are part of a construction project spanning decades. Don't sit around waiting for the moment you become an adult: make active choices to support the project. Make mistakes, but remember that the concrete hasn't quite set yet.

La Conversation Canada

Taylor Snowden does not work for, consult for, own shares in or receive funding from any organisation that would benefit from this article, and has disclosed no affiliations other than their research institution.

ref. Non, votre cerveau n’atteint pas soudainement sa maturité à 25 ans : ce que révèlent vraiment les neurosciences – https://theconversation.com/non-votre-cerveau-natteint-pas-soudainement-sa-maturite-a-25-ans-ce-que-revelent-vraiment-les-neurosciences-272591

Blue Monday is a myth but winter blues are real — how to cope in the cold months

Source: The Conversation – Canada – By Joanna Pozzulo, Chancellor’s Professor, Psychology, Carleton University

In 2005, psychologist Cliff Arnall coined the term "Blue Monday" as part of a marketing campaign for a British travel agency to encourage people to book a holiday during the winter. Using a pseudo-scientific formula, he determined the third Monday in January to be the "bluest" day of the year, marked by sadness, low energy and withdrawal from social interaction.

Although Blue Monday has been debunked, the feelings associated with a colder, darker season are real.

Seasonal affective disorder (SAD) is a recognized form of depression connected to seasonal variation, with symptoms such as fatigue, irritability, appetite changes, loss of interest in pleasurable activities and feelings of hopelessness. According to the Canadian Psychological Association, approximately 15 per cent of Canadians report at least some symptoms of SAD.

It’s believed that the disorder may be connected to decreased exposure to sunlight, which in turn disrupts people’s circadian rhythms — the internal clock that co-ordinates our biological processes such as sleep and hormone production.

We can’t dictate when the sun shines, but there are several evidence-based strategies to support “wintering well.” For example, creating a cozy reading nook equipped with a warm blanket, hot chocolate and a good book provides a dedicated space for self-care that promotes relaxation. It also helps with mindfulness, which involves focusing your attention on the present and accepting your thoughts and feelings without judgment.

Why mindset and expectations matter

According to Kari Leibowitz, psychologist and author of How to Winter: Harness Your Mindset to Thrive on Cold, Dark Days, the key to better wintering is reframing — changing one’s perspective to find a more positive, constructive or empowering interpretation of the situation.

Cultures that thrive in winter anticipate it, considering it meaningful. Reframing the season as something to look forward to can raise morale.

Try replacing negative language that frames winter as something to be dreaded or endured with more appreciative language. For example, winter can provide an opportunity to rest and recharge. Adopting a positive mindset may improve overall well-being.

The benefits of winter outdoor activity

Spending time outdoors can lift the spirit and boost energy. And although winter has fewer hours of daylight, it is important to take advantage of them. Spend some time outside in the late morning and early afternoon, when natural light tends to peak.

Winter weather, however, can make outdoor activity unappealing. Cold and icy conditions can even be hazardous to health. Cold weather can increase the risk of cardiovascular events by constricting blood vessels and raising blood pressure.

To spend time outdoors safely, invest in appropriate clothing suited to the temperature. On colder days, engage in light activity such as walking and keep outdoor stints short (about 15 minutes).

What hygge can teach about slowing down

Hygge is a Danish and Norwegian word dating back to the 1800s used to denote the concept of enjoying a slower-paced life while connecting with people you care about.

Hygge is often associated with creating a pleasant environment, such as lighting candles or staying warm by a fire, to foster positivity.

When indoors, sit near windows to work or read. Consider increasing indoor lighting brightness: use light bulbs rated as "daylight," and think about adding lamps to supplement overhead lighting. This can increase serotonin to improve mood and help regulate circadian rhythms, which in turn can support better sleep quality, energy and focus.

Hygge-type activities, like knitting, colouring and playing board games, can support overall well-being. Enjoying simple meals with others or spending quiet time alone in nature are also ways to embrace the season.




Read more:
4 research-backed ways to beat the winter blues in the colder months


Listening to seasonal changes and self-care

Winter is a natural time to slow down, rest and restore, as evidenced by bears hibernating and bumblebees going underground to survive. Use this time to prepare for a more active upcoming season.

To take advantage of the slower pace of the season, reduce over-scheduling when possible. Adjust sleep routines to suit individual needs. Enjoy quieter evenings and earlier bedtimes. Accept that lower energy levels are normal in winter and that the season offers an opportunity to do less without guilt.

Spending more time indoors during the winter provides an opportunity to reconnect with hobbies and activities that have brought you joy in the past. For example, doing puzzles can provide a break from screens, which can decrease stress. Reading a good book can also provide a mental escape, allowing people to disconnect from worries. Creative activities such as baking can encourage a sense of purpose.

Choosing activities that are enjoyable and meaningful offers the greatest benefits for overall well-being. For more evidence-based strategies and book recommendations, join my Reading for Well-Being Community Book Club.

The Conversation

Joanna Pozzulo receives funding from Social Sciences and Humanities Research Council.

ref. Blue Monday is a myth but winter blues are real — how to cope in the cold months – https://theconversation.com/blue-monday-is-a-myth-but-winter-blues-are-real-how-to-cope-in-the-cold-months-272882


Winter weather, however, can make outdoor activity unappealing. Cold and icy conditions can even be hazardous to health. Cold weather can increase the risk of cardiovascular events by constricting blood vessels and raising blood pressure.

To spend time outdoors safely, invest in appropriate clothing suited to the temperature. On colder days, engage in light activity such as walking and keep outdoor stints short (about 15 minutes).

What hygge can teach about slowing down

Hygge is a Danish and Norwegian word, dating back to the 1800s, that denotes enjoying a slower-paced life while connecting with people you care about.

Hygge is often associated with creating a pleasant environment, such as lighting candles or staying warm by a fire, to foster positivity.

When indoors, sit near windows to work or read. Consider increasing indoor lighting brightness. Use light bulbs rated as “daylight,” and think about adding lamps to supplement overhead lighting. This can increase serotonin to improve mood and help regulate circadian rhythms that in turn can support improved sleep quality, energy and focus.

Hygge-type activities, like knitting, colouring and playing board games, can support overall well-being. Enjoying simple meals with others or spending quiet time alone in nature are also ways to embrace the season.




Read more:
4 research-backed ways to beat the winter blues in the colder months


Listening to seasonal changes and self-care

Winter is a natural time to slow down, rest and restore, as evidenced by bears hibernating and bumblebees going underground to survive. Use this time to prepare for a more active upcoming season.

To take advantage of the slower pace of the season, reduce over-scheduling when possible. Adjust sleep routines to suit individual needs. Enjoy quieter evenings and earlier bedtimes. Accept that lower energy levels are normal in winter and that the season offers an opportunity to do less without guilt.

Spending more time indoors during the winter provides an opportunity to reconnect with hobbies and activities that have brought you joy in the past. For example, doing puzzles can provide a break from screens, which can decrease stress. Reading a good book can also provide a mental escape, allowing people to disconnect from worries. Creative activities such as baking can encourage a sense of purpose.

Choosing activities that are enjoyable and meaningful offers the greatest benefits for overall well-being. For more evidence-based strategies and book recommendations, join my Reading for Well-Being Community Book Club.

The Conversation

Joanna Pozzulo receives funding from Social Sciences and Humanities Research Council.

ref. Blue Monday is a myth but winter blues are real — how to cope in the cold months and ‘winter well’ – https://theconversation.com/blue-monday-is-a-myth-but-winter-blues-are-real-how-to-cope-in-the-cold-months-and-winter-well-272882

South Africa’s addressing system is still not in place: a clear vision is needed

Source: The Conversation – Africa (2) – By Sharthi Laldaparsad, PhD Student, University of Pretoria

Informal settlement in South Africa. By Matt-80 – Own work, CC BY 2.0, Wikimedia Commons, CC BY

“Turn right after the first big tree; my house is the one with the yellow door.” In parts of South Africa, where settlements have grown without formal urban planning due to rapid urbanisation, that could well be a person’s “address”.

Having an address has many purposes. Not only does it allow you to find a place or person you want to visit, it’s compulsory in South Africa to provide an address when opening a bank account and registering as a voter in elections. Address locations are used to plan the delivery of services such as electricity or refuse removal and health services at clinics or education at schools. Police and health workers need addresses in emergencies.

Nowadays, address data is integrated and maintained in databases at municipalities, banks and utility providers, and used to analyse targeted interventions and developmental outcomes. Examples would be tracking the spread of communicable diseases, voter registration or service delivery trends.

South Africa has had national address standards since 2009 to make it easier to assign addresses that work in multiple systems, and to share the data. But the standards are not enforced, so the struggle with addressing persists. There is still no authoritative register of addresses in South Africa, and it’s not clear who is responsible for the governance of address data.

We work in geography and geoinformatics, an interdisciplinary field to do with collecting, managing and analysing geographical information. We recently turned to a neglected source to explore the issue of addresses: the people in government and business who actually use the information. We wanted to explore what they said about whose job it is to give everyone an address, how the data is maintained and what’s standing in the way of doing this.

Our research took a qualitative approach. We interviewed stakeholders to get their unique insights and daily experiences about what addresses are used for, how they are used, challenges that are experienced and how these are overcome. We spoke to 21 respondents across different levels of government with in-depth experience of projects, in both urban and rural settlements, as well as private companies that collect, integrate and provide address data and related services.

Our main finding was that there’s no clear vision of future address systems, or leadership on the issue. Without agreement on whether there is a problem, or whose problem it is, a resolution isn’t possible.

Categories of addresses

First we collected all the different purposes of addresses and systematically categorised them. The main categories were:

  • finding an object (for example, for postal deliveries)

  • service delivery (such as electricity)

  • identity (for example, for citizenship)

  • common reference (for example, use in a voter register or in a pandemic).

The broad spectrum of address purposes suggests that addresses are essential to society, governance and the economy in a modern world.

So what’s standing in the way of better address coverage?

Need for governance: The interviews confirmed that stakeholders need clear rules, regulations, processes and structures to guide decisions, allocate resources and ensure accountability about addresses and address data. Most of the respondents considered addresses to be necessary for socio-economic development.




Read more:
‘Walk straight’: how small-town residents navigate without street signs and names


Leadership: These responses suggest that the societal problem of addressing is not (yet) clearly identified and defined. That makes it difficult to determine who should legitimately resolve the problem, for whom and how.

Interviewees raised concerns about leadership and vision at different levels of government affecting the country’s ability to solve the address issue. They agreed that the task had not been assigned to municipalities, which have many other pressing priorities and limited resources. The South African Post Office could play a role. But it has been placed in business rescue.

Adapting to gaps: In this constrained environment, stakeholders resort to short-term “fixes” that don’t have systemic impact. For example, some municipalities assign numbers to dwellings based on aerial photography, or barcodes on dwellings, or only locate the main assembly points in their jurisdiction, to fulfil their own responsibilities. So nothing changes: addresses and address data are incomplete and of poor quality.

Respondents also made suggestions.

Some questioned whether addresses were needed at all. They said there were other ways of finding a house or a business, such as navigating to a coordinate shared via Google Maps, or using verbal directions.

Some suggested that the uncertainty about responsibilities could be an opportunity for the private sector. It is already collecting address information from various sources like municipalities, then standardising, integrating and making available address data and related services, at a cost.

However, as is the case with many other services in the country, rural areas may be left behind where there is no economic incentive. Access to private data becomes unaffordable for government and society at large.

Ending the aimlessness

The deficiencies and adaptations in South Africa suggest that addressing is in a state of aimlessness.

How to fix the problem will require a number of interventions.

Firstly, there need to be decisions, actions and institutional commitments towards long-term strategies that will stop the drift. For example, cities and municipalities should strive for full coverage of addresses. They should also improve the quality and standardisation of the data, so that they are more useful.

Secondly, there’s a need for innovation and investment to transform and strengthen the governance of the country’s addressing infrastructure. For example, the European Commission recommends e-government based on a set of interlinked registers for property, addresses, people, business and vehicles.

Thirdly, data collection platforms and databases should be designed with the understanding that different types of addresses are in use – it could be a street name and number, or an informal description. Different types of addresses should have equal validity or credibility.




Read more:
South Africa needs a national database of addresses: how it could be done


At a more technical level, address metadata (information about the data) should make it possible for different systems to use it.

Addresses connect us to society – locally to our community and globally to the rest of the world. Addresses are essential for socio-economic growth and good governance in cities and municipalities.

The Conversation

The authors do not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and have disclosed no relevant affiliations beyond their academic appointment.

ref. South Africa’s addressing system is still not in place: a clear vision is needed – https://theconversation.com/south-africas-addressing-system-is-still-not-in-place-a-clear-vision-is-needed-268135

Racial profiling by ICE agents mirrors the targeting of Japanese Americans during World War II

Source: The Conversation – USA – By Anna Storti, Assistant Professor of Gender, Sexuality, and Feminist Studies and Asian American Studies, Duke University

A Japanese American family is taken to a relocation center in San Francisco in May 1942. Circa Images/GHI/Universal History Archive/Universal Images Group via Getty Images

The Department of Homeland Security in September 2025 said that 2 million undocumented immigrants had been forced out of the United States since the start of Donald Trump’s second presidency.

Through its use of the Alien Enemies Act, a wartime law from 1798, the Trump administration has bypassed immigration courts and the right to due process to more easily detain and deport immigrants.

The Trump administration has, in part, reached these numbers by arresting immigrants in courthouses and at their workplaces. It has also conducted raids in schools, hospitals and places of worship.

And the Supreme Court in September, in its Noem v. Vasquez Perdomo decision, lifted a federal court order that barred agents with Immigration and Customs Enforcement from racially profiling suspected undocumented immigrants. For now, ICE agents can use race, ethnicity, language and occupation as grounds for stopping and questioning people.

This form of targeting has disproportionately affected Latino communities, which represent 9 in 10 ICE arrests, according to a UCLA study published in October.

Targeting immigrants is a centuries-old American practice. In particular, Asian Americans have drawn parallels between the attacks on Latinos today and the forced relocation and incarceration of Japanese Americans during World War II.

Notably, the War Brides Act, passed just three months after the end of WWII, in December 1945, broke with the nation’s centuries-long practice of exclusionary immigration policy. The act allowed American servicemen to bring their non-American spouses and children to the United States. The measure seemed to inaugurate a new era of inclusive immigration policy.

As a feminist studies scholar and author, I know the War Brides Act forever altered the nation’s racial demographics, increasing both Asian migration to the U.S. and the birth of biracial children.

On the 80th anniversary of the War Brides Act, I’ve also noticed an alarming contradiction: Although America may be more multiracial than ever before, the U.S. immigration system remains as exclusive as it has ever been.

Exclusionary immigration policy

The racial profiling of Latino people by ICE agents today is not unlike what took place during World War II in the U.S.

Following Japan’s attack on Pearl Harbor in 1941, President Franklin D. Roosevelt issued an executive order authorizing the forced removal of anyone deemed to be a national security threat. Anyone, that is, who was Japanese. From 1942 to 1945, the U.S. government incarcerated approximately 120,000 Japanese Americans in internment camps.

To determine who was a national security threat, the government used overt racial profiling. Similar to today, when the U.S. government often misidentifies Latino Americans as noncitizens, a majority of the Japanese people incarcerated in WWII were U.S. citizens.

Amid the Trump administration’s treatment of immigrants, it’s worth recalling the exclusionary origins of U.S. immigration policy.

The first restrictive immigration law in the U.S., the Page Act of 1875, barred Chinese women from entering the country. The law rested on the assumption that all Chinese women were immoral and worked in the sex trade.

A soldier holds a rifle on a city street.
Immigration and Customs Enforcement agents conduct operations in a predominantly Mexican American community in Chicago on Nov. 8, 2025.
Jacek Boczarski/Anadolu via Getty Images

The Page Act laid the groundwork for the Chinese Exclusion Act of 1882, which banned all Chinese immigration into the U.S. for 10 years. This was the first federal law to ban an entire ethnic group, launching an era of legalized and targeted exclusion.

With the passage of the Immigration Act of 1924, the U.S. created its first border control service, which enforced new immigration restrictions. It also implemented a quota system, which banned or limited the number of immigrants from specific regions, including Asia and Southern and Eastern Europe.

The act stemmed from nativism – the policy that protects the interests of native-born residents against those of immigrants – and a desire to preserve American homogeneity.

The 1945 War Brides Act largely diverged from these previous measures, helping to dismantle the Asian exclusion made commonplace in the 19th and early 20th centuries. From 1945 until 1948, when the War Brides Act expired, more than 300,000 people entered the country as nonquota immigrants, people from countries not subject to federal immigration restrictions.

Exclusionary tendencies

Decades later, in 1965, the U.S. formally abolished the quota system. America opened its doors to those who President Lyndon B. Johnson deemed most able to contribute to the nation’s growth, particularly skilled professionals.

The Immigration and Nationality Act of 1965 eliminated racial exclusion. As a result, the U.S. population diversified. Immigrants deepened the multiracialism initiated by the War Brides Act.

This trend increased later in the 1960s when the Supreme Court, in Loving v. Virginia, overturned anti-miscegenation laws, which criminalized marriage between people of different races. The justices ruled that laws banning interracial marriage violated the 14th Amendment.

Multiracialism further increased after the Vietnam War. Subsequent legislation such as the 1987 Amerasian Homecoming Act facilitated the entry of biracial children born in Vietnam and fathered by a U.S. citizen.

Japanese-Americans arrive at a train station.
People of Japanese ancestry arrive at the Santa Anita Assembly Center in California before being moved inland to relocation centers, April 5, 1942.
© CORBIS/Corbis via Getty Images

By the 1960s, however, exclusion was taking on a different shape.

After 1965, U.S. immigration policy introduced a preference system that prioritized skilled workers and relatives of U.S. citizens. Quotas related to race and national origin were abolished. Nonetheless, preferences for families and professionals excluded people from Latin America.

For the first time, immigration from the Western Hemisphere was limited. This directly affected migrant workers in the farming and agricultural industries, many of whom were Latino.

Recalling the War Brides Act allows Americans to better comprehend the fiction that undergirds the U.S. immigration system: that immigration policy’s preference for certain immigrants is enough to justify the discriminatory policies which deem some families more valuable than others.

The Conversation

Anna Storti has received funding from the Institute for Citizens and Scholars, the Andrew W. Mellon Foundation, and the McNair Scholars Program.

ref. Racial profiling by ICE agents mirrors the targeting of Japanese Americans during World War II – https://theconversation.com/racial-profiling-by-ice-agents-mirrors-the-targeting-of-japanese-americans-during-world-war-ii-271612

The western US is in a snow drought – here’s how a storm made it worse

Source: The Conversation – USA (2) – By Alejandro N. Flores, Associate Professor of Geoscience, Boise State University

Skiers and snowboarders walk across dry ground to reach a slope at Bear Mountain ski resort on Dec. 21, 2025, in California. Eric Thayer/Los Angeles Times via Getty Images

Much of the western U.S. has started 2026 in the midst of a snow drought. That might sound surprising, given the record precipitation from atmospheric rivers hitting the region in recent weeks, but those storms were actually part of the problem.

To understand this year’s snow drought – and why conditions like this are a growing concern for western water supplies – let’s look at what a snow drought is and what happened when atmospheric river storms arrived in December.

A chart shows very low snowpack in 2025 compared to average.

Chart source: Rittiger, K., et al., 2026, National Snow and Ice Data Center, CC BY

What is a snow drought?

Typically, hydrologists like me measure the snowpack by the amount of water it contains. When the snowpack’s water content is low compared with historical conditions, you’re looking at a snow drought.

A snow drought can delay ski slope opening dates and cause poor early winter recreation conditions.

It can also create water supply problems the following summer. The West’s mountain snowpack has historically been a dependable natural reservoir of water, providing fresh water to downstream farms, orchards and cities as it slowly melts. The U.S. Geological Survey estimates that up to 75% of the region’s annual water supply depends on snowmelt.

A map shows much of the West, with the exceptions of the Sierra Nevada and northern Rockies, with snowpack less than 50% of normal.
Snowpack is typically measured by the amount of water it contains, or snow water equivalent. The numbers show each location’s snowpack compared to its average for the date. While still early, much of the West was in snow drought as 2026 began.
Natural Resources Conservation Service

Snow drought is different from other types of drought because its defining characteristic is lack of water in a specific form – snow – but not necessarily the lack of water, per se. A region can be in a snow drought during times of normal or even above-normal precipitation if temperatures are warm enough that precipitation falls as rain when snow would normally be expected.

This form of snow drought – known as a warm snow drought – is becoming more prevalent as the climate warms, and it’s what parts of the West have been seeing so far this winter.

How an atmospheric river worsened the snow drought

Washington state saw the risks in early December 2025 when a major atmospheric river storm dumped record precipitation in parts of the Pacific Northwest. Up to 24 inches fell in the Cascade Mountains between Dec. 1 and Dec. 15. The Center for Western Weather and Water Extremes at the Scripps Institution of Oceanography documented reports of flooding, landslides and damage to several highways that could take months to repair. Five stream gauges in the region reached record flood levels, and 16 others exceeded “major flood” status.

Yet, the storm paradoxically left the region’s water supplies worse off in its wake.

The reason was the double-whammy nature of the event: a large, mostly rainstorm occurring against the backdrop of an uncharacteristically warm autumn across the western U.S.

Water fills a street over the wheels of cars next to a river.
Vehicles were stranded as floodwater in a swollen river broke a levee in Pacific, Wash., in December 2025.
Brandon Bell/Getty Images

Atmospheric rivers act like a conveyor belt, carrying water from warm, tropical regions. The December storm and the region’s warm temperatures conspired to produce a large rainfall event, with snow mostly limited to areas above 9,000 feet in elevation, according to data from the Center for Western Weather and Water Extremes.

The rainfall melted a significant amount of snow in mountain watersheds, which contributed to the flooding in Washington state. The melting also decreased the amount of water stored in the snowpack by about 50% in the Yakima River Basin over the course of that event.

As global temperatures rise, forecasters expect to see more precipitation falling as rain in the late fall and early spring rather than snow compared with the past. This rain can melt existing snow, contributing to snow drought as well as flooding and landslides.

What’s ahead

Fortunately, it’s still early in the 2026 winter season. The West’s major snow accumulation months are generally from now until March, and the western snowpack could recover.

More snow has since fallen in the Yakima River Basin, which has made up the snow water storage it lost during the December storm, although it was still well below historical norms in early January 2026.

Scientists and water resource managers are working on ways to better predict snow drought and its effects several weeks to months ahead. Researchers are also seeking to better understand how individual storms produce rain and snow so that we can improve snowpack forecasting – a theme of recent work by my research group.

As temperatures warm and snow droughts become more common, this research will be essential to help water resources managers, winter sports industries and everyone else who relies on snow to prepare for the future.

The Conversation

Alejandro N. Flores receives funding from the National Science Foundation, US Department of Energy, NASA, USDA Agricultural Research Service, and Henry’s Fork Foundation.

ref. The western US is in a snow drought – here’s how a storm made it worse – https://theconversation.com/the-western-us-is-in-a-snow-drought-heres-how-a-storm-made-it-worse-272549

Taming the moral menace at capitalism’s core

Source: The Conversation – USA (2) – By Valerie L. Myers, Organizational Psychologist and Lecturer in Management and Organizations, University of Michigan

Digital disruption and the climate crisis are often framed as economic or social challenges. But they force crucial moral questions. Who will be held accountable for the human cost? What will it take to transform business culture so that those costs are not treated as inevitable and acceptable?

In my view, the answers will shape not only technology’s impact on humanity and the planet but the moral foundations of democracy itself.

As a management professor who studies the calling ethic – the idea that work can be guided by principles and moral duty – I think this moment is best understood as a contest between two recurring leadership patterns.

One pattern rationalizes exploitation and disguises harm as the price of progress. Drawing on Yale law professor James Whitman’s use of the phrase “moral menace,” I use it here to name this recurring force.

In contrast, some leaders show how it’s possible to pursue principles and profits together. I call such people “moral muses”: leaders whose care and fairness promote flourishing.

The contrast is stark: Menaces dominate. Muses cultivate.

I contend the menace often wins not because it’s right, but because its practices have hardened into management orthodoxy about how to treat people. Yet its dominance can be disrupted by tracing the menace’s ancient roots and, like muses throughout history, learning how to tame it.

The menace: Normalized callousness

The menace isn’t just about greed. It’s a system of cruelty rooted in ancient Roman property law, in which wives, children, enslaved people and animals were treated as possessions and subject to abuses, including violence at the owner’s will. Whitman traces how this legal foundation evolved into a broader moral menace that became a durable template in Western capitalism that was repeatedly reproduced.

Building on that concept, I would argue that the menace adapted and became normalized in business management – from institutional alliances to empire, to everyday practices.

A pivotal development in institutionalized commercial cruelty began in the 15th century, when papal decrees gave religious sanction to menacing conquests – campaigns of land seizure, enslavement and labor theft. Contemporary accounts speak to the cruelty and exploitation that were pillars of economies of the time.

By the 17th century, Dutch traders outpaced their Spanish rivals in turning menace into efficiency. The richest 1% sent sailors on deadly voyages to amass fortunes, while leaving their fellow citizens among the poorest in Europe. Researchers studying this period, sometimes known as the Dutch Golden Age, wrote, “We did not expect to find the ‘pioneers of capitalism’ in the cradle of civil society to have been so stingy.”

Abroad, traders pioneered accounting, logistics and labor-control methods that maximized profit by brutalizing enslaved workers. Historian Caitlin Rosenthal shows how plantation owners refined these methods, the British perfected them, and Americans institutionalized them.

Once normalized, inhumanity – recast as efficiency – arguably became the defining logic of modern management: extracting ever more output to enrich owners, regardless of the human toll. Financial journalists have called this the “dark side of efficiency.” Yet the menace has a cultural halo: Popular TV series like “Billions” and “Yellowstone” valorize exploitation, dominance and dark tetrad tendencies like Machiavellianism.

Studies show that this celebrated style produces lackluster results. Is it any wonder that only 31% of employees report feeling engaged at work?

Even so, the menace has never gone unchallenged. At every stage of its advance, muses have resisted – insisting that fairness and care prevail.

The muse: Transforming institutions of menace

Throughout history, muses have done more than resist the menace; they’ve sought to transform the very institutions that sustained it. Driven by principle, their disruptive actions bent institutions toward more humane and ethical practices – even as the menace adapted to survive.

One early muse-like figure is Martin Luther, who in 1524 sparked a revolution by challenging the church’s influence on commerce. In “Trade and Usury” he condemned “unneighborly” and deceptive business practices, insisting that trade must be guided by law and conscience rather than greed. (In time, of course, Protestants themselves used religion to justify slavery and domination – a reminder that the menace reinvents itself when challenged.)

In the 18th century, American founder and businessman Gouverneur Morris advanced the muse struggle by reimagining power in the new nation. At the Constitutional Convention, he warned that “the rich will strive to establish their dominion and enslave the rest” unless restrained by law. He enshrined limits on elite domination and elevated civic principles in the preamble to the Constitution: justice, union, tranquility and general welfare. Over the centuries, other business and policy leaders advanced the ethic.

More recently, Marriott International illustrates how profitable firms can operate by muse principles without sacrificing profits. Since its 1927 founding, Marriott has valued “putting people first.” In 2010, Chief Global Human Resources Officer David Rodriguez institutionalized this value with the Take Care initiative. During the 2020 global pandemic, under the stewardship of the late Arne Sorenson, it expanded into “Project We Care.” Due in part to these commitments, Marriott had less than half the losses of U.S. peers Hilton and Hyatt.

Empirical studies confirm what Marriott’s leaders modeled: Servant leaders generate stronger employee commitment and performance than charismatic or transformational leaders.

Notably, muse leaders typically aim at intermediate targets – reforming institutions and governance to constrain the menace. But since management itself is built on menace foundations, transformation at scale will require a critical mass of moral muses in business.

Mobilizing moral muses

One-off reforms like family-friendly policies, ESG targets and civility pledges are useful, but they cannot uproot centuries of menace. What’s required is a critical mass of moral muses who refuse to rationalize harm as progress and who lead a culture reset in guiding business logic.

That means uprooting institutionalized callousness and redefining what counts as efficiency, innovation and value. It also means enacting civic principles of care and common good, as Morris envisioned, and amplifying leaders who prove that compassion and profitability can reinforce each other.

History shows that muses are not anomalies, and their stories are instructive for us now. Across eras, they have demonstrated that prioritizing human dignity fosters trust, prosperity and social vitality. But their stories are too often buried or ignored – not by accident, but because they threaten those who profit from menace. Without sustained institutional redesign, the menace reliably reasserts itself under new moral guises.

Reclaiming and amplifying muse stories is essential for transformation. They aren’t just anecdotes of resistance; they are blueprints for a more humane and sustainable capitalism.

The Conversation

Valerie L. Myers does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

ref. Taming the moral menace at capitalism’s core – https://theconversation.com/taming-the-moral-menace-at-capitalisms-core-266744

Illness is more than just biological – medical sociology shows how social factors get under the skin and cause disease

Source: The Conversation – USA (3) – By Jennifer Singh, Associate Professor of Sociology, Georgia Institute of Technology

Lack of access to safe and affordable housing is harmful to health. Robert Gauthier/Los Angeles Times via Getty Images

Health and medicine are more than just biological – societal forces can get under your skin and cause illness. Medical sociologists like me study these forces by treating society itself as our laboratory. Health and illness are our experiments in uncovering meaning, power and inequality, and how they affect all parts of a person’s life.

For example, why do low-income communities continue to have higher death rates, despite improved social and environmental conditions across society? Foundational research in medical sociology reveals that access to resources like money, knowledge, power and social networks strongly affects a person’s health. Medical sociologists have shown that social class is linked to numerous diseases and to mortality, as well as to risk factors that influence health and longevity. These include smoking, overweight and obesity, stress, social isolation, access to health care and living in disadvantaged neighborhoods.

However, social class alone cannot explain such health inequalities. My own research examines how inequalities related to social class, race and gender affect access to autism services, particularly among single Black mothers who rely on public insurance. This work helps explain delays in autism diagnosis among Black children, who often wait three years after initial parent concerns before they are formally diagnosed. White children with private insurance typically wait from 9 to 22 months, depending on age of diagnosis. This is just one of numerous examples of inequalities that are entrenched in and deepened by medical and educational systems.

Medical sociologists like me investigate how all of these factors interact to affect a person’s health. This social model of illness sees sickness as shaped by social, cultural, political and economic factors. We examine both individual experiences and societal influences to help address the health issues affecting vulnerable populations through large-scale reforms.

By studying the way social forces shape health inequalities, medical sociology helps address how health and illness extend beyond the body and into every aspect of people’s lives.

Protesters standing in front of a federal building, holding signs in the shape of graves reading '16 MILLION LIVES' and 'R.I.P. DEATH BY A THOUSAND CUTS,' wearing shirts that read 'MEDICAID SAVES LIVES'
Access to health insurance is a political issue that directly affects patients. Here, care workers gathered in June 2025 to protest Medicaid cuts.
Tasos Katopodis/Getty Images for SEIU

Origins of medical sociology in the US

Medical sociology formally began in the U.S. after World War II, when the National Institutes of Health started investing in joint medical and sociological research projects. Hospitals began hiring sociologists to address questions like how to improve patient compliance, doctor-patient interactions and medical treatments.

However, the focus of this early work was on issues specific to medicine, such as quality improvement or barriers to medication adherence. The goal was to study problems that could be directly applied in medical settings rather than challenging medical authority or existing inequalities. During that period, sociologists viewed illness mostly as a deviation from normal functioning leading to impairments that require treatment.

For example, the concept of the sick role – developed by medical sociologist Talcott Parsons in the 1950s – saw illness as a form of deviance from social roles and expectations. Under this idea, patients were solely responsible for seeking out medical care in order to return to normal functioning in society.

In the 1960s, sociologists began critiquing medical diagnoses and institutions. Researchers criticized the idea of the sick role because it assumed illnesses were temporary and did not account for chronic conditions or disability, which can last for long periods of time and do not necessarily allow people to deviate from their life obligations. The sick role assumed that all people have access to medical care, and it did not take into account how social characteristics like race, class, gender and age can influence a person’s experience of illness.

Patient wearing surgical mask sitting in chair of exam room, talking to a doctor
Early models of illness in medical sociology discounted the experience of the patient.
Paul Bersebach/MediaNews Group/Orange County Register via Getty Images

Parsons’ sick role concept also emphasized the expertise of the physician rather than the patient’s experience of illness. For example, sociologist Erving Goffman showed that the way care is structured in asylums shaped how patients are treated. He also examined how the experience of stigma is an interactive process that develops in response to social norms. This work influenced how researchers understood chronic illness and disability and laid the groundwork for later debates on what counts as pathological or normal.

In the 1970s, some researchers began to question the model of medicine as an institution of social control. They critiqued how medicine’s jurisdiction expanded over many societal problems – such as old age and death – which were defined and treated as medical problems. Researchers were critical of the tendency to medicalize and apply labels like “healthy” and “ill” to increasing parts of human existence. This shift emphasized how a medical diagnosis can carry political weight and how medical authority can affect social inclusion or exclusion.

The critical perspective aligns with critiques from disability studies. Unlike medical sociology, which emerged through the medical model of disease, disability studies emerged from disability rights activism and scholarship. Rather than viewing disability as pathological, this field sees disability as a variation of the human condition rooted in social barriers and exclusionary environments. Instead of seeking cures, researchers focus on increasing accessibility, human rights and autonomy for disabled people.

A contemporary figure in this field was Alice Wong, a disability rights activist and medical sociologist who died in November 2025. Her work amplified disabled voices and helped shape how the public understood disability justice and access to technology.

Structural forces shape health and illness

By focusing on social and structural influences on health, medical sociology has contributed significantly to programs addressing issues like segregation, discrimination, poverty, unemployment and underfunded schools.

For example, sociological research on racial health disparities invites neighborhood interventions that can help improve overall quality of life, such as increasing the availability of affordable nutritious foods in underserved neighborhoods or initiatives that prioritize equal access to education. At the societal level, large-scale social policies such as guaranteed minimum incomes or universal health care can dramatically reduce health inequalities.

People carrying boxes of food under a tent
Access to nutritious food is critical to health.
K.C. Alfred / The San Diego Union-Tribune via Getty Images

Medical sociology has also expanded the understanding of how health care policies affect health, helping ensure that policy changes take into account the broader social context. For example, a key area of medical sociological research is the rising cost of and limited access to health care. This body of work focuses on the complex social and organizational factors of delivering health services. It highlights the need for more state and federal regulatory control as well as investment in groups and communities that need care the most.

Modern medical sociology ultimately considers all societal issues to be health issues. Improving people’s health and well-being requires improving education, employment, housing, transportation and other social, economic and political policies.

The Conversation

Jennifer Singh does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

ref. Illness is more than just biological – medical sociology shows how social factors get under the skin and cause disease – https://theconversation.com/illness-is-more-than-just-biological-medical-sociology-shows-how-social-factors-get-under-the-skin-and-cause-disease-270258