5 ways students can think about learning so that they can learn more − and how their teachers can help

Source: The Conversation – USA (2) – By Jerrid Kruse, Professor of Science Education, Drake University

Learning is more than just memorization. FG Trade/E+ via Getty Images

During my years teaching science in middle school, high school and college, some of my students have resisted teaching that educators call higher-order thinking. This includes analysis, creative and critical thinking, and problem-solving.

For example, when I asked them to draw conclusions from data or generate a process for testing an idea, some students replied, “Why don’t you tell us what to do?” or “Isn’t it the teacher’s job to tell us the right answers?”

In other words, my students had developed a strong preconceived notion that knowledge comes from authority. After investigating, my colleagues and I concluded that these beliefs about learning were influencing how they approached our lessons – and thus what they were able to learn.

All students come to class with a range of beliefs about what it means to learn. In the field of education, perhaps the most sought-after belief is what we call having a growth mindset. Students with a growth mindset believe they can improve and continue to learn. In contrast, students with a fixed mindset struggle to believe they can become more knowledgeable about the topic they’re studying. When students say, “I’m bad at math,” they exhibit a fixed mindset.

As teachers, we not only try to help students understand the topic at hand but also aim to instill accurate beliefs about learning so nothing interferes with their ability to take in new information.

Beyond the growth mindset, I argue that five other beliefs are particularly important to promote in classrooms to help students become better learners and better prepared for the modern world.

Learning is understanding

Some students and teachers equate learning with memorizing.

While memorization has a role in learning, deep learning is about understanding. Students are well served by recognizing that learning is about explaining and connecting concepts to make meaning.

Too much focus on memorizing can hide gaps in learning.

For example, I was once working with a preschool student when they proudly demonstrated their ability to recite the numbers 1 through 20. I then asked the student to count the pencils on the desk. The student did not understand my request. They had not connected these new words to the number concept.

To help students recognize the importance of understanding for learning, teachers and parents might engage students in questions such as, “Why is connecting a new idea to an old idea better than just trying to memorize the answer?” or “Why is an explanation more useful than just an answer?”

Learning is hard.
demaerre/iStock via Getty Images

Learning is complex and requires challenge

Students’ belief that learning is akin to memorization may reflect a related belief that knowledge is simple and learning should be easy.

Instead, educators want students to embrace complexity and its challenges. Through wrestling with nuance and complexity, students engage in the mental effort required to form and reinforce new connections in their thinking.

When students believe knowledge is simple and learning should be easy, their engagement in higher-order thinking, which is required to embrace complexity and nuance, suffers.

To help students who are struggling to grasp a complex idea, teachers and parents might ask questions that help them see why learning is complex and requires challenge.

Learning takes time

When students believe learning is simple and easy, educators should not be surprised they think learning should be fast as well.

Instead, students ought to understand that deep learning takes time. If students believe learning is quick, they are less likely to seek challenge, explore nuance or reflect and make connections among ideas. Unfortunately, many curricula pack so much intended learning into a short amount of time that beliefs in quick learning are subtly reinforced.

While teachers can get creative with curricular materials — and spend more time challenging students to explore complexity and make connections — just spending more time on a concept may not be enough to shift a student’s beliefs about learning.

To help students shift their thinking about the speed of learning, I ask them to discuss questions such as, “Why do you think understanding complex concepts takes so much time?” or “Why would only covering this concept for one lesson not be enough?” With these questions, my colleagues and I have found students start to recognize that deep learning is slow and takes time.

Learning is ongoing

Students should also recognize that learning doesn’t end.

Unfortunately, many students believe learning to be a destination rather than an ongoing process. Yet, because knowledge contains an inherent level of uncertainty, and increased learning often reveals increased complexity, learning must be continuous.

To help students reflect on this belief, teachers and parents might ask their students, “How do you think your knowledge has changed over time?” and “How do you think your learning will change in the future?”

Learning doesn’t come only from teachers at the front of a class.
Drazen Zigic/iStock via Getty Images

Learning is not only from teachers

I remember one high school student telling me that “teachers are supposed to tell us the answers, so we know what to put on the test.”

This student had apparently figured out the “rules of the game” and was not happy when their teacher was trying to engage them in higher-order thinking. This student was holding onto a transmission model of learning in which learning comes from authority figures.

Instead, students should recognize that learning comes from many sources, including their experiences, their peers and their own thinking, as well as from authority figures.

While teachers and parents may hesitate to undermine their own authority, they do students a disservice when they do not prepare them to question and go beyond authority figures.

To help students shift their thinking, teachers might ask students to consider, “Why might learning from multiple sources help you better understand the complexity and nuance of a concept?”

Building better beliefs about learning

Often, teachers and parents believe opportunities to engage in higher-order thinking are enough to help their students develop better beliefs about learning.

But such beliefs require explicit attention and must be planned for in lessons. This is done by asking reflective questions that target specific beliefs, such as the questions noted in the final sentence of each of the previous sections.

In my experience, the conversations I’ve had with students using the questions noted above are highly engaging. Moreover, helping kids develop more robust beliefs about learning just might be the most important thing teachers can do to prepare students for the future.

The Conversation

Jerrid Kruse receives funding from the National Science Foundation, the NASA Iowa Space Grant Consortium, and the William G. Stowe Foundation.

ref. 5 ways students can think about learning so that they can learn more − and how their teachers can help – https://theconversation.com/5-ways-students-can-think-about-learning-so-that-they-can-learn-more-and-how-their-teachers-can-help-244619

Dollar supremacy: Trump’s tariffs on India could undermine it

Source: The Conversation – France (in French) – By Sambit Bhattacharyya, Professor of Economics, University of Sussex Business School, University of Sussex

Beyond economics, Donald Trump’s tariff policy has become a tool of diplomacy with far-reaching geopolitical consequences. Imposing 50% tariffs on India, a strategic US ally within the Quad, not only threatens bilateral trade but also risks pushing New Delhi closer to Russia and China, strengthening the cohesion of the BRICS and weakening the dollar’s primacy on the world stage.

Donald Trump’s tariff policy seems to have become as much a foreign policy tool as an economic strategy. But the administration’s decision to impose 50% tariffs on India, a key US ally within the Quad (the Quadrilateral Security Dialogue, the informal military and diplomatic cooperation group between the United States, India, Japan and Australia), could have major repercussions, not only for international trade but also for global geopolitics.

The American justification for the tariff hike is above all political. The White House claims that India has profited from buying and reselling Russian oil, in defiance of the sanctions imposed after the 2022 invasion of Ukraine. This has helped Russia weather the effects of the sanctions and continue to finance its war in Ukraine.

Clearly, the tariff policy and recent statements from Washington and New Delhi have badly damaged a still-fledgling bilateral relationship. So much so that Indian prime minister Narendra Modi has declined to take Trump’s phone calls. For his part, Trump no longer plans to travel to India for the Quad summit due later this year.

Modi attended the Shanghai Cooperation Organisation (SCO) summit in Tianjin, China, from August 31 to September 1, alongside Russian president Vladimir Putin. The three leaders, including Chinese president Xi Jinping, were photographed together deep in cordial discussion, and Modi met Xi and Putin separately on the sidelines of the summit, which was billed as an alternative to the US-dominated hegemonic order.

It now seems clear that higher US tariffs will not wean India off Russian oil. Quite the opposite: Modi has confirmed his country’s intention not only to maintain these imports but to increase them.

This is hardly surprising: India’s posture toward Russia, as a net importer of crude oil, owes less to grand geopolitical ambition than to a concrete economic necessity, namely keeping inflation under control.

On the energy front, India remains heavily dependent on imports, and its population, much of it poor and vulnerable, needs stable and affordable prices. No pressure from the United States or its G7 allies can change this fundamental economic reality.

America’s setback plays into Moscow’s hands

The US tariffs are likely to reduce Indian exports of clothing and footwear to the United States, as major western brands turn to cheaper suppliers in other countries. That shift would translate into higher prices for American consumers.

However, the impact on Indian suppliers should remain limited, as global demand for clothing and footwear remains very high. They could easily turn to other markets.

Gemstones are another pillar of Indian exports, a sector in which the country holds a globally dominant position. US tariffs are unlikely to change that picture much, since India has plenty of export outlets, even though the United States is among its biggest customers.

Stronger trade between India and Russia should open up new opportunities for mutual investment. For Russia, the overall economic outlook could improve as a result of these tariffs. India has already signalled that it will probably increase its oil imports, while Russia would benefit from competitively priced clothing and footwear from India as Indian suppliers look to redirect their exports to new markets.

Closer economic ties with India, which aims to raise bilateral trade to US$100 billion (€92 billion) by 2030, will give Russia a major alternative market to China for its products. Russia will also gain a major supplier of consumer goods it normally imports, helping to keep prices affordable for Russian households.

The end of US dollar primacy?

The risk for the west is that, if tariff tensions translate into tougher financial sanctions, Indian investment will shift away from the United States and the G7 toward Russia and China. Indian investors are currently [heavily invested](https://qz.com/half-of-40-billion-indian-fdi-in-us-in-2-sectors-1850407567) in the automotive, pharmaceutical, IT and telecoms sectors in the west, but these flows could be redirected to other markets.

There are growing signs of greater cohesion, not only within the SCO but also within the BRICS group, which brings together a growing number of trading nations. Initially made up of the founding members Brazil, Russia, India, China and South Africa, the group has recently expanded to include Egypt, Ethiopia, Iran, Indonesia and the United Arab Emirates.

These fast-growing economies are already working to put in place technical mechanisms for mutual investment and for settling trade in their local currencies rather than in US dollars.

The global trade shocks triggered by US tariffs have caused a short-term fall in the value of the US dollar. While this depreciation is modest by historical standards, it nonetheless masks a greater long-term risk.

The problem is not trade transactions, which account for only a marginal share of dollar operations. The long-term risks lie instead in a possible erosion of the dollar’s role in asset management, investment, financial activity and international reserves.

In particular, the dollar’s near-exclusive role as a reserve currency for the BRICS countries and the global south is now under threat.

Any policy likely to call that status into question would endanger US prosperity and security. The problem is that any financial or trade direction that pushes America’s main trading partners closer to Russia and China would have precisely that effect.


Sambit Bhattacharyya receives funding from UK Research and Innovation, the Economic and Social Research Council, the Australian Research Council and the European Research Council.

ref. Dollar supremacy: Trump’s tariffs on India could undermine it – https://theconversation.com/suprematie-du-dollar-les-tarifs-douaniers-de-trump-sur-linde-pourrait-la-fragiliser-265338

Colonialism and climate risks are linked: evidence from Ghana and Senegal

Source: The Conversation – in French – By Nick Bernards, Associate Professor of Global Sustainable Development, University of Warwick

The colonial experience profoundly transformed economies and societies, with far-reaching consequences.

As a researcher interested in colonial history and its impact on development today, I recently examined aspects of this legacy through a comparative analysis of Senegal and Ghana, drawing on earlier archival research.

In this article, I explore the links between the main colonial export crops and the everyday forms of climate vulnerability found in these two countries. I show how the forms of exploitation that emerged under colonial capitalism are connected to the shape and unequal distribution of today’s climate risks. These histories have deeply shaped how people are exposed to record temperatures and unpredictable rainfall patterns.

It is increasingly recognised that global climate breakdown, and vulnerability to its effects, are deeply rooted in the history of colonialism. This recognition has even made its way into official policy circles. The 2022 sixth assessment report of the IPCC (the Intergovernmental Panel on Climate Change, the UN’s climate science body), for example, acknowledges that vulnerability to climate change is “often made more complex by past events, such as the history of colonialism”.

My research helps complete this picture by showing just how complex and deeply rooted these impacts are.

Unequal distribution

Those most exposed to the climate crisis are often those who have done the least to create it. As a region, Africa accounts for around 4% of global CO₂ emissions. Indeed, some estimates show that only in the past decade has Africa collectively begun to emit more carbon than it stores in its various ecosystems.

Read more: Africa now emits as much carbon as it stores: landmark new study

According to the World Meteorological Organization, temperatures in Africa are rising faster than the global average. By recent estimates, economic losses from heat alone reached 8% of GDP across much of Africa between 1992 and 2013.

Colonial powers extracted trillions in wealth from colonised peoples and territories. They continued to do so after the formal end of colonial rule. Rich countries burned far more than their fair share of fossil fuels in the process.

This means that formerly colonised countries, with underdeveloped infrastructure and impoverished citizens, have less capacity to withstand and respond to increasingly severe weather.

But the links between colonialism and climate vulnerability cannot be reduced to these economic indicators or to carbon emissions alone.

The damage done by colonial-era business models

In my article, I show how the specific everyday livelihoods of people exposed to climate risk in formerly colonised countries are also closely tied to how colonial economies were organised.

The colonial economies of Senegal and Ghana were dominated by French and British trading companies. These merchants profoundly reshaped economic structures, particularly in the last decades of the 19th century and the first decades of the 20th.

One strategy British and French merchants used was to take control of the commodity trade (groundnuts in Senegal, cocoa in Ghana) through chains of debt. Through complex networks of brokers and traders, colonial merchants supplied farmers with agricultural inputs and basic goods in exchange for expected harvests.

This system largely shielded European firms from the risks inherent in farming, such as bad weather and pests.

The system also pushed local farmers into ever-deeper debt. When people had to borrow money or goods to plant their crops and get by until the harvest, they tended to be locked into producing the same export crops year after year.

In both countries, this meant that farmers’ productivity tended to decline over time because of pest problems and soil depletion. Often the only option for farmers, many of whom had already sold their harvests in advance, was to farm more intensively.

This deepened both indebtedness and vulnerability to ecological risks. Indebted farmers were more exposed to crop failures, and yields were often lower, forcing them to spend more and more on inputs. More intensive cultivation also accelerated soil erosion and the spread of pests.

The colonial system also limited investments that might have improved productivity or offered better protection against climate risks. In Senegal, for example, colonial groundnut cultivation relied mainly on rainfall for its water supply. Colonial government officials rejected proposals to build irrigation systems, and trading companies not directly involved in cultivation had little interest in investing.

Postcolonial economies have changed considerably, but important elements of the colonial-era trading system have nonetheless remained in place. The main export crops in both countries are still grown by many small producers, and many people’s livelihoods remain heavily dependent on cash crops.

Most importantly, the shape of climate vulnerability closely mirrors the risks that took form in the colonial era. Unpredictable water availability, for example, remains one of the most pressing forms of climate vulnerability in groundnut-growing regions. This is especially true in Senegal, where groundnut cultivation still relies largely on rainfall for water. As a result, as one study has shown, poverty levels in groundnut-growing regions remain closely tied to rainfall levels.

Next steps

The situation is not the same everywhere. One legacy of colonialism is that it created new patterns of unequal and inequitable development within and between colonies. In countries such as Kenya and South Africa, colonisation brought European settler populations. African people and communities were displaced to make way for plantations and mines. Struggles over access to water, to take just one example, remain strongly shaped by these histories.

The point is that colonialism’s imprint on the climate crisis is deep and complex. Colonialism did not simply extract wealth and resources. It profoundly transformed societies, economies and people’s relationships with the natural world.

This means that the climate debt rich countries owe the rest of the world goes beyond the simple value of the wealth extracted or the volume of carbon emitted. It is probably incalculable and impossible to repay.


Nick Bernards has received funding from the Social Sciences and Humanities Research Council of Canada and the British International Studies Association.

ref. Colonialism and climate risks are linked: evidence from Ghana and Senegal – https://theconversation.com/le-colonialisme-et-les-risques-climatiques-sont-lies-preuves-issues-du-ghana-et-du-senegal-264910

What babies’ cries really tell us – and why maternal instinct is a myth

Source: The Conversation – France in French (2) – By Nicolas Mathevon, Professeur (Neurosciences & bioacoustique – Université de Saint-Etienne, Ecole Pratique des Hautes Etudes – PSL & Institut universitaire de France), Université Jean Monnet, Saint-Étienne

The sound slices through the quiet of the night: a muffled sob, then a hiccup, quickly escalating into a high-pitched, frantic wail. For any parent or caregiver, this is a familiar, urgent call to action. But what is it a call for? Is the baby hungry? In pain? Lonely? Or simply uncomfortable? For generations, we’ve been told that understanding this primal language is a matter of intuition, a “maternal instinct” that allows a mother to divine her child’s needs. Society often reinforces this idea, creating an elite class of quasi-psychic super-parents who seem to know everything, and leaving many others feeling inadequate and guilty when they can’t immediately decipher the message.

As a bioacoustics researcher, I have spent years studying the communication of animals – from the soft calls of crocodile nestlings synchronizing their hatching and pushing the parent to dig the nest, to the calls of zebra finches allowing mate recognition. I was surprised to discover, upon turning my attention to our own species, that the cries of human babies hold as much, if not more, mystery. My colleagues and I have spent over a decade applying the tools of acoustic analysis, psychoacoustic experiments and neuroimaging to this intimate world. Our findings, detailed in my book, The intimate world of babies’ cries, challenge many of our most cherished beliefs and offer a new, evidence-based framework for understanding this fundamental form of human communication.

The first and perhaps most important thing to know is this: you cannot tell why your baby is crying just from the sound of the cry alone.

Busting the ‘language of cries’ myth

Many parents feel immense pressure to become “cry experts”, and an entire industry has sprung up to capitalise on this anxiety. There are apps, devices, and expensive training programmes all promising to translate cries into specific needs: “I’m hungry,” “change my diaper,” “I’m tired.” Our research, however, shows these claims are baseless.

To test this scientifically, we undertook a large-scale study. We placed automatic recorders in the rooms of 24 babies, recording them continuously for two days at a time at several ages during their first four months of life. This resulted in an enormous dataset of 3,600 hours of recordings containing nearly 40,000 cry “syllables”. The dedicated parents carefully logged the action that successfully soothed the baby, giving us a “cause” for each cry: hunger (soothed by a bottle), discomfort (soothed by a diaper change), or isolation (soothed by being held). We then used machine learning algorithms, training an artificial intelligence on the acoustic properties of these thousands of cries to see if it could learn to identify the cause. If there was a distinct “hunger cry” or “discomfort cry”, the AI should have been able to detect it.

The result was a resounding failure. The AI’s success rate was only 36% – barely above the 33% it would get by pure chance. To ensure this wasn’t just a limitation of technology, we repeated the experiment with human listeners. We had parents and nonparents first “train” on the cries of a specific baby, just as a parent would in real life, and then asked them to identify the cause of new cries from that same baby. They fared no better, scoring just 35%. The acoustic signature of a cry for food is not reliably different from a cry of discomfort.
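For context on those percentages: with three causes treated as equally likely, a classifier that guesses at random should land near 33%. The simulation below is a minimal sketch for illustration only (it is not the study’s code, and the 100,000-trial figure is arbitrary), but it makes the chance baseline concrete:

```python
import random

def chance_accuracy(n_trials: int, n_classes: int = 3, seed: int = 0) -> float:
    """Accuracy of a classifier that guesses uniformly at random among
    n_classes equiprobable causes (e.g. hunger, discomfort, isolation)."""
    rng = random.Random(seed)
    hits = sum(
        rng.randrange(n_classes) == rng.randrange(n_classes)  # guess vs true label
        for _ in range(n_trials)
    )
    return hits / n_trials

# Random guessing over three equiprobable causes converges on 1/3.
print(chance_accuracy(100_000))
```

Against that baseline, the AI’s 36% and the human listeners’ 35% are barely better than guessing, which is the point: the cause is simply not encoded in the cry’s acoustics.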

This doesn’t mean parents can’t figure out what their baby needs. It simply means the cry itself is not an entry in a dictionary. The cry is the alarm bell. It is your knowledge of the essential context that allows you to decode it. “It’s been three hours since the last feeding, so they are probably hungry.” “That diaper felt full.” “They’ve been alone in the crib for a while.” You are the detective; the cry is simply the initial, undifferentiated alert.

What cries actually tell us

If cries don’t signal their cause, what information do they reliably convey? Our research shows they transmit two crucial pieces of information.

The first is static information: the baby’s unique vocal identity. Just as every adult has a distinct voice, every baby has a unique cry signature, primarily determined by the fundamental frequency (pitch) of their cry. This is a product of their individual anatomy – the size of their larynx and vocal cords. It’s why you can recognise your baby’s cry in a nursery. Interestingly, while babies have an individual signature, they do not have a sex signature. The larynxes of baby boys and girls are the same size. Yet, adults consistently attribute high-pitched cries to girls and low-pitched cries to boys, projecting their knowledge of adult voices onto infants.

The second, and more urgent, piece of information is dynamic: the baby’s level of distress. This is the most important message encoded in a cry, and it is conveyed not so much by pitch or loudness, but by a quality we call “acoustic roughness”. A cry of simple discomfort, from being a little cold after a bath, for instance, is relatively harmonious and melodic. The vocal cords vibrate in a regular, stable way. But a cry of real pain, as we recorded during routine vaccinations, is dramatically different. It becomes chaotic, rough, and grating. This is because the stress of pain causes the baby to force more air through their vocal cords, making the cords vibrate in a disorganised, non-linear way. Think of the difference between a clean note from a flute and the harsh, chaotic sound it makes when you blow too hard. This roughness, a collection of acoustic phenomena including chaos and sudden frequency jumps, is a universal and unmistakable signal of high distress. A melodious “wah-wah” means “I’m a bit unhappy,” while a rough, harsh “IIiiRRRRhh” means “This is serious!”.
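The melodic-versus-rough contrast lends itself to a simple signal-processing illustration. The sketch below is a toy model, not the team’s analysis pipeline: a stable 400 Hz tone stands in for regular vocal-fold vibration, heavy added noise stands in for chaotic vibration, and the height of the strongest normalized autocorrelation peak serves as a crude periodicity score (near 1.0 for a melodic signal, much lower for a rough one):

```python
import math
import random

def autocorr_peak(signal, min_lag=20, max_lag=200):
    """Height of the strongest normalized autocorrelation peak.
    A periodic (melodic) signal is highly self-similar at its period,
    so the peak is near 1.0; an aperiodic (rough) signal scores low."""
    energy = sum(x * x for x in signal) or 1.0
    n = len(signal)
    return max(
        sum(signal[i] * signal[i + lag] for i in range(n - lag)) / energy
        for lag in range(min_lag, max_lag)
    )

rate = 8000  # samples per second
rng = random.Random(1)

# "Melodic" cry: stable 400 Hz vibration (period = 20 samples at this rate).
melodic = [math.sin(2 * math.pi * 400 * t / rate) for t in range(2000)]
# "Rough" cry: the same tone buried in strong aperiodic noise.
rough = [s + rng.gauss(0, 1.5) for s in melodic]

print(autocorr_peak(melodic) > autocorr_peak(rough))  # True
```

Actual roughness analyses track nonlinear phenomena such as subharmonics, chaos and sudden frequency jumps, but the underlying intuition is the same: regular vibration repeats itself over time, chaotic vibration does not.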

It’s learning, not instinct

So, who is best at decoding these complex signals? The pervasive myth of “maternal instinct” suggests that mothers are biologically hard-wired for the task. Our work comprehensively debunks this. An instinct, like a goose’s fixed behaviour of rolling an egg back to its nest, is innate and automatic. Understanding cries is not like this at all.

In one of our key studies we tested mothers and fathers on their ability to identify their own baby’s cry from a selection of others. We found absolutely no difference in performance between the two. The single most important factor was the amount of time spent with the baby. Fathers who spent as much time with their infants were just as adept as mothers. The ability to decode cries is not innate; it is learned through exposure. We confirmed this in studies with non-parents. We found that childless adults could learn to recognise a specific baby’s voice after hearing it for less than 60 seconds. And those with prior childcare experience, like babysitting or raising younger siblings, were significantly better at identifying a baby’s pain cries than those with no experience.

This all makes perfect evolutionary sense. Humans are “cooperative breeders”. Unlike in many primates where the mother has a near-exclusive relationship with her infant, human babies have historically been cared for by a network of individuals: fathers, grandparents, siblings, and other members of the community. In some hunter-gatherer societies like the !Kung, a baby may have up to 14 different caregivers. A hard-wired, mother-only “instinct” would be a profound disadvantage for a species that relies on a team.

The brain on cries: experience rewires everything

Our neuroscientific research reveals how this learning process works. When we hear a baby cry, a whole network of brain regions, called the “baby-cry brain connectome”, springs into action. Using MRI scans, we’ve observed that cries activate auditory centres, the empathy network (allowing us to feel another’s emotion), the mirror network (helping us put ourselves in another’s shoes), and areas involved in emotion regulation and decision-making.

Crucially, this response is not the same for everyone. When we compared the brain activity of parents and non-parents, we found that while everyone's brain responds, the "parental brain" is different. Experience with a baby strengthens and specialises these neural networks. For example, parents' brains show greater activation in regions associated with planning and executing a response, while non-parents show a rawer, less tempered emotional and empathetic reaction. Parents shift from simply feeling the distress to actively problem-solving. Furthermore, we found that individual levels of empathy – not gender – were the strongest predictor of how intensely the brain's "parental vigilance" network activated. Caring is a skill that is honed through practice, and it physically reshapes the brain of any dedicated caregiver, male or female.

Why this matters: from coping to cooperation

Understanding the science of crying is not just an academic exercise; it has profound real-world implications. Incessant crying, especially from colic (which affects up to a quarter of infants), is a primary source of parental stress, sleep deprivation, and exhaustion. This exhaustion can lead to feelings of failure and, in the worst cases, can be a trigger for shaken baby syndrome, a tragic and preventable form of abuse.

The knowledge that you are not supposed to “just know” what a cry means can be incredibly liberating. It removes the burden of guilt and allows you to focus on the practical task: check the context, assess the level of distress (is the cry rough or melodic?), and try solutions. Most importantly, the science points to our species’ greatest strength: cooperation. The fact that any human can become an expert caregiver through experience means you are not meant to do this alone. The unbearable cries become bearable when they can be passed to a partner, a grandparent, or a friend for a much-needed break.

So, the next time you hear that piercing cry in the night, remember what it truly is: not a test of your innate abilities or a judgement on your parenting skills, but a simple, powerful alarm. It’s a signal designed to be answered not by a mystical instinct, but by a caring, attentive and experienced human brain. And if you’re feeling overwhelmed, the most scientifically sound and evolutionarily appropriate response is to ask for help.


Nicolas Mathevon is the author of The intimate world of babies’ cries: The best ways to understand and calm your baby.




The Conversation

Nicolas Mathevon has received funding from the ANR, IUF, and Fondation des Mutuelles AXA.

ref. What babies’ cries really tell us – and why maternal instinct is a myth – https://theconversation.com/what-babies-cries-really-tell-us-and-why-maternal-instinct-is-a-myth-264525

Molecular ‘fossils’ offer microscopic clues to the origins of life – but take care interpreting them

Source: The Conversation – USA – By Caroline Lynn Kamerlin, Professor of Chemistry and Biochemistry, Georgia Institute of Technology

ATP synthase is an enzyme that has been using phosphate to generate life’s energy for millions of years. Nanoclustering/Science Photo Library via Getty Images

The questions of how humankind came to be, and whether we are alone in the universe, have captured imaginations for millennia. But to answer these questions, scientists must first understand life itself and how it could have arisen.

In our work as evolutionary biochemists and protein historians, these core questions form the foundation of our research programs. To study life’s history billions of years ago, we often use clues called molecular “fossils” – ancient structures shared by all living organisms.

Recently, we discovered that an important molecular fossil found in an ancient protein family may not be what it seems. The dilemma centers, in part, on a simple question: What does it mean if a simple molecular structure – the fossil – is found in every single organism on Earth? Do molecular fossils point to the seeds that gave rise to modern biological complexity, or are they simply the stubborn pieces that have resisted erosion over time? The answers have far-reaching implications for how scientists understand the origins of biology.

Follow the phosphorus to follow life

Life is made of many different building blocks, one of the most important of which is the chemical element phosphorus. Phosphorus makes up part of your genetic material, powers complex metabolic reactions and acts as a molecular switch to control enzymes.

Phosphorus compounds – specifically a charged form called phosphate – have a number of unique chemical properties that other biological compounds cannot match. In the words of the pioneering organic chemist F.H. Westheimer, they are chemically able to “do almost everything.”

Their unique combination of stability, versatility and adaptability is why many researchers argue that following phosphorus is key to finding life. The presence of phosphorus both close to home – in the ocean or on one of Saturn’s moons – and in the farthest reaches of our galaxy is strong evidence for the potential for life beyond Earth.

Chemical structure of a nucleotide, made of a phosphate, ribose sugar and base
Phosphate is part of many essential biological molecules, including the building blocks of DNA.
Charles Molnar and Jane Gair, CC BY-SA

If phosphorus is so critical to life, how did early biology predating cells first use it?

Today, biological organisms are able to make use of phosphates through proteins – molecular machines that regulate all aspects of life. By binding to proteins, phosphates regulate metabolism and cellular communication, and they serve as a source of cellular energy.

Further, the process of phosphorylation, or adding a phosphate group to a protein, is ubiquitous in biology and allows proteins to perform functions their individual building blocks cannot. Without proteins, the existence of organisms such as bacteria and humans may not be possible.

Given how essential phosphorus is to life, scientists hypothesize that phosphate binding was among the first biological functions to emerge on Earth. In fact, current evidence suggests that the first phosphate-binding proteins are truly ancient – even older than the last universal common ancestor, the hypothetical mother cell to all life on Earth that existed around 4 billion years ago.

A mysterious phosphate-binding fossil

One family of phosphate-binding proteins, called the P-loop NTPases, regulates everything from the communication between cells to the storage of energy and is found across the tree of life. Because P-loop NTPases are among the most ancient protein families, analyzing their properties can provide key insights into both the emergence of proteins and how primitive life used phosphates.

Although P-loop NTPases are diverse in structure, they share a common motif called a P-loop. This component binds to phosphate by wrapping a nest of amino acids – the building blocks that make up proteins – around the molecule. Every known organism has multiple families of P-loop NTPase, which makes the P-loop an excellent example of a molecular fossil that can provide clues about the evolution of life. Our crude analysis of the human genome estimates that humans have about 5,000 copies of P-loops.

When part of a larger protein structure, the P-loop folds like origami into a shape that is ideal for hugging a phosphate molecule. These nests are extremely similar to each other, even when the surrounding proteins are only distantly related in function. A landmark study in 2012 argued that even if the P-loop nest is extracted from a protein, it can still bind to phosphate. In other words, the ability of a P-loop to form a nest is determined by its interactions with phosphate, not its protein scaffold.

This study provided the first evidence that some forms of the P-loop sequence could have functioned billions of years ago, even before the emergence of large, complex proteins. If true, this implies that P-loop nests may have seeded the emergence and evolution of many of the phosphate-binding proteins seen today.

Interrogating the history of the P-loop

The pioneer of bioinformatics, Margaret Oakley Dayhoff, hypothesized in 1966 that the large collection of big proteins seen today arose from small peptides that were duplicated and fused over long periods of time. Although P-loops may have evolved in a different way, Dayhoff’s realization was the first to clarify how complex forms could have arisen from much simpler ones.

Inspired by Dayhoff’s hypothesis, we sought to interrogate the role that simple P-loops may have played in the evolution of the complex proteins key to life. Our findings challenge what’s currently known about these molecular fossils.

Diagram showing the evolution of amino acids to oligopeptides to complex proteins
The Dayhoff hypothesis proposed that large, complex proteins arose from the duplication and merging of smaller, simpler peptides over time.
Merski et al./Biomolecules, CC BY-SA

Using computer models, we compared a range of P-loops from the P-loop NTPase family to a control group made of the same amino acids but in a different order. While these control loops are also found in proteins, they do not form nests.

Although the P-loops and the control loops are very different in their nest-forming ability, we found that both are able to form transient nests when embedded in proteins. This meant that, contrary to popular belief, the amino acid sequences of P-loops aren't special in their ability to form nests – as would be expected if they alone were the seeds for many modern proteins.

A fossil eroded over time

Our work strongly suggests that while the P-loop is a molecular fossil, the true nature of its form billions of years ago may have been eroded by the sands of time.

For example, when we repeated our simulations in a different solvent – specifically methanol – we found that P-loops situated in their parent proteins were able to regain some of their ability to form nests. This doesn’t mean that being in methanol drove the first proteins with P-loops to form the nests critical for life. But it does emphasize the importance of considering the surrounding environment when studying peptides and proteins.

Just as archaeologists know to be careful in how they interpret physical fossils, historians of protein evolution could take similar care in their interpretation of molecular fossils. Our results complicate the current understanding of early protein evolution and, consequently, some aspects of the origins of life.

In resetting the field’s broader understanding of how these crucial proteins emerged, scientists are poised to start rewriting our own evolutionary history on this planet.

The Conversation

Caroline Lynn Kamerlin receives funding from the NASA Exobiology program.

Liam Longo receives funding from the NASA Exobiology program.

ref. Molecular ‘fossils’ offer microscopic clues to the origins of life – but take care interpreting them – https://theconversation.com/molecular-fossils-offer-microscopic-clues-to-the-origins-of-life-but-they-take-care-to-interpret-259271

Identifying as a ‘STEM person’ makes you more likely to pursue a STEM job – and caregivers may unknowingly shape kids’ self-identity

Source: The Conversation – USA – By Remy Dou, Associate Professor of Teaching and Learning, University of Miami

Kids seem to get a message that STEM jobs aren’t compatible with being a primary caregiver. kali9/E+ via Getty Images

Employers in science, technology, engineering and mathematics – commonly called the STEM industries – continue to struggle to attract female applicants. In its 2024 jobs report, the National Science Board found that men outnumber women almost 3-to-1 in STEM jobs that require at least a bachelor’s degree and over 8-to-1 in STEM jobs that don’t, such as electrical, plumbing or construction work.

Despite women being just as academically prepared for many STEM roles as men, if not more so, and the fact that STEM jobs offer higher salaries and greater job security than non-STEM jobs, men continue to dominate this sector of the workforce.

I am a social scientist who studies the relationship between education, identity and science, and since 2019, I’ve led the Talking Science research and development group. One question we’ve sought to answer is why employers continue to struggle recruiting talented women to the STEM workforce.

Our team recently carried out a study where we discovered that how caregivers, especially mothers, talk about STEM topics may significantly shape their children’s interest in STEM careers.

Are you a math person?

As a researcher, whenever I give a public talk I like to ask the audience, “Who here is not a math person?” Without fail, several hands shoot up faster than if I had asked, “Who wants free money?”

It turns out that most people are well aware of their own relationship to STEM fields and may see themselves as a math, science or “STEM” person, or, commonly, not a STEM person. Researchers like me call this kind of self-identification a “STEM identity,” and almost everyone has one. Although any given person can have a very high STEM identity or a very low one, most individuals fall somewhere in between.

Having a high STEM identity strongly predicts whether a student will choose to pursue a career in STEM. Research shows that if children don’t develop a high STEM identity by eighth grade, they are unlikely to ever pursue a STEM career.

This finding raises the question: What childhood experiences shape children’s STEM identities?

Individuals come to identify with different groups by recognizing characteristics they share with members of those groups. In many cases, people learn about the characteristics of a group through direct experience. For example, elementary-age children often see teaching as a female occupation when they encounter mostly female teachers at their school. Most children, however, never spend enough time with a scientist to form a stereotype directly.

Children learn most of what they know about STEM professionals indirectly through depictions of scientists in their social environment. Once children have formed a stereotype in their minds, they then compare themselves to these stereotypes to determine whether they are, or could be, a STEM person.

In the United States, five decades of the “draw-a-scientist” studies reveal that children asked to depict scientists overwhelmingly draw them as male – illustrating a persistent stereotype linking science and masculinity. While a growing body of research shows that in recent years gender-based stereotypes of STEM workers have decreased significantly, STEM workforce employment patterns contradict this finding.

A missing explanation?

Since social stereotypes about scientists are becoming less gender-biased, our team realized that something else must be causing children to carry male-biased views of STEM into young adulthood. The Talking Science team believed that understanding why some women see themselves as STEM people and want to obtain STEM jobs held the key to understanding the gap between these weakening stereotypes and the persistent lack of women in STEM.

To understand this phenomenon more deeply, our team interviewed 20 college students, 13 of whom identified as female. We intentionally selected these students because of their positive STEM identities and enrollment in college STEM programs.

During 60-to-90-minute interviews, we asked participants to list the various people who positively or negatively shaped their academic and professional interests. We then asked students to label each of them as either a “STEM person,” “not a STEM person” or somewhere in between. Finally, we invited each student to explain why they assigned each label.

The students mentioned 102 individuals – including parents, aunts, siblings, friends and teachers – as influential in shaping their STEM identities. Our team then assigned a gender to these individuals based on pronouns and other descriptors the interviewees used.

A gender gap clearly emerged. Women made up only about 40% of those described as STEM people but 70% of the individuals described as not STEM people. This latter group almost always included our interviewees' mothers.

man and boy working with tools on a robot toy
Among those whom students named as influential in shaping their own STEM identity, the majority were male.
athima tongloom/Moment via Getty Images

Updating stereotypes about STEM workers

When first examining the data, we assumed that college students didn’t recognize their mothers as STEM people because of gender stereotypes. Some students were reluctant to describe their mothers as STEM people even when both parents worked in STEM professions – in one case, both parents even held the same college STEM degree.

After closer examination, we noticed that a few students labeled their fathers as not a STEM person. These fathers shared one thing in common with mothers labeled the same way: They all played the role of primary caregiver.

Even in cases where mothers or fathers held a college degree in a STEM field, students consistently diminished the STEM identity of the parent who took on the bulk of the child-rearing responsibilities. As a result, we recognized that something other than gender contributed to students’ perceptions of their parents’ STEM identities.

When pressed to describe why they did not see their primary caregivers as STEM people, our interviewees generally pointed to two things: failure to display STEM interests and failure to display STEM knowledge.

When asked about their parents’ STEM interests, most interviewees described parenting as an all-consuming task that doesn’t leave room for STEM. However, this view generally did not apply to both mothers and fathers, but rather to the parent taking on the role of primary caregiver.

Similarly, most students pointed to the parent who often engaged in conversations about STEM topics as more knowledgeable, and this view also tended to exclude the primary caregiver.

Why what parents demonstrate matters

Children who grow up with the expectation of becoming a primary caregiver may associate their own caregivers’ limited displays of STEM interests and knowledge as par for the course. And because the role of primary caregiver continues to be associated with women, it’s possible for some girls to grow up believing that being a committed parent and a STEM person are incompatible roles.

Of course, STEM workers have families, and many, both men and women, are primary caregivers at home. But stereotypes are hard to break. If STEM industries want to attract more women, or if parents want their daughters to grow up to become STEM professionals, then children need to see parenthood and STEM jobs as compatible.

When parents talk to their children about their STEM-related interests and share their knowledge, children are more likely to learn that they can grow up to be both a parent and a STEM person. This approach can have an outsize effect on young women who grow up with the expectation of raising a family one day.

Creating opportunities for children to encounter female role models who are in the STEM professions is vital for attracting and recruiting women to STEM fields. Our study suggests it’s also crucial for children to see scientists and engineers as parents and caregivers with children of their own.

The Conversation

Remy Dou offers pro bono consulting services to Tumble Science Podcast for Kids and Cumbre Kids.

ref. Identifying as a ‘STEM person’ makes you more likely to pursue a STEM job – and caregivers may unknowingly shape kids’ self-identity – https://theconversation.com/identifying-as-a-stem-person-makes-you-more-likely-to-pursue-a-stem-job-and-caregivers-may-unknowingly-shape-kids-self-identity-254771

Fed, under pressure to cut rates, tries to balance labor market and inflation – while avoiding dreaded stagflation

Source: The Conversation – USA (2) – By Jason Reed, Associate Teaching Professor of Finance, University of Notre Dame

Interest rates are a tricky balancing act, as Fed Chair Jerome Powell knows well. AP Photo/Alex Brandon

The Federal Reserve is in a nearly impossible spot right now.

Markets are expecting a quarter-point interest rate cut to a range of 4% to 4.25% when the Fed policy-setting committee concludes its latest meeting on Sept. 17, 2025. After all, the slowdown in the jobs market, as well as a massive revision to past figures showing close to a million fewer jobs were created than previously reported, makes a strong case for lower interest rates to shore up the economy.

But at the same time, inflation – the other component of the Fed's dual mandate – has begun to accelerate again. As tariffs squeeze consumer spending in the sectors most exposed – such as clothing and electronics – other inflationary pressures loom on the horizon.

A slowing economy or rising inflation is a circumstance that policymakers want to avoid. But as an economist and finance professor, I’m increasingly concerned about the risk that they happen at the same time – a horrible economic condition known as stagflation – and that the Fed may be too slow in responding.

Between a rock and a hard data point

The Fed has been under pressure to cut rates for some time – including from President Donald Trump.

The reason markets and the White House are so interested is that what the Fed does matters. The central bank's decisions at its near-monthly meetings help banks and other lenders determine rates on auto loans, mortgages, credit cards and more. Lower rates usually lead businesses and consumers to borrow and spend more, boosting economic activity. This also can drive up inflation.

For the better part of three years, the central bank has been focused on its generational fight against inflation. But now, with inflation down significantly from its 40-year high of 9% reached in 2022 and the jobs market sputtering, conditions finally seemed right to resume cutting rates.

The labor market has seen continued deterioration, most notably with the Bureau of Labor Statistics’ revisions to nonfarm payrolls – in effect reducing the number of jobs economists thought the U.S. gained by almost 1 million for the year ending in March 2025.

But a recent uptick in inflation has made the Fed’s call more complicated.

Over the past four months, the consumer price index has consistently ticked up, with the most recent CPI figure indicating year-over-year inflation of 2.9% – well above the Fed’s target of 2%.

Switching focus to jobs

At the Fed’s last meeting in August, Chair Jerome Powell said that the risks to the labor market now exceed the risks of inflation.

For example, for the first time since 2021, the number of unemployed people has outpaced job vacancies, as companies have moved to eliminate open positions before laying off workers.

Most compelling is the so-called U6 unemployment rate – which includes those in the regular unemployment figures and people who have stopped looking for jobs, as well as those who are working part time but are looking for full-time opportunities. That has increased over the past three months to 8.1%.

The evidence suggests that businesses are reluctant to add workers, as tariff policy and broad economic uncertainty appear to be weighing on hiring decisions.

a black-and-white photo shows classic cars and a man pushing a lawnmower in a long line on the road
The last time there was stagflation was the 1970s, which led to long lines for cars – and mowers – at the gas stations.
AP Photo

The worst of both worlds

The short-term risk here is that a quarter-point cut won’t be enough to shore up the jobs market, and it may be too late to prevent the economy from tipping into recession.

The longer-term risk is more concerning: Not only could the economy contract, but it could do so while inflation accelerates.

The last time the U.S. experienced stagflation was in the 1970s, when an oil embargo caused the price of crude to double. This drove up inflation while causing unemployment to soar and the economy to stall. Policies aimed at reducing inflation typically exacerbate slowing growth, and vice versa. In other words, there were fewer dollars to go around – and those dollars were worth a little less every day.

The pain experienced during this previous bout of stagflation convinced a generation of economists and policymakers that the condition was to be avoided at all costs.

The Fed, which has consistently shown its hand and has guided the markets toward this week’s rate cut, now has to make what seems like an impossible decision: cut rates even if doing so will add inflationary pressures.

And there are other potential headwinds for the U.S. economy. For example, it has yet to fully absorb the impact of Trump’s immigration crackdown on productivity and output due to the loss of workers. Waning consumer confidence suggests consumer spending could soon drop. And a potential federal government shutdown looms in September.

In my view, it’s clear that a cut is warranted. But will it drive up inflation? Economists like me will be watching this closely.

The Conversation

Jason Reed does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

ref. Fed, under pressure to cut rates, tries to balance labor market and inflation – while avoiding dreaded stagflation – https://theconversation.com/fed-under-pressure-to-cut-rates-tries-to-balance-labor-market-and-inflation-while-avoiding-dreaded-stagflation-265361

US women narrowed the pay gap with men by having fewer kids

Source: The Conversation – USA (2) – By Alexandra Killewald, Professor of Sociology, University of Michigan

Women typically earn less than men per hour that they work. MoMo Productions/DigitalVision via Getty Images

Women in the U.S. typically earned 85% as much as men for every hour they spent working in 2024. However, working women are faring much better than their moms and grandmothers did 40 years ago. In the mid-1980s, women were making only 65% as much as men for every hour of paid work.

Women’s wages have improved relative to what men earn in part because of gains in their education and work experience, and because women have moved into higher-paying occupations. But progress toward pay equality has stalled.

As sociologists and demographers, we wanted to know whether changes in American families might also have helped women come closer to pay equality with men. In an article published in June 2025 in Social Forces, an academic journal, we argued that this pay gap is becoming smaller in part because women are having fewer children.

Moms earn less but dads earn more

In the U.S. and elsewhere, ample evidence shows that parenthood affects men’s and women’s wages differently.

Compared to remaining childless, motherhood leads to wage losses for women. And those losses are larger when women have more kids.

By contrast, after men become fathers their wages usually rise.

Because having kids tends to push women’s wages down and men’s wages up, parenthood widens the gender pay gap.

Young girls play with their father and pet the dog sitting on his lap.
When men have kids, it doesn’t depress their wages the way it does for women.
MoMo Productions/Stone via Getty Images

Decline in birth rate plays a role

Americans are having fewer kids in general. Women, including those who don’t work outside the home, had an average of about three children by their 40s in 1980. By 2000, that average had fallen to 1.9, and it has been fairly stable since then.

To see whether changes in how many kids working American moms have affect what they earn relative to men, we analyzed data collected from a nationally representative sample of U.S. families. We tracked trends over time in the number of children that employed Americans ages 30-55 have.

We found that employees’ average number of children fell significantly between 1980 and 2000, declining from around 2.4 to around 1.8. That average stabilized after 2000; employees had an average of about 1.8 children in 2018 – the most recent year in our analysis.

At the same time, the pay that women in this age range earned per hour relative to men rose steeply. It climbed from 58% in 1980 to 69% by 1990 and then rose more gradually to 76% by 2018. That is, as people were having fewer kids, the gender pay gap got smaller. For both trends, there was rapid change in the 1980s, followed by slower change after 1990.

We next estimated whether declines in the number of children men and women have can explain the narrowing of the gender pay gap between 1980 and 2018.

We found that, even after adjusting for other factors, such as years of education, prior work experience and occupation, about 8% of the decline in the gender pay gap can be explained by the lower number of children working women and men are having.

Next, we showed that the number of children American employees had declined faster in the 1980s than later on. That slowdown coincided with a deceleration of women’s gains in pay relative to men. Once the average number of children that U.S. employees had stabilized around 2000, so did women’s progress toward earning as much as men.

Questions about the future of US fertility

U.S. scholars and policymakers are debating whether and why Americans are having fewer children today than one or two decades ago, and what the government should do about it.

We agree that these are important questions.

Our research shows that any future changes in how many children Americans have are very likely to affect how quickly women and men reach pay equality. But it’s not inevitable.

The number of children Americans have affects the gender pay gap only because parenthood decreases women’s wages while increasing men’s wages. As long as these unequal effects of parenthood on what men and women earn persist, they will continue to act as a brake on women’s progress toward equal pay.

The Conversation

The authors do not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and have disclosed no relevant affiliations beyond their academic appointment.

ref. US women narrowed the pay gap with men by having fewer kids – https://theconversation.com/us-women-narrowed-the-pay-gap-with-men-by-having-fewer-kids-261811

Does anyone go to prison for federal mortgage fraud? Not many, the numbers suggest

Source: The Conversation – USA (2) – By Jay L. Zagorsky, Associate Professor Questrom School of Business, Boston University

Go directly to jail? Not quite. Sergey Chayko/Getty Images Plus

Mortgage fraud is back in the news. Lisa Cook, a Federal Reserve governor, is being investigated by the Department of Justice for allegedly making false statements when applying for a mortgage. Members of Donald Trump's Cabinet are accused of similar wrongdoing. Could any of these people go to prison?

Mortgage fraud is not a new problem. Subprime mortgage fraud fueled the 2008 financial meltdown, when large numbers of very risky mortgages defaulted. Mortgage fraud was also a key feature of the savings and loan crisis in the 1980s.

Mortgage applications are very long, so there’s plenty of opportunity to make mistakes. Plus, they require borrowers to declare that everything is “true, accurate, and complete.” Misrepresentation can trigger potentially large civil and criminal penalties.

As a business school professor, I was curious how many people are convicted of mortgage fraud today. After all, relatively few people went to jail for fraudulent loans back in 2008. Since most mortgage fraud violates federal law, I looked at more than a decade of federal conviction data. What I found was clear: Almost no one has gone to federal prison recently for lying on a mortgage application.

What is mortgage fraud?

Mortgage fraud is when someone intentionally misrepresents facts in order to obtain a property loan. People can lie about many things on a mortgage application, such as their income, assets or employment status, or whether they will occupy the home being purchased or rent it out.

Being caught lying to get a mortgage can be costly. The maximum federal sentence is 30 years, with fines of up to US$1 million. Because more than a quarter of all mortgages are guaranteed by federal agencies, and many are acquired by government-sponsored enterprises like Freddie Mac and Fannie Mae, most mortgage fraud is a federal crime.

However, just because there are laws on the books doesn’t mean they’re enforced. For example, I work in Boston, where for years jaywalking has been illegal – but as any visitor quickly notices, no one pays any attention to this rule.

How many people are convicted?

The U.S. Sentencing Commission provides detailed data on every person convicted of federal crimes since 2013. The database is large, since federal courts convict almost 70,000 people each year.

However, very few people are convicted of federal mortgage fraud. Just 38 people in the country were sentenced for such crimes in 2024, and among that small group, four of the convicted got no prison time. A year earlier, just 34 people were convicted and seven avoided prison.

Over the past dozen years, fewer than 3,000 people were convicted of federal mortgage fraud, and the number of people sentenced fell steadily each year.

Three thousand convictions is a tiny fraction of the mortgages issued over that period. The Consumer Financial Protection Bureau estimates that almost 100 million new mortgage loans were written to purchase or refinance a home over the past 12 years. For those who like precision, 3,000 is only 0.003% of that total.
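That percentage is easy to verify with a back-of-the-envelope calculation. The sketch below simply plugs in the article’s own figures (roughly 3,000 convictions against roughly 100 million loans):

```python
# Quick check of the conviction rate cited above, using the article's figures.
convictions = 3_000          # federal mortgage fraud convictions, ~2013-2024
mortgage_loans = 100_000_000  # new mortgage loans over the same period (CFPB estimate)

rate_pct = convictions / mortgage_loans * 100
print(f"{rate_pct:.3f}%")  # prints 0.003%
```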

The Sentencing Commission’s files also offer insight into who gets convicted of mortgage fraud. Three-quarters were men. More than 90% were U.S. citizens. The typical person convicted of mortgage fraud is a man in his late 40s with an associate degree, the data suggests.

The real penalty

While the maximum penalty is 30 years, almost no one serves that long a sentence. In 2024, the maximum sentence handed out was just 10 years. Since 2013, 15% of those convicted got no jail time. The average sentence for people who did get jail time was 21 months, which is less than two years behind bars.

Fines are also much lighter in practice than the maximum $1 million penalty. In 2024, the maximum fine handed down was a quarter-million dollars. Since 2013, the average person convicted of mortgage fraud paid a fine of less than $6,000, and over half of those convicted paid no fine at all.

Not paying a fine, or paying only a small one, doesn’t mean there’s no financial penalty. The courts required most of those convicted to make restitution. In 2024, half of all people convicted had to pay at least a half-million dollars to reimburse their victims, such as lending companies. Over the dozen years I looked at, the average person convicted paid $2 million in restitution for their misdeeds.

More lightning strikes than convictions

It’s impossible to know how common mortgage fraud really is. Some mortgage applications are rechecked in a “post-closing audit.” However, these audits happen within 90 days after the mortgage money is disbursed. Beyond that window, if a loan is paid back on time and without problems, there’s little incentive for a bank or mortgage service provider to recheck an applicant’s information.

What is clear is that while millions of mortgages are written each year, only a tiny fraction of mortgage recipients go to jail for fraud. One way to put this into perspective is to compare it with the National Weather Service estimate that approximately 270 people are struck by lightning in the U.S. each year. Last year, lightning hit over seven times more people than the federal government convicted of mortgage fraud.
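The lightning comparison checks out arithmetically. Using the article’s figures (270 lightning strikes per year versus 38 convictions in 2024):

```python
# Ratio of lightning-strike victims (NWS estimate) to 2024 federal
# mortgage fraud convictions, both figures taken from the article.
struck_by_lightning = 270
fraud_convictions = 38  # people sentenced for mortgage fraud in 2024

ratio = struck_by_lightning / fraud_convictions
print(f"{ratio:.1f}x")  # prints 7.1x, i.e. "over seven times more"
```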

Years ago, I filled in a mortgage application to buy a home. I was consumed with dread wondering if any application mistake would result in my being sent to jail. After looking at the mortgage fraud conviction data, I should have been more worried about being hit by lightning.

The Conversation

Jay L. Zagorsky does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

ref. Does anyone go to prison for federal mortgage fraud? Not many, the numbers suggest – https://theconversation.com/does-anyone-go-to-prison-for-federal-mortgage-fraud-not-many-the-numbers-suggest-265242

After Charlie Kirk’s murder, the US might seem hopelessly divided – is there any way forward?

Source: The Conversation – USA (2) – By Lee Bebout, Professor of English, Arizona State University

Many people think the U.S. is at an inflection point. StudioM1/iStock via Getty Images

Shortly following the fatal shooting of conservative activist Charlie Kirk, many politicians and pundits were quick to highlight the importance of civil discourse.

Utah Gov. Spencer Cox called for an “off-ramp” to political hostilities, while California Gov. Gavin Newsom released a statement condemning political violence. He lauded Kirk’s “commitment to debate,” adding, “The best way to honor Charlie’s memory is to continue his work: engage with each other, across ideology, through spirited discourse.” Political commentator Ezra Klein wrote, “You can dislike much of what Kirk believed and the following statement is still true: Kirk was practicing politics in exactly the right way.”

With so many Americans consuming political content via siloed social media feeds and awash in algorithms that stoke outrage, these ideals may seem quaint, if not impossible.

Clearly, murder is a no-go. But what does it mean to practice politics “the right way”? How can people engage “across ideology” in a “spirited” way?

Well, one way not to practice politics the right way is to prevent the other side from having a voice of authority. Since 2016, the organization Kirk co-founded, Turning Point USA, has hosted the Professor Watchlist. The online database has generated harassment campaigns against professors, leading to calls for firings, hate mail and death threats. To be sure, the left has not been without its own excesses of harassment in recent years.

Kirk was also known for going to college campuses and speaking to students: entering the lion’s den and affably challenging audiences to “change my mind.”

To me, the impulse to shut down the other side, combined with the “change my mind approach” to debate, has only exacerbated political polarization and entrenchment. Instead, I propose a few different ways of thinking about conversations with people whose views differ from your own.

The fantasy of swiftly changing minds

In my forthcoming book, “Rules for Reactionaries: How to Maintain Inequality and Stop Social Justice,” I explore the language strategies used to advance white supremacy and anti-feminism across U.S. politics and culture.

Deliberative democracy is the idea that decision-making and governance are arrived at through thoughtful, reasoned and respectful dialogue. This may take the shape of debates in Congress or robust questioning in town halls. But deliberative democracy also shapes the way all neighbors or citizens treat each other, whether on the street or at the dinner table.

I contend that a big stumbling block that prevents the U.S. from tackling its biggest problems is how Americans conceptualize deliberative democracy: There’s a fantasy that people’s minds can be easily changed, if only they’re given certain information or hear certain arguments.

In the 1990s, this was epitomized through former President Bill Clinton’s Initiative on Race, a program that he framed as a vehicle for social and political transformation. Clinton believed that an advisory board of experts could foster a meaningful national dialogue and produce necessary healing.

In response, conservative political figures objected both to the need for a conversation in the first place and to the makeup of the committee leading it.

By the time Clinton’s second term ended, the initiative had quietly disappeared, only to be mentioned in passing in Clinton’s memoir. Yet with each subsequent racial flash point, from the arrest of Henry Louis Gates in 2009 to the murder of George Floyd, calls for a national conversation resurfaced. And race remains a politically and culturally salient issue.

Similarly, many Americans view friends, relatives and colleagues as targets for conversion. Because of the nature of my research, I often get a version of this question from my students: “How do you change someone’s mind if they say they’re a socialist?” Or they may frame it as, “I’ve got Thanksgiving with my family coming up, and my Uncle Johnny is so transphobic. How do I convince him to support trans rights?”

Cultural theorist Lauren Berlant would describe these encounters as moments of cruel optimism. There’s the belief that what you’re about to do is good and worthy. But time and again, you’re met with feelings of futility and frustration.

When debating politics, many people crave a chance to engage with someone they disagree with. There’s the hope of changing hearts and minds. But few minds – if any – change that quickly, and approaching these conversations as small windows of opportunity ends up being their downfall.

Opening minds instead of changing them

There are more fruitful approaches to conversation than merely trying to best someone in an argument by deploying buzzwords or “gotcha!” moments.

Rather than trying to immediately change someone’s mind, what if you entered a conversation with the goal of simply planting seeds? This approach transforms the dialogue from an attempted conversion into a legitimate conversation, wherein you’re merely offering your partner something to consider after the fact.

Another strategy involves remembering that conversations often have multiple audiences.

Consider the Thanksgiving dinner with Uncle Johnny. What if, instead of focusing on trying to convert him, the speaker recognized that there were other listeners at the table? Perhaps they could rethink their encounter not as converting an opponent, but as modeling to relatives how to have a conversation about one’s values with a loved one whom they vehemently disagree with. Or perhaps the speaker could recognize that a cousin at the table may be closeted, and take it upon themselves to model how to push back against transphobia.

In both cases, the conversion of Uncle Johnny ceases to be the objective. Civic dialogue and persuasion remain.

Change is slow but never futile

If the U.S. is going to heal its civic life through dialogue, I think it will require Americans to not just speak with those they disagree with, but to listen to them as well.

Krista Ratcliffe, a scholar of rhetoric at Arizona State University, has written about her concept of “rhetorical listening.” Listeners, she argues, must not simply be attuned to the words a speaker says, but also to the life experiences and ideologies that shape those words.

Rhetorical listening means avoiding the urge to one-up the opponent or convert the unwashed masses. Instead, you’re entering into dialogue from a position of curiosity, with a willingness to learn and grow.

Many people believe that the U.S. is at an inflection point. Will families and friendships continue to be torn apart? Will greater political polarization lead to more violence? Often it feels hopeless.

Like Sisyphus, many Americans probably feel like they continue to push a boulder up a hill, only for it to roll down the other side. The error would be for Americans to be surprised when the boulder rolls back down – shocked that there was no progress and that everyone has to start over again.

While the Sisyphean task of deliberative democracy requires that citizens push the boulder day in and day out, they should also recognize that the weight of the collectively pushed boulder gradually and imperceptibly alters the terrain.

Moreover, as the French philosopher Albert Camus once wrote, it’s important to “imagine Sisyphus happy” – to continue to seize what joy can be had as this hard work plods along.

The Conversation

Lee Bebout does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

ref. After Charlie Kirk’s murder, the US might seem hopelessly divided – is there any way forward? – https://theconversation.com/after-charlie-kirks-murder-the-us-might-seem-hopelessly-divided-is-there-any-way-forward-265248