Is offshore wind energy compatible with protecting the ocean? The Mediterranean case

Source: The Conversation – France – By Paul Wawrzynkowski, PhD candidate, Universitat de Barcelona

Bjoern Wylezich/Shutterstock

The ocean, an engine of life and a climate regulator, faces a crossroads. The urgency of decarbonizing the economy is driving a massive rollout of renewable energy, including marine sources such as fixed and floating wind farms. At the same time, the Kunming-Montreal Global Biodiversity Framework requires protecting at least 30% of the ocean by 2030. This apparent collision of goals poses a critical challenge: can we achieve the energy transition without compromising the ocean's already vulnerable biodiversity?

The rise of marine energy

Climate change is one of the greatest challenges of our century, and renewable energy is key to mitigating it by cutting emissions from fossil sources. Marine energy, led by offshore wind, plays a growing role here, with emerging potential in harvesting energy from waves (wave power) and tides (tidal power).

The European Union has made offshore wind one of the pillars of its strategy to decarbonize the economy. The European Green Deal and the Offshore Renewable Energy Strategy foresee a spectacular expansion of this technology: from 29 gigawatts (GW) in 2019 to 300 GW in 2050.

This tenfold growth in barely three decades is essential to reach climate neutrality by 2050, while also driving innovation, employment and energy security in Europe.

A shield for the ocean: “30×30”

But this race for clean energy coincides with another global emergency: the biodiversity crisis. Human activities have already altered 66% of the ocean's surface, compromising its ecosystems. The loss of marine species and habitats is accelerating through habitat destruction, pollution, overexploitation and the impacts of climate change.

In response, the Kunming-Montreal Global Biodiversity Framework (2022) is a historic agreement. One of its targets, known as “30×30”, commits countries to protecting at least 30% of marine areas by 2030. An ambitious goal, given that less than 10% of the ocean currently has formal protection.




Read more:
Protecting 30% of the oceans is not enough


Creating marine protected areas is crucial not only to safeguard biodiversity but also to secure the vital ecosystem services the ocean provides: climate regulation, food supply and carbon absorption.

For example, protecting biodiversity- and carbon-rich ecosystems, such as Posidonia oceanica meadows or undisturbed marine sediments, delivers joint benefits for climate mitigation and adaptation by absorbing and storing atmospheric carbon. These nature-based solutions are among the most immediately applicable strategies for tackling both crises.




Read more:
Seagrass meadows store more CO2 than forests: we need to protect them


Conflicts and challenges

The dilemma arises when trying to meet both goals at once. The massive deployment of marine renewables generates environmental impacts and spatial conflicts that can collide head-on with biodiversity conservation.

The Mediterranean Sea, home to more than 17,000 species (28% of them endemic), is among the most vulnerable and fragmented seas, already under pressure from pollution, overfishing, tourism and shipping traffic. Adding thousands of energy installations to such a sensitive space intensifies these problems, industrializing the marine and coastal space in many areas.

The clash stems mainly from competition for space: zones with high energy potential (strong wind or waves) often coincide with areas of high ecological value. There are also direct impacts on marine fauna (noise, collisions, vibrations, etc.) and the alteration or destruction of marine habitats.

Finally, major unknowns remain about the real impact of large-scale projects on ecosystems. Their cumulative, long-term effects on crucial aspects such as atmospheric and ocean currents or the basic productivity of the seas are largely unknown or insufficiently studied. In the face of such uncertainty, prudence demands that we apply the precautionary principle.

For now, there are no wind installations in the Mediterranean, only a pilot project in France with three turbines, although various projects remain on paper. In a sea already at its limit, these new pressures raise serious doubts about whether the two goals can be reconciled without careful planning.




Read more:
The risks of wind energy for marine ecosystems


Toward sustainable coexistence

The good news is that decarbonizing our economy and protecting the oceans need not be incompatible; in fact, the two goals reinforce each other. The key lies in intelligent planning of the marine space.

The fundamental tool for achieving this is marine spatial planning (MSP). This process organizes the uses of the sea (energy, fishing and aquaculture, transport, tourism, conservation) to identify high-ecological-value zones to protect and areas suitable for energy development, minimizing conflicts. It is a road map for integrated, multifunctional management.

The objective should be a net positive impact, so that renewable energy projects not only minimize harm but also contribute to the environmental improvement of ecosystems. This is achieved through effective mitigation of negative effects, compensation and ecological restoration.

Finally, collaboration and dialogue among governments, industry, fishers, scientists and conservationists are indispensable. Taking local communities (fishers, the tourism sector, coastal residents) into account is key to a fair and equitable energy transition. Only by working together will we find innovative solutions that balance renewable energy with the protection of biodiversity and ocean ecosystem services.




Integrating decarbonization and conservation

The climate crisis and biodiversity loss are two sides of the same coin; tackling them in isolation would be a mistake. Decarbonizing our economy and protecting marine biodiversity must not merely coexist, they must reinforce each other.

That is why it is crucial that the expansion of marine renewables proceeds with a holistic, proactive vision, prioritizing ecosystem health and integrating nature-based solutions from the outset.

We can and must harness the ocean's immense energy potential without compromising its health or the well-being of local communities. The future demands a symbiosis between technological innovation and science, which provides knowledge of local ecological and socioeconomic impacts.

Integrating climate change mitigation with biodiversity conservation in our marine strategies is key to achieving sustainable marine energy, that is, to a true blue economy.

The Conversation

Josep Lloret is a Research Scientist at the CSIC. This article was produced as part of the BIOPAÍS project, funded by the Fundación Biodiversidad of the Ministerio para la Transición Ecológica y el Reto Demográfico, within the framework of the Plan de Recuperación, Transformación y Resiliencia (PRTR), with the support of the European Union – NextGenerationEU.

Paul Wawrzynkowski does not receive a salary, work as a consultant, own shares in or receive funding from any company or organization that would benefit from this article, and has declared no relevant affiliations beyond the academic appointment cited.

ref. Is offshore wind energy compatible with protecting the ocean? The Mediterranean case – https://theconversation.com/es-compatible-la-energia-eolica-marina-con-la-proteccion-del-oceano-el-caso-mediterraneo-258538

On social media, health hoaxes travel much faster than the facts

Source: The Conversation – France – By Ivan Herrera Peco, Professor and Researcher in Health, Universidad Camilo José Cela

Health professionals can counter misinformation by delivering clear, approachable messages. Nattakorn_Maneerat/Shutterstock

Health knowledge has undergone major changes, moving from the doctor as the sole source of information to the immediacy of internet access. We trust the web to find an answer that matches our expectations, whether in a short video, a viral tweet or an Instagram story.

However, this immediacy has a dark, sometimes sinister, side when it helps spread unverified or outright malicious information. Bear in mind that hoaxes travel much faster than facts.

When we talk about health misinformation, we mean false, incomplete or misleading messages circulating online. Sometimes they are deliberate, driven by political or economic interests; other times they arise from misunderstandings or exploit their audience's lack of digital literacy.

Serious consequences

In any case, they can have very serious consequences: some people abandon treatments, delay seeing a doctor, self-medicate or try dangerous remedies because of something they saw in a video.

For example, in August 2021, calls to U.S. poison control centers for ivermectin poisoning rose more than 200% in just four weeks, after videos went viral presenting it as a definitive treatment for the coronavirus.

Vaccination is a favorite target of misinformation. During the pandemic, more than 25% of the most-viewed COVID videos on YouTube were found to contain false information.

Another telling example is videos about so-called real food, which, as one study reveals, go as far as proposing diet as the only “cure”. Here too, misinformation spreads faster than the debunkings.

Mental health in the crosshairs

Mental health deserves a chapter of its own. Although social networks can serve as support spaces that improve access to self-care tools and to contact with others in a safe environment, they are also a breeding ground for misconceptions and stigma.

This is very tangible with conditions such as schizophrenia. YouTube is full of convincing-sounding testimonies, but beware: an analysis of 100 Spanish-language videos found that only 39% cite actual studies, and the average reliability score barely scrapes a pass.

And a look at TikTok or X shows that half of the posts associate schizophrenia with “danger”, dramatize homicidal voices or mock people diagnosed with the condition.

Algorithms feed the hoaxes

In fact, what goes viral is rarely the most reliable content, and often quite the opposite. On top of that, recommendation algorithms are designed to show us content we already agree with, sidelining other, more trustworthy material.

In our observations, what sweeps social media is not the most rigorous content but whatever excites, shocks or stirs controversy. For instance, promoters of “miracle juices” often brandish preliminary studies taken out of context.

A telling case is a trial published in 2018 that circulates again and again. According to its conclusions, 20 people with type 2 diabetes who took two milligrams of noni juice per kilogram of body weight per day for eight weeks achieved modest but significant reductions in blood glucose. The article itself stresses that it was a pilot study, with no placebo group and only short follow-up. Not enough, therefore, to claim that a juice cures diabetes.

Such nuances rarely get shared. A recent study analyzed how detox diets are discussed online and found that fewer than 10% of the messages attempt to debunk their supposed benefits.

Many of these posts invoke words like “toxins” without properly explaining what they mean, which breeds confusion. While advocates use the term to justify treatments without evidence, health experts criticize it for lacking any scientific basis.

This muddling of concepts leaves many people unable to distinguish reliable information from pseudoscience. Meanwhile, confusing or false videos and messages rack up millions of views before anyone debunks them. By the time professionals try to correct the record, the hoax has taken root and its spread is enormous and almost unstoppable.

Self-proclaimed experts without training

Moreover, much of this content comes not from experts but from people with no health training. Some are just chasing fame; others, money. The result? People trusting dangerous advice because it sounds easy and is well packaged.

The numbers make it plain. An analysis of 676 posts from Australia's most influential nutrition accounts found that 44.7% contained errors, and that posts from brands or fitness “gurus” with no health qualifications drew 70% more engagement despite being far less rigorous than those run by nutritionists.

The irresponsibility of some celebrities also plays a part. A 2024 study assessed how certain famous figures can help spread misinformation by “reinterpreting” medical terms or even through political ideology.

In short, profitability and traffic growth take precedence over rigor. According to a revealing review of 22 experiments, colloquial language and a polished photo are enough for an audience to grant credibility, even if the author has no health-related training or qualification whatsoever.

Science takes a back seat

Health professionals are on social media too, but their voices do not carry as far. Why? Because they lack training in digital communication and the support they need from their institutions. Many hospitals still do not understand how social networks work or what kind of content connects with people.

And of course, medical messages tend to be long, cold or highly technical, while hoaxes travel in a catchy phrase, with background music and emotional testimonies. They need no real scientific evidence behind them because they appeal to emotions or to rhetoric that sounds scientific but is not.

It is not that the truth is missing; what is missing is knowing how to tell it.

The importance of education and critical thinking

We cannot let hoaxes occupy the gap that science has failed to fill. It is not about competing with lies but about offering better information: clear, approachable and human.

Learning to spot harmful information is essential. Health education and critical thinking must start from an early age. And professionals, besides healing, must be prepared to inform.

The Conversation

The signatories do not receive a salary, work as consultants, own shares in or receive funding from any company or organization that would benefit from this article, and have declared no relevant affiliations beyond the academic appointments cited above.

ref. On social media, health hoaxes travel much faster than the facts – https://theconversation.com/en-las-redes-sociales-los-bulos-sobre-salud-corren-mucho-mas-rapido-que-los-hechos-255893

Algorithms in the courts: a help or a danger?

Source: The Conversation – France – By Gema Marcilla Córdoba, Associate Professor of Philosophy of Law, Universidad de Castilla-La Mancha

Alexander Limbach/Shutterstock

What if the next ruling that decides our future were written by a program so complex that not even its creators can fully explain how it reasons?

Law has always gone hand in hand with technology. Many legal milestones were made possible by earlier technological ones: writing enabled the publicity of legal norms; the printing press multiplied access to legal sources; computerization and digitization have shaped the legal professions since the end of the last century. And today, artificial intelligence, above all large language models, takes the stand.

This new “collaborator” is powerful: it summarizes thousands of pages in seconds and identifies decision patterns in legislation, case law and doctrine with extraordinary diligence and speed. It can also review evidence, for instance recorded testimony, spotting nuances that escape the human eye.

But its entry into the courtroom is not without consequences.

The judiciary, alongside the legislature, is a vital organ of the rule of law. Its legitimacy rests on deciding individual cases exclusively on the basis of legal norms (pre-established rules). “Judges are the mouth that pronounces the words of the law; inanimate beings who can moderate neither its force nor its rigor,” wrote Montesquieu in 1748 in The Spirit of the Laws.

Inevitable human subjectivity

Despite the ideal of the neutral judge, real judges inevitably suffer from subjectivity. We are not talking here about the politicization of justice, much less about judicial misconduct. However virtuous they may be, judges interpret legal reality through their own cognitive schemas.

Jurists, both theorists and practitioners, have striven to preserve the principle of legality, that is, to safeguard judges' impartiality and independence, even though this goal can never be fully achieved because of irremediable human subjectivity.

It is therefore paradoxical that now, when thanks to AI we are ever closer to a truly inanimate and impassive being (an algorithm) being able to dispense justice, the European Union does not hesitate to classify as “high risk” any AI system that helps judges interpret facts or norms (under Article 6 of the Artificial Intelligence Regulation 2024/1689).

The message is emphatic: technology may assist, but never replace, the decision of a human judge.

High-risk systems

AI generates as much enthusiasm as it does unease and doubt. As early as 2014, the Swedish philosopher Nick Bostrom stressed that the automation of learning, a key aspect of AI, could lead to an “intelligence explosion”.

“Artificial superintelligence” would entail not only new conceptual schemes incomprehensible to humans, but also unconstrained algorithms, detached from our values, so that machines could direct all their actions toward goals that are absurd or harmful to humanity.

Even without invoking a long-term, futuristic scenario like Bostrom's, the present risks are palpable. Where judicial AI is concerned, models often “hallucinate”, even inventing rulings by nonexistent courts. They also operate as “black boxes”, with no traceability or explainability of their answers. And they interact with their users without leaving any record of the legal prompting – the natural-language queries entered – that has taken place.

They also inherit ideological biases from the data they were trained on. That is, they shape their answers on the basis of learned, unfounded premises or patterns related to race, gender, sexual orientation, purchasing power, debt levels and so on.

Alongside the problems these biases pose, statistical bias is inherent to AI, understood as the prevalence of the data that carry the most statistical weight. In law, this can amount to a kind of “cloning” of past case law, freezing the evolution of how norms are interpreted.

The last word belongs to the judge

The solution? Design that consciously and responsibly addresses these problems, plus human oversight. This requires a multidisciplinary framework of close collaboration between AI experts and legal experts.

The AI Regulation demands exhaustive technical documentation, bias audits and the ability to retrace every step of the algorithmic legal reasoning. The 2018 charter of the European Commission for the Efficiency of Justice (CEPEJ) and Opinion XXIV of the Ibero-American Commission on Judicial Ethics remind us that every recommendation generated by an automated system must be critically reviewed by the judge, who retains the last word and the duty to give reasons for the decision.

None of this means demonizing technology. Well governed, artificial intelligence frees up time that is sorely needed for judicial protection to be truly effective. But deployed without safeguards, the promise of effectiveness can mutate into an algorithmic justice that is “mechanistic” in the worst sense: being a machine would not make it objective, but rather random and opaque in its choice of the norms it applies.

A synthetic brain that never tires is not preferable to a flesh-and-blood judge with experience of the complexities of legal practice. Rather, today's judge must be an expert both in the law and in the algorithm that helps apply it.

The Conversation

Gema Marcilla Córdoba does not receive a salary, work as a consultant, own shares in or receive funding from any company or organization that would benefit from this article, and has declared no relevant affiliations beyond the academic appointment cited.

ref. Algorithms in the courts: a help or a danger? – https://theconversation.com/algoritmos-en-los-juzgados-una-ayuda-o-un-peligro-257746

Using TikTok could be making you more politically polarized, new study finds

Source: The Conversation – USA (2) – By Zicheng Cheng, Assistant Professor of Mass Communications, University of Arizona

Are you in an echo chamber on TikTok? LeoPatrizi/E+ via Getty Images

People on TikTok tend to follow accounts that align with their own political beliefs, meaning the platform is creating political echo chambers among its users. These findings, from a study that my collaborators Yanlin Li and Homero Gil de Zúñiga and I published in the academic journal New Media & Society, show that people mostly hear from voices they already agree with.

We analyzed the structure of different political networks on TikTok and found that right-leaning communities are more isolated from other political groups and from mainstream news outlets. Looking at their internal structures, the right-leaning communities are more tightly connected than their left-leaning counterparts. In other words, conservative TikTok users tend to stick together. They rarely follow accounts with opposing views or mainstream media accounts. Liberal users, on the other hand, are more likely to follow a mix of accounts, including those they might disagree with.

Our study is based on a massive dataset of over 16 million TikTok videos from more than 160,000 public accounts between 2019 and 2023. We saw a spike of political TikTok videos during the 2020 U.S. presidential election. More importantly, people aren’t just passively watching political content; they’re actively creating political content themselves.

Some people are more outspoken about politics than others. We found that users with stronger political leanings and those who get more likes and comments on their videos are more motivated to keep posting. This shows the power of partisanship, but also the power of TikTok’s social rewards system. Engagement signals – likes, shares, comments – are like a fuel, encouraging users to create even more.

Why it matters

People are turning to TikTok for more than just a good laugh. A recent Pew Research Center survey shows that almost 40% of U.S. adults under 30 regularly get news on TikTok. The question becomes: what kind of news are they watching, and what does that mean for how they engage with politics?

The content on TikTok often comes from creators and influencers or digital-native media sources. The quality of this news content remains uncertain. Without access to balanced, fact-based information, people may struggle to make informed political decisions.

TikTok is not unique; social media generally fosters polarization.

Amid the debates over banning TikTok, our study highlights how TikTok can be a double-edged sword in political communication. It’s encouraging to see people participate in politics through TikTok when that’s their medium of choice. However, if a user’s network is closed and homogeneous and their expression serves as in-group validation, it may further solidify the political echo chamber.

When people are exposed to one-sided messages, it can increase hostility toward outgroups. In the long run, relying on TikTok as a source for political information might deepen people’s political views and contribute to greater polarization.

What other research is being done

Echo chambers have been widely studied on platforms like Twitter and Facebook, but similar research on TikTok is in its infancy. TikTok is drawing scrutiny, particularly its role in news production, political messaging and social movements.

TikTok has its unique format, algorithmic curation and entertainment-driven design. I believe that its function as a tool for political communication calls for closer examination.

What’s next

In 2024, the Biden/Harris and Trump campaigns joined TikTok to reach young voters. My research team is now analyzing how these political communication dynamics may have shifted during the 2024 election. Future research could use experiments to explore whether these campaign videos significantly influence voters’ perceptions and behaviors.

The Research Brief is a short take on interesting academic work.

The Conversation

Zicheng Cheng does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

ref. Using TikTok could be making you more politically polarized, new study finds – https://theconversation.com/using-tiktok-could-be-making-you-more-politically-polarized-new-study-finds-258791

Mitochondria can sense bacteria and trigger your immune system to trap them – revealing new ways to treat infections and autoimmunity 

Source: The Conversation – USA (2) – By Andrew Monteith, Assistant Professor of Microbiology, University of Tennessee

Neutrophils (yellow) eject a NET (green) to ensnare bacteria (purple). Other cells, such as red blood cells (orange), may also get trapped. CHDENK/Wikimedia Commons, CC BY-SA

Mitochondria have primarily been known as the energy-producing components of cells. But scientists are increasingly discovering that these small organelles do much more than just power cells. They are also involved in immune functions such as controlling inflammation, regulating cell death and responding to infections.

Research that my colleagues and I conducted revealed that mitochondria play another key role in your immune response: sensing bacterial activity and helping neutrophils, a type of white blood cell, trap and kill the bacteria.

For the past 16 years, my research has focused on understanding the decisions immune cells make during infection and how the breakdown of these decision-making processes causes disease. My lab's recent findings shed light on why people with autoimmune diseases such as lupus may struggle to fight infections, revealing a potential link between dysfunctional mitochondria and weakened immune defenses.

Mitochondria do so much more than just produce energy. OpenStax, CC BY-SA

The immune system’s secret weapons

Neutrophils are the most abundant type of immune cell and serve as the immune system’s first responders. One of their key defense mechanisms is releasing neutrophil extracellular traps, or NETs – weblike structures composed of DNA and antimicrobial proteins. These sticky NETs trap and neutralize invading microbes, preventing their spread in the body.

Until recently, scientists believed that NET formation was primarily triggered by cellular stress and damage. However, our study found that mitochondria can detect a specific bacterial byproduct – lactate – and use that signal to initiate NET formation.

Lactate is commonly associated with muscle fatigue in people. But in the context of bacterial infections, it plays a different role. Many bacteria release lactate as part of their own energy production. My team found that once bacteria are engulfed by a compartment of the cell called the phagosome, neutrophils can sense the presence of this lactate.

Inside the phagosome, this lactate communicates to the neutrophil that bacteria are present and that the antibacterial processes are not sufficient to kill these pathogens. When the mitochondria in neutrophil cells detect this lactate, they start signaling for the cell to get rid of the NETs that have entrapped bacteria. Once the bacteria are released outside the cell, other immune cells can kill them.

Here, a neutrophil engulfs MRSA bacteria (green).

When we blocked the mitochondria’s ability to sense lactate, neutrophils failed to produce NETs effectively. This meant bacteria were more likely to escape capture and proliferate, showing how crucial this mechanism is to immune defense. This process highlights an intricate dialogue between the bacteria’s metabolism and the host cell’s energy machinery.

What makes this finding surprising is that the mitochondria within cells are able to detect bacteria trapped in phagosomes, even though the microbes are enclosed in a separate space. Somehow, mitochondrial sensors can pick up cues from within these compartments – an impressive feat of cellular coordination.

Targeting mitochondria to fight infections

Our study is part of a growing field called immunometabolism, which explores how metabolism and immune function are deeply intertwined. Rather than viewing cellular metabolism as strictly a means to generate energy, researchers are now recognizing it as a central driver of immune decisions.

Mitochondria sit at the heart of this interaction. Their ability to sense, respond to and even shape the metabolic environment of a cell gives them a critical role in determining how and when immune responses are deployed.

For example, our findings provide a key reason why patients with a chronic autoimmune disease called systemic lupus erythematosus often suffer from recurrent infections. Mitochondria in the neutrophils of lupus patients fail to sense bacterial lactate properly. As a result, NET production is significantly reduced. This mitochondrial dysfunction could explain why lupus patients are more vulnerable to bacterial infections – even though their immune systems are constantly activated due to the disease.

This observation points to mitochondria’s central role in balancing immune responses. It connects two seemingly unrelated issues: immune overactivity, as seen in lupus, and immune weakness like increased susceptibility to infection. When mitochondria work correctly, they help neutrophils mount an effective, targeted attack on bacteria. But when mitochondria are impaired, this system breaks down.

Microscopy image of long threads extending from round blobs
Neutrophils unable to effectively produce NETs may contribute to the development of lupus.
Luz Blanco/National Institute of Arthritis and Musculoskeletal and Skin Diseases via Flickr, CC BY-NC-SA

Our discovery that mitochondria can sense bacterial lactate to trigger NET formation opens up new possibilities for treating infections. For instance, drugs that enhance mitochondrial sensing could boost NET production in people with weakened immune systems. On the flip side, for conditions where NETs contribute to tissue damage – such as in severe COVID-19 or autoimmune diseases – it might be beneficial to limit this response.

Additionally, our study raises the question of whether other immune cells use similar mechanisms to sense microbial metabolites, and whether other bacterial byproducts might serve as immune signals. Understanding these pathways in more detail could lead to new treatments that modulate immune responses more precisely, reducing collateral damage while preserving antimicrobial defenses.

Mitochondria are not just the powerhouses of the cell – they are the immune system’s watchtowers, alert to even the faintest metabolic signals of bacterial invaders. As researchers’ understanding of their roles expands, so too does our appreciation for the complexity – and adaptability – of our cellular defenses.

The Conversation

Andrew Monteith receives funding from the National Institutes of Health.

ref. Mitochondria can sense bacteria and trigger your immune system to trap them – revealing new ways to treat infections and autoimmunity  – https://theconversation.com/mitochondria-can-sense-bacteria-and-trigger-your-immune-system-to-trap-them-revealing-new-ways-to-treat-infections-and-autoimmunity-255939

The Vera C. Rubin Observatory will help astronomers investigate dark matter, continuing the legacy of its pioneering namesake

Source: The Conversation – USA (2) – By Samantha Thompson, Astronomy Curator, National Air and Space Museum, Smithsonian Institution

The Rubin Observatory is scheduled to release its first images in 2025. RubinObs/NOIRLab/SLAC/NSF/DOE/AURA/B. Quint

Everything in space – from the Earth and Sun to black holes – accounts for just 15% of all matter in the universe. The rest of the cosmos seems to be made of an invisible material astronomers call dark matter.

Astronomers know dark matter exists because its gravity affects other things, such as light. But understanding what dark matter is remains an active area of research.

With the release of its first images this month, the Vera C. Rubin Observatory has begun a 10-year mission to help unravel the mystery of dark matter. The observatory will continue the legacy of its namesake, a trailblazing astronomer who advanced our understanding of the other 85% of the universe.

As a historian of astronomy, I’ve studied how Vera Rubin’s contributions have shaped astrophysics. The observatory’s name is fitting, given that its data will soon provide scientists with a way to build on her work and shed more light on dark matter.

Wide view of the universe

From its vantage point in the Chilean Andes mountains, the Rubin Observatory will document everything visible in the southern sky. Every three nights, the observatory and its 3,200-megapixel camera will make a record of the sky.

This camera, about the size of a small car, is the largest digital camera ever built. Images will capture an area of the sky roughly 45 times the size of the full Moon. With such a large camera and wide field of view, Rubin will produce about five petabytes of data every year. That’s roughly 5,000 years’ worth of MP3 songs.
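That comparison can be sanity-checked with a few lines of arithmetic. The ~256 kbps MP3 bitrate below is an assumption chosen to make the check concrete, not a figure from the article:

```python
# Back-of-envelope check of "5 petabytes per year is roughly 5,000 years of MP3".
PETABYTE = 1e15  # bytes
data_per_year = 5 * PETABYTE

bitrate_bps = 256_000               # assumed MP3 bitrate, bits per second
bytes_per_second = bitrate_bps / 8  # 32,000 bytes of audio per second
seconds_per_year = 365 * 24 * 3600  # ~31.5 million seconds

# One year of continuous MP3 audio at this bitrate is about 1 terabyte.
mp3_bytes_per_year = bytes_per_second * seconds_per_year

years_of_mp3 = data_per_year / mp3_bytes_per_year
print(f"{years_of_mp3:,.0f} years of continuous MP3 audio")  # ≈ 5,000
```

At lower bitrates the equivalent stretches to roughly 10,000 years, so the article's figure is a reasonable order-of-magnitude estimate.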

After weeks, months and years of observations, astronomers will have a time-lapse record revealing anything that explodes, flashes or moves – such as supernovas, variable stars or asteroids. They’ll also have the largest survey of galaxies ever made. These galactic views are key to investigating dark matter.

Galaxies are the key

Deep field images from the Hubble Space Telescope, the James Webb Space Telescope and others have visually revealed the abundance of galaxies in the universe. These images are taken with a long exposure time to collect the most light, so that even very faint objects show up.

Researchers now know that those galaxies aren’t randomly distributed. Gravity and dark matter pull and guide them into a structure that resembles a spider’s web or a tub of bubbles. The Rubin Observatory will expand upon these previous galactic surveys, increasing the precision of the data and capturing billions more galaxies.

In addition to helping structure galaxies throughout the universe, dark matter also distorts the appearance of galaxies through an effect referred to as gravitational lensing.

Light travels through space in a straight line − unless it gets close to something massive. Gravity bends light’s path, which distorts the way we see it. This gravitational lensing effect provides clues that could help astronomers locate dark matter. The stronger the gravity, the bigger the bend in light’s path.
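The strength of that bend can be illustrated with the standard weak-field deflection formula from general relativity, theta = 4GM/(c^2 b). The formula and constants below are textbook values, not taken from the article:

```python
import math

G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8        # speed of light, m/s
M_SUN = 1.989e30   # solar mass, kg
R_SUN = 6.96e8     # solar radius, m

def deflection_arcsec(mass_kg, impact_param_m):
    """Weak-field light deflection angle: theta = 4GM / (c^2 * b), in arcseconds."""
    theta_rad = 4 * G * mass_kg / (c**2 * impact_param_m)
    return math.degrees(theta_rad) * 3600  # radians -> arcseconds

# A light ray grazing the Sun's edge is bent by about 1.75 arcseconds,
# the effect confirmed during the 1919 solar eclipse. Galaxy clusters,
# with far more mass (including dark matter), bend light much more strongly.
print(f"{deflection_arcsec(M_SUN, R_SUN):.2f} arcsec")  # 1.75 arcsec
```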

Many galaxies, represented as bright dots, some blurred, against a dark background.
The white galaxies seen here are bound in a cluster. The gravity from the galaxies and the dark matter bends the light from the more distant galaxies, creating contorted and magnified images of them.
NASA, ESA, CSA and STScI

Discovering dark matter

For centuries, astronomers tracked and measured the motion of planets in the solar system. They found that all the planets followed the path predicted by Newton’s laws of motion, except for Uranus. Astronomers and mathematicians reasoned that if Newton’s laws are true, there must be some missing matter – another massive object – out there tugging on Uranus. From this hypothesis, they discovered Neptune, confirming Newton’s laws.

With the ability to see fainter objects in the 1930s, astronomers began tracking the motions of galaxies.

California Institute of Technology astronomer Fritz Zwicky coined the term dark matter in 1933, after observing galaxies in the Coma Cluster. He calculated the mass of the galaxies from their speeds, and it did not match the mass implied by the number of stars he observed.

He suspected that the cluster could contain an invisible, missing matter that kept the galaxies from flying apart. But for several decades he lacked enough observational evidence to support his theory.

A woman adjusting a large piece of equipment.
Vera Rubin operates the Carnegie spectrograph at Kitt Peak National Observatory in Tucson.
Carnegie Institution for Science, CC BY

Enter Vera Rubin

In 1965, Vera Rubin became the first woman hired onto the scientific staff at the Carnegie Institution’s Department of Terrestrial Magnetism in Washington, D.C.

She worked with Kent Ford, who had built an extremely sensitive spectrograph and was looking to apply it to a scientific research project. Rubin and Ford used the spectrograph to measure how fast stars orbit around the center of their galaxies.

In the solar system, where most of the mass is within the Sun at the center, the closest planet, Mercury, moves faster than the farthest planet, Neptune.

“We had expected that as stars got farther and farther from the center of their galaxy, they would orbit slower and slower,” Rubin said in 1992.

What they found in galaxies surprised them. Stars far from the galaxy’s center were moving just as fast as stars closer in.

“And that really leads to only two possibilities,” Rubin explained. “Either Newton’s laws don’t hold, and physicists and astronomers are woefully afraid of that … (or) stars are responding to the gravitational field of matter which we don’t see.”
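The expectation Rubin describes, orbital speed falling with distance when nearly all the mass sits at the center, can be sketched numerically. The constants below are standard physical values, not figures from the article:

```python
import math

G = 6.674e-11     # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30  # solar mass, kg
AU = 1.496e11     # astronomical unit, m

def keplerian_speed(mass_kg, radius_m):
    """Circular orbital speed when the mass is concentrated at the center: v = sqrt(GM/r)."""
    return math.sqrt(G * mass_kg / radius_m)

# In the solar system, speed drops steeply with distance, exactly the falloff
# Rubin's team expected to see for stars orbiting a galaxy's center.
v_mercury = keplerian_speed(M_SUN, 0.39 * AU)   # ~47.7 km/s
v_neptune = keplerian_speed(M_SUN, 30.1 * AU)   # ~5.4 km/s
print(f"Mercury: {v_mercury/1e3:.1f} km/s, Neptune: {v_neptune/1e3:.1f} km/s")
```

A flat galactic rotation curve means the outer stars keep moving at roughly the inner stars' speed instead of slowing like Neptune, which is what pointed Rubin toward unseen mass.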

Data piled up as Rubin created plot after plot. Her colleagues didn’t doubt her observations, but the interpretation remained a debate. Many people were reluctant to accept that dark matter was necessary to account for the findings in Rubin’s data.

Rubin continued studying galaxies, measuring how fast stars moved within them. She wasn’t interested in investigating dark matter itself, but she carried on with documenting its effects on the motion of galaxies.

A quarter with a woman looking upwards engraved onto it.
A U.S. quarter honors Vera Rubin’s contributions to our understanding of dark matter.
United States Mint, CC BY

Vera Rubin’s legacy

Today, more people are aware of Rubin’s observations and contributions to our understanding of dark matter. In 2019, a congressional bill was introduced to rename the former Large Synoptic Survey Telescope to the Vera C. Rubin Observatory. In June 2025, the U.S. Mint released a quarter featuring Vera Rubin.

Rubin continued to accumulate data about the motions of galaxies throughout her career. Others picked up where she left off and have helped advance dark matter research over the past 50 years.

In the 1970s, physicist James Peebles and astronomers Jeremiah Ostriker and Amos Yahil created computer simulations of individual galaxies. They concluded, similarly to Zwicky, that there was not enough visible matter in galaxies to keep them from flying apart.

They suggested that whatever dark matter is − be it cold stars, black holes or some unknown particle − there could be as much as 10 times more dark matter than ordinary matter in galaxies.

Throughout its 10-year run, the Rubin Observatory should give even more researchers the opportunity to add to our understanding of dark matter.

The Conversation

Samantha Thompson does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

ref. The Vera C. Rubin Observatory will help astronomers investigate dark matter, continuing the legacy of its pioneering namesake – https://theconversation.com/the-vera-c-rubin-observatory-will-help-astronomers-investigate-dark-matter-continuing-the-legacy-of-its-pioneering-namesake-259233

Neuropathic pain has no immediate cause – research on a brain receptor may help stop this hard-to-treat condition

Source: The Conversation – USA (2) – By Pooja Shree Chettiar, Ph.D. Candidate in Medical Sciences, Texas A&M University

Neuropathic pain is experienced both physically and emotionally. Salim Hanzaz/iStock via Getty Images

Pain is easy to understand until it isn’t. A stubbed toe or sprained ankle hurts, but it makes sense because the cause is clear and the pain fades as you heal.

But what if the pain didn’t go away? What if even a breeze felt like fire, or your leg burned for no reason at all? When pain lingers without a clear cause, that’s neuropathic pain.

We are neuroscientists who study how pain circuits in the brain and spinal cord change over time. Our work focuses on the molecules that quietly reshape how pain is felt and remembered.

We didn’t fully grasp how different neuropathic pain was from injury-related pain until we began working in a lab studying it. Patients spoke of a phantom pain that haunted them daily – unseen, unexplained and life-altering.

These conversations shifted our focus from symptoms to mechanisms. What causes this ghost pain to persist, and how can we intervene at the molecular level to change it?

More than just physical pain

Neuropathic pain stems from damage to or dysfunction in the nervous system itself. The system that was meant to detect pain becomes the source of it, like a fire alarm going off without a fire. Even a soft touch or breeze can feel unbearable.

Neuropathic pain doesn’t just affect the body – it also alters the brain. Chronic pain of this nature often leads to depression, anxiety, social isolation and a deep sense of helplessness. It can make even the most routine tasks feel unbearable.

About 10% of the U.S. population – tens of millions of people – experience neuropathic pain, and cases are rising as the population ages. Complications from diabetes, cancer treatments or spinal cord injuries can lead to this condition. Despite its prevalence, doctors often overlook neuropathic pain because its underlying biology is poorly understood.

Person lying on side in bed, eyes closed, possibly grimacing
Neuropathic pain can be debilitating.
Kate Wieser/Moment via Getty Images

There’s also an economic cost to neuropathic pain. This condition contributes to billions of dollars in health care spending, missed workdays and lost productivity. In the search for relief, many turn to opioids, a path that, as seen from the opioid epidemic, can carry its own devastating consequences through addiction.

GluD1: A quiet but crucial player

Finding treatments for neuropathic pain requires answering several questions. Why does the nervous system misfire in this way? What exactly causes it to rewire in ways that increase pain sensitivity or create phantom sensations? And most urgently: Is there a way to reset the system?

This is where our lab’s work and the story of a receptor called GluD1 comes in. Short for glutamate delta-1 receptor, this protein doesn’t usually make headlines. Scientists have long considered GluD1 a biochemical curiosity, part of the glutamate receptor family, but not known to function like its relatives that typically transmit electrical signals in the brain.

Instead, GluD1 plays a different role. It helps organize synapses, the junctions where neurons connect. Think of it as a construction foreman: It doesn’t send messages itself, but directs where connections form and how strong they become.

This organizing role is critical in shaping the way neural circuits develop and adapt, especially in regions involved in pain and emotion. Our lab’s research suggests that GluD1 acts as a molecular architect of pain circuits, particularly in conditions like neuropathic pain where those circuits misfire or rewire abnormally. In parts of the nervous system crucial for pain processing like the spinal cord and amygdala, GluD1 may shape how people experience pain physically and emotionally.

Fixing the misfire

Across our work, we found that disruptions to GluD1 activity are linked to persistent pain, and that restoring GluD1 activity can reduce it. The question is, how exactly does GluD1 reshape the pain experience?

In our first study, we discovered that GluD1 doesn’t operate solo. It teams up with a protein called cerebellin-1 to form a structure that maintains constant communication between brain cells. This structure, called a trans-synaptic bridge, can be compared to a strong handshake between two neurons. It makes sure that pain signals are appropriately processed and filtered.

But in chronic pain, the bridge between these proteins becomes unstable and starts to fall apart. The result is chaotic. Like a group chat where everyone is talking at once and nobody can be heard clearly, neurons start to misfire and overreact. This synaptic noise turns up the brain’s pain sensitivity, both physically and emotionally. It suggests that GluD1 isn’t just managing pain signals, but also may be shaping how those signals feel.

What if we could restore that broken connection?

Resembling paint splatter, a round glob of green, yellow and red is superimposed on each other and surrounded by flecks of these same colors
This image highlights the presence of GluD1, in green and yellow, in a neuron of the central amygdala, in red.
Pooja Shree Chettiar and Siddhesh Sabnis/Dravid Lab at Texas A&M University, CC BY-SA

In our second study, we injected mice with cerebellin-1 and saw that it reactivated GluD1 activity, easing their chronic pain without producing any side effects. It helped the pain processing system work again without the sedative effects or disruptions to other nerve signals that are common with opioids. Rather than just numbing the body, reactivating GluD1 activity recalibrated how the brain processes pain.

Of course, this research is still in the early stages, far from clinical trials. But the implications are exciting: GluD1 may offer a way to repair the pain processing network itself, with fewer side effects and less risk of addiction than current treatments.

For millions living with chronic pain, this small, peculiar receptor may open the door to a new kind of relief: one that heals the system, not just masks its symptoms.

The Conversation

The authors do not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and have disclosed no relevant affiliations beyond their academic appointment.

ref. Neuropathic pain has no immediate cause – research on a brain receptor may help stop this hard-to-treat condition – https://theconversation.com/neuropathic-pain-has-no-immediate-cause-research-on-a-brain-receptor-may-help-stop-this-hard-to-treat-condition-256982

How artificial intelligence controls your health insurance coverage

Source: The Conversation – USA (2) – By Jennifer D. Oliva, Professor of Law, Indiana University

Evidence suggests that insurance companies use AI to delay or limit health care that patients need. FatCameraE+ via Getty Images

Over the past decade, health insurance companies have increasingly embraced the use of artificial intelligence algorithms. Unlike doctors and hospitals, which use AI to help diagnose and treat patients, health insurers use these algorithms to decide whether to pay for health care treatments and services that are recommended by a given patient’s physicians.

One of the most common examples is prior authorization, which is when your doctor needs to receive payment approval from your insurance company before providing you care. Many insurers use an algorithm to decide whether the requested care is “medically necessary” and should be covered.

These AI systems also help insurers decide how much care a patient is entitled to — for example, how many days of hospital care a patient can receive after surgery.

If an insurer declines to pay for a treatment your doctor recommends, you usually have three options. You can try to appeal the decision, but that process can take a lot of time, money and expert help. Only 1 in 500 claim denials are appealed. You can agree to a different treatment that your insurer will cover. Or you can pay for the recommended treatment yourself, which is often not realistic because of high health care costs.

As a legal scholar who studies health law and policy, I’m concerned about how insurance algorithms affect people’s health. Like with AI algorithms used by doctors and hospitals, these tools can potentially improve care and reduce costs. Insurers say that AI helps them make quick, safe decisions about what care is necessary and avoids wasteful or harmful treatments.

But there’s strong evidence that the opposite can be true. These systems are sometimes used to delay or deny care that should be covered, all in the name of saving money.

A pattern of withholding care

Presumably, companies feed a patient’s health care records and other relevant information into health care coverage algorithms and compare that information with current medical standards of care to decide whether to cover the patient’s claim. However, insurers have refused to disclose how these algorithms work in making such decisions, so it is impossible to say exactly how they operate in practice.

Using AI to review coverage saves insurers time and resources, especially because it means fewer medical professionals are needed to review each case. But the financial benefit to insurers doesn’t stop there. If an AI system quickly denies a valid claim, and the patient appeals, that appeal process can take years. If the patient is seriously ill and expected to die soon, the insurance company might save money simply by dragging out the process in the hope that the patient dies before the case is resolved.

Insurers say that if they decline to cover a medical intervention, patients can pay for it out of pocket.

This creates the disturbing possibility that insurers might use algorithms to withhold care for expensive, long-term or terminal health problems, such as chronic conditions or debilitating disabilities. One reporter put it bluntly: “Many older adults who spent their lives paying into Medicare now face amputation or cancer and are forced to either pay for care themselves or go without.”

Research supports this concern – patients with chronic illnesses are more likely to be denied coverage and suffer as a result. In addition, Black and Hispanic people and those of other nonwhite ethnicities, as well as people who identify as lesbian, gay, bisexual or transgender, are more likely to experience claims denials. Some evidence also suggests that prior authorization may increase rather than decrease health care system costs.

Insurers argue that patients can always pay for any treatment themselves, so they’re not really being denied care. But this argument ignores reality. These decisions have serious health consequences, especially when people can’t afford the care they need.

Moving toward regulation

Unlike medical algorithms, insurance AI tools are largely unregulated. They don’t have to go through Food and Drug Administration review, and insurance companies often say their algorithms are trade secrets.

That means there’s no public information about how these tools make decisions, and there’s no outside testing to see whether they’re safe, fair or effective. No peer-reviewed studies exist to show how well they actually work in the real world.

There does seem to be some momentum for change. The Centers for Medicare & Medicaid Services, or CMS, which is the federal agency in charge of Medicare and Medicaid, recently announced that insurers in Medicare Advantage plans must base decisions on the needs of individual patients – not just on generic criteria. But these rules still let insurers create their own decision-making standards, and they still don’t require any outside testing to prove their systems work before using them. Plus, federal rules can only regulate federal public health programs like Medicare. They do not apply to private insurers who do not provide federal health program coverage.

Some states, including Colorado, Georgia, Florida, Maine and Texas, have proposed laws to rein in insurance AI. A few have passed new laws, including a 2024 California statute that requires a licensed physician to supervise the use of insurance coverage algorithms.

But most state laws suffer from the same weaknesses as the new CMS rule. They leave too much control in the hands of insurers to decide how to define “medical necessity” and in what contexts to use algorithms for coverage decisions. They also don’t require those algorithms to be reviewed by neutral experts before use. And even strong state laws wouldn’t be enough, because states generally can’t regulate Medicare or insurers that operate outside their borders.

A role for the FDA

In the view of many health law experts, the gap between insurers’ actions and patient needs has become so wide that regulating health care coverage algorithms is now imperative. As I argue in an essay to be published in the Indiana Law Journal, the FDA is well positioned to do so.

The FDA is staffed with medical experts who have the capability to evaluate insurance algorithms before they are used to make coverage decisions. The agency already reviews many medical AI tools for safety and effectiveness. FDA oversight would also provide a uniform, national regulatory scheme instead of a patchwork of rules across the country.

Some people argue that the FDA’s power here is limited. For the purposes of FDA regulation, a medical device is defined as an instrument “intended for use in the diagnosis of disease or other conditions, or in the cure, mitigation, treatment, or prevention of disease.” Because health insurance algorithms are not used to diagnose, treat or prevent disease, Congress may need to amend the definition of a medical device before the FDA can regulate those algorithms.

If the FDA’s current authority isn’t enough to cover insurance algorithms, Congress could change the law to give it that power. Meanwhile, CMS and state governments could require independent testing of these algorithms for safety, accuracy and fairness. That might also push insurers to support a single national standard – like FDA regulation – instead of facing a patchwork of rules across the country.

The move toward regulating how health insurers use AI in determining coverage has clearly begun, but it is still awaiting a robust push. Patients’ lives are literally on the line.

The Conversation

Jennifer D. Oliva currently receives funding from NIDA to research the impact of pharmaceutical industry messaging on the opioid crisis among U.S. Military Veterans. She is affiliated with the UCSF/University of California College of the Law, San Francisco Consortium on Law, Science & Health Policy and Georgetown University Law Center O’Neill Institute for National & Global Health Law.

ref. How artificial intelligence controls your health insurance coverage – https://theconversation.com/how-artificial-intelligence-controls-your-health-insurance-coverage-253602

Who’s the most American? Psychological studies show that many people are biased and think it’s a white English speaker

Source: The Conversation – USA (2) – By Katherine Kinzler, Professor of Psychology, University of Chicago

Some people have a narrow view of who is American. The Good Brigade/DigitalVision via Getty Images

In the U.S. and elsewhere, nationality tends to be defined by a set of legal parameters. This may involve birthplace, parental citizenship or procedures for naturalization.

Yet in many Americans’ minds these objective notions of citizenship are a little fuzzy, as social and developmental psychologists like me have documented. Psychologically, some people may just seem a little more American than others, based on factors such as race, ethnicity or language.

Reinforced by identity politics, this results in different ideas about who is welcome, who is tolerated and who is made to not feel welcome at all.

How race affects who belongs

Many people who explicitly endorse egalitarian ideals, such as the notion that all Americans are deserving of the rights of citizenship regardless of race, still implicitly harbor prejudices over who’s “really” American.

In a classic 2005 study, American adults across racial groups were fastest to associate the concept of “American” with white people. White, Black and Asian American adults were asked whether they endorse equality for all citizens. They were then presented with an implicit association test in which participants matched different faces with the categories “American” or “foreign.” They were told that every face was a U.S. citizen.

White and Asian participants responded most quickly in matching the white faces with “American,” even when they initially expressed egalitarian values. Black Americans implicitly saw Black and white faces as equally American – though they too implicitly viewed Asian faces as being less American.

Similarly, in a 2010 study, several groups of American adults implicitly considered British actress Kate Winslet to be more American than U.S.-born Lucy Liu – even though they were aware of their actual nationalities.

Importantly, the development of prejudice can even include feelings that disadvantage one’s own group. This can be seen when Asian Americans who took part in the studies found white faces to be more American than Asian faces. A related 2010 study found that Hispanic participants were also more likely to associate whiteness with “Americanness.”

An image of white British actress Kate Winslet sits next to one of Asian American actress Lucy Liu.
Who’s the American?
AP Photo

Language and nationality

These biased views of nationality begin at a young age – and spoken language can often be a primary identifier of who is in which group, as I show in my book “How You Say It.”

Although the U.S. traditionally has not had a national language, many Americans feel that English is critical to being a “true American.” And the president recently issued an executive order designating English as the official language.

In a 2017 study conducted by my research team and led by psychologist Jasmine DeJesus, we gave children a simple task: After viewing a series of faces that varied in skin color and listening to those people speak, children were asked to guess their nationality. The faces were either white- or Asian-looking and spoke either English or Korean. “Is this person American or Korean?” we asked.

We recruited three groups of children for the study: white American children who spoke only English, children in South Korea who spoke only Korean, and Korean American children who spoke both languages. The ages of the children were either 5-6 or 9-10.

The vast majority of the younger monolingual children identified nationality with language, describing English speakers as American and Korean speakers as Korean – even though both groups were divided equally between people who looked white or Asian.

As for the younger bilingual children, they had parents whose first language was Korean, not English, and who lived in the United States. Yet, just like the monolingual children, they thought that the English speakers, and not the Korean speakers, were the Americans.

As they age, however, children increasingly view racial characteristics as an integral part of nationality. By the age of 9, we found that children considered the white English speakers to be the most American, compared with Korean speakers who looked white or English speakers who looked Asian.

Interestingly, this effect was more pronounced in the older children we recruited in South Korea.

Deep roots

So it seems that for children and adults alike, assessments of what it means to be American hinge on certain traits that have nothing to do with the actual legal requirements for citizenship. Neither whiteness nor fluency in English is a requirement to become American.

And this bias has consequences. Research has found that the degree to which people link whiteness with Americanness is related to their discriminatory behaviors in hiring or questioning others’ loyalty.

That we find these biases in children does not mean they are in any way absolute. We know that children begin to pick up on these types of biased cultural cues and values at a young age. It does mean, however, that these biases have deep roots in our psychology.

Understanding that biases exist may make it easier to correct them. So Americans celebrating the Fourth of July perhaps should ponder what it means to be an American – and whether social biases distort their beliefs about who belongs.

This is an updated version of an article originally published on July 2, 2020.

The Conversation

Katherine Kinzler receives funding from the National Science Foundation.

ref. Who’s the most American? Psychological studies show that many people are biased and think it’s a white English speaker – https://theconversation.com/whos-the-most-american-psychological-studies-show-that-many-people-are-biased-and-think-its-a-white-english-speaker-256418

Supreme Court upholds childproofing porn sites

Source: The Conversation – USA (2) – By Meg Leta Jones, Associate Professor of Technology Law & Policy, Georgetown University

The Supreme Court greenlights states’ efforts to block kids from online porn by requiring age verification. AP Photo/J. Scott Applewhite

The U.S. Supreme Court handed down a decision on June 27, 2025, that will reshape how states protect children online. In a case assessing a Texas law requiring age verification to access porn sites, the court created a new legal path that makes it easier for states to craft laws regulating what kids see and do on the internet.

In a 6-3 decision, the court ruled in Free Speech Coalition Inc. v. Paxton that Texas’ law obligating porn sites to block access to underage users is constitutional. The law requires pornographic websites to verify users’ ages – for example by making users scan and upload their driver’s license – before granting access to content that is deemed obscene for minors but not adults.

The majority on the court rejected both the porn industry’s argument for strict scrutiny – the toughest legal test that requires the government to prove a law is absolutely necessary – and Texas’ argument for mere rational basis review, which requires only a rational connection between the law’s legitimate aims and its actions. Instead, Justice Clarence Thomas’ opinion established intermediate scrutiny, a middle ground that requires laws to serve important government interests without being overly burdensome, as the appropriate standard.

The court’s reasoning hinged on characterizing the law as only “incidentally” burdening adults’ First Amendment rights. Since minors have no constitutional right to access pornography, the state can require age verification to prevent that unprotected activity. Any burden on adults is, according to the ruling, merely a side effect of this legitimate regulation.

The court also pointed to dramatic technological changes since earlier similar laws were struck down in the 1990s and early 2000s. Back then, only 2 in 5 households had internet access, mostly through slow dial-up connections on desktop computers. Today, 95% of teens carry smartphones with constant internet access to massive libraries of content. Porn site Pornhub alone published over 150 years of new material in 2019. The court argued that earlier decisions “could not have conceived of these developments,” making age verification more necessary than judges could have imagined decades ago.

More importantly for future legislation, the court embraced an “ordinary and appropriate means” doctrine: When states have authority to govern an area, they may use traditional methods to exercise that power. Since age verification is common for alcohol and tobacco, tattoos and piercings, firearms, driver’s licenses and voting, the court held that it’s similarly appropriate for regulating minors’ access to sexual content.

The key takeaway: When states are trying to keep kids away from certain types of content that kids have no legal right to see anyway, requiring age verification is an ordinary and appropriate way to enforce that boundary.

Implications for other laws

This decision could resolve a fundamental enforcement problem in child privacy laws. Current laws like the Children’s Online Privacy Protection Act protect children only when companies have actual knowledge a user is under 13. But platforms routinely avoid this requirement by not asking users’ ages or letting them enter whatever age they want. Without age verification, there’s no actual knowledge and thus no privacy protections.

The Supreme Court’s reasoning changes this dynamic. Since the court emphasized that children lack the same constitutional rights as adults regarding certain protections, states may now be able to require age verification before data collection. California’s Age-Appropriate Design Code and similar state privacy laws would gain substantially more regulatory power under this framework.

Meanwhile, social media platforms could face more restrictions. Several states have tried to limit how social media platforms interact with minors. Florida recently banned kids under 14 from having social media accounts entirely, while other states have targeted specific features such as endless scrolling or push notifications designed to keep kids hooked.

The Supreme Court’s reasoning could protect laws that require age verification before kids can use certain platform features, such as direct messaging with strangers or livestreaming. However, laws that try to block kids from seeing general social media content would still face tough legal challenges, since that content is typically protected speech for everyone.

The decision also supports state laws regulating how minors interact with app stores and gaming platforms. Minors generally can’t enter binding contracts without parental consent in the physical world, so states could require the same online. Proposed legislation such as the App Store Accountability Act would require parental approval before kids can download apps or agree to terms of service. States have also considered restrictions on “loot boxes” – digital gambling-like features – and surprise in-app purchases that can result in massive charges to parents.

Since states already require an ID to buy lottery tickets or enter casinos, requiring age verification before kids can spend money on digital gambling mechanics follows the court’s logic.

What comes next?

But this decision doesn’t give states free rein to regulate the internet. The court’s reasoning applies to content that children have no legal right to access in the first place, specifically sexually explicit material. For most online content such as news, educational materials, general entertainment and political discussions, both adults and kids have constitutional rights to access.

Laws trying to age-gate this protected content would likely still face the strict scrutiny standard and be struck down, but which online content and experiences underage users are constitutionally entitled to is not settled. Many advocates worry that while the “obscene for minors” standard in this case appears legally narrow, states will try to expand it or use similar reasoning to classify LGBTQ+-related educational content, health resources or community support materials as inherently sexual and inappropriate for minors.

The court also emphasized that even under this more permissive standard, laws still have to be reasonable. Age verification requirements that are overly burdensome, sweep too broadly or create serious privacy problems could still be ruled unconstitutional. The court’s decision in this case gives state lawmakers much more room to effectively regulate how online platforms interact with children, but I believe successful laws will need to be carefully written.

For parents worried about their kids’ online safety, this could mean more tools and protections. For tech companies, it likely means more compliance requirements and age verification systems. And for the broader internet, it represents a significant shift toward treating online spaces more like physical ones, where people have long accepted that some doors require showing ID to enter.

The Conversation

Meg Leta Jones does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

ref. Supreme Court upholds childproofing porn sites – https://theconversation.com/supreme-court-upholds-childproofing-porn-sites-260052