The bladder is easy to overlook – until it starts causing trouble. This small, balloon-like organ in the lower urinary tract quietly stores and releases urine, helping the body eliminate waste and maintain fluid balance.
But just like your heart or lungs, your bladder needs care. Neglect it and you risk discomfort, urinary tract infections and, in some cases, serious conditions such as incontinence (involuntary leakage of urine) or even cancer.
The good news: many bladder problems are preventable and linked to everyday habits. Here are six common habits that can sabotage bladder health.
1. Holding in urine too long
Delaying a bathroom visit allows urine to build up and stretches the bladder muscles. Over time this can weaken their ability to contract and empty the bladder completely, leading to urinary retention. Research shows that holding urine gives bacteria more time to multiply, raising the risk of urinary tract infections (UTIs).
Experts recommend emptying your bladder every three to four hours. In severe cases, chronic retention can even damage the kidneys. When you do go, relax – women in particular should sit fully on the toilet seat rather than hovering, so the pelvic muscles can release. Take your time and consider double voiding: after you finish, wait 10–20 seconds and try again to ensure the bladder is fully emptied.
2. Not drinking enough water
Dehydration makes urine more concentrated, which irritates the bladder lining and increases infection risk. Aim to drink six to eight glasses of water (about 1.5 to 2 litres) a day – more if you’re very active or in hot weather. If you have kidney or liver disease, check with your doctor first.
Too little fluid can also lead to constipation. Hard stools press on the bladder and pelvic floor, making bladder control harder.
3. Too much caffeine and alcohol
Caffeine and alcohol can irritate the bladder and act as mild diuretics, increasing urine production. A study found that people consuming over 450mg of caffeine per day – roughly four cups of coffee – were more likely to experience incontinence than those drinking less than 150mg.
Another study showed men who drank six to ten alcoholic drinks per week were more likely to develop lower urinary tract symptoms than non-drinkers. Heavy alcohol use may also increase bladder cancer risk, although the evidence is mixed. Cutting back can ease bladder symptoms and reduce long-term risk.
4. Smoking
Smoking is a major cause of bladder cancer, responsible for about half of all cases. Smokers are up to four times more likely to develop the disease than non-smokers, especially if they started young or smoked heavily for years – and cigar and pipe smoking carry the risk too.
Tobacco chemicals enter the bloodstream, are filtered by the kidneys and stored in urine. When urine sits in the bladder, these carcinogens, including arylamines, can damage the bladder lining.
5. Poor bathroom hygiene
Improper hygiene can introduce bacteria into the urinary tract. Wiping from back to front, using harsh soaps or neglecting hand-washing can all upset the body’s natural microbiome and increase UTI risk.
6. Poor diet and inactivity
What you eat and how active you are affect your bladder more than you might expect. Excess weight puts pressure on the bladder and increases the likelihood of leakage. Regular exercise helps maintain a healthy weight and prevents constipation, which otherwise presses on the bladder.
Certain foods and drinks – including fizzy drinks, spicy meals, citrus fruits and artificial sweeteners – can irritate the bladder and worsen symptoms for those already prone to problems. Aim for a fibre-rich diet with plenty of whole grains, fruit and vegetables to protect both digestive and bladder health.
Bladder health is shaped by everyday choices. Staying well-hydrated, avoiding irritants, practising good hygiene and listening to your body can all help prevent long-term problems. If you notice persistent changes such as frequent urination, difficulty emptying the bladder, pain or burning when you pee, cloudy or smelly urine, or any sign of blood, see a healthcare professional. Your bladder will thank you.
Dipa Kamdar does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.
Source: The Conversation – UK – By Rachel Moss, Professor in the History of Art and Architecture, Trinity College Dublin
Writing in the early 20th century, the celebrated author James Joyce noted that the Book of Kells – an illuminated manuscript depicting the four gospels of the New Testament in Latin – was “the most purely Irish thing we have”.
By this time, the unique and intricate designs of the approximately 1,200-year-old manuscript were instantly recognisable, having been replicated on everything from embroidered clothing to tea sets coveted by nationalists and the Irish diaspora alike. These designs were deemed symbolic of “pure” Irish visual identity, created before the coming of the Vikings and the Anglo-Normans to Irish shores.
For well over a century, debate has raged as to whether the manuscript was made at Iona on the west coast of Scotland, the northern English monastery of Lindisfarne or indeed a different Columban monastery in Ireland. Now a new contribution to the debate, The Book of Kells: Unlocking the Enigma, a forthcoming book by archaeologist and art historian Victoria Whitworth, adds further food for thought on the topic.
The manuscript known as the Book of Kells was first referred to as such by the great biblical scholar Bishop James Ussher (1581-1656) to distinguish between two “gospel books of [St] Columcill”, one kept at Kells, county Meath, the other at Durrow in county Offaly.
Land charters transcribed on to the pages of the Kells manuscript prove that it was there by at least the 11th century, and it is therefore likely to be the same “great gospel book of Columcille” recorded as having been stolen and subsequently recovered from the same monastery in 1007.
Although nobody knows exactly when it was made, art historians and paleographers (experts in handwriting and manuscripts) agree that the Book of Kells most likely dates to the late 8th century. And therein lies a problem. The monastery at Kells was not founded until 807, when monks fleeing Viking incursions on the Scottish Hebridean island of Iona were gifted a safer inland site in Ireland to establish a new, ultimately thriving, monastery. So, while we know the manuscript spent at least 650 years at Kells, we do not know where it started its life.
Uncovering new evidence
Between 1994 and 2007, an archaeological excavation at the Pictish monastic site of Portmahomack in Easter Ross, north-east Scotland, revealed the first known evidence for the large-scale manufacture of parchment in northern Europe.
This was particularly surprising, as no surviving manuscript had previously been identified as coming from this area. In addition, Whitworth has identified Pictish stones carved with designs and writing like those found in the Book of Kells. So, does this mean that the most purely Irish thing we have is actually Pictish?
The manuscript was made at a time when Irish churchmen and scholars not only travelled extensively but also welcomed people from across Europe to study in Ireland’s schools. Books also circulated widely at this time, whether as working texts, diplomatic gifts or exemplars distributed to scriptoria (monastery rooms where manuscripts were copied) across Europe.
This cultural mix is evident in the significant range of artistic sources drawn on by the Book of Kells artists. Clearly they had access to designs from contemporary continental gospels, Irish fine metalwork, Byzantine icons and imagery found on Pictish stones. None of the scribes or artists recorded their names, and indeed we don’t even know how many there were, such is the relative consistency of the script.
Non-invasive pigment analysis of the manuscript some years ago revealed the use of pigments typical to manuscript production in Scotland and Ireland during the period, some cleverly blended in such a way as to mimic the precious gold and lapis lazuli that lay beyond their reach.
An estimated 159 calf skins were used to make its surviving pages, some of which were of very poor quality. What we don’t know is whether these animals were reared and processed close to the scriptorium where the manuscript was made, whether they might have been collected from across the territory of a wealthy donor, or whether they were brought in from a single specialist “processor”, as at, for example, Portmahomack. Ultimately, advances in non-invasive DNA testing may provide scientific answers to these questions and reveal much regarding the economy of the period.
While at present it is impossible to prove beyond doubt, Whitworth’s book highlights an important new potential provenance for the Book of Kells. However, it also serves as a timely reminder that our preoccupation with the “nationality” of the manuscript is based on a 19th-century construct, which can distract from other considerations.
Whether based in Pictland, Iona or Ireland, its makers may have come together from a variety of locations, and they certainly had an international outlook. As such, this new research is equally important in considering how these people went about creating an object without borders. In this they were successful: in 1007 it was deemed the “chief relic of the Western World”, and two centuries later it was described as “the Work of Angels”.
This article features references to books that have been included for editorial reasons, and may contain links to bookshop.org; if you click on one of the links and go on to buy something from this website The Conversation UK may earn a commission.
Looking for something good? Cut through the noise with a carefully curated selection of the latest releases, live events and exhibitions, straight to your inbox every fortnight, on Fridays. Sign up here.
Rachel Moss works for Trinity College Dublin. In the past she has received funding from the Irish Research Council and Bank of America Merrill Lynch for research work relevant to this article.
As a professor of pre-Raphaelite studies, I was excited to see that the track list for Taylor Swift’s 12th album, The Life of a Showgirl, includes a song called The Fate of Ophelia. Ahead of the album’s release, fans and art historians speculated that the inspiration could come from John Everett Millais’s painting Ophelia (1851-52), one of the most visited paintings at Tate Britain.
The painting shows Ophelia, the heroine of Shakespeare’s play Hamlet (1623), floating in the river after her doomed relationship had driven her to madness and suicide.
The cover art for The Life of a Showgirl confirms this. It shows Swift wearing a silvery outfit, partially submerged in water with her hands floating palm-up to the surface. So far, so Everett Millais. Though the styling is very different from Millais’s work, the pose, with focus on her hands and face, seemed to give a nod to his Ophelia.
Ophelia by John Everett Millais (1851-52). Tate Britain
The model who posed for Millais’s painting, Elizabeth Siddal (who is buried in Highgate, near where Swift once lived) lay in a bathtub while he painted her. When the candles heating the water went out, she stayed there for hours, uncomplaining, until she became ill.
As a muse and model, Siddal exemplifies the woman silenced by and sacrificed to male artistic ambition. Swift’s cover transforms the corpse-like Ophelia into a striking image, with her eyes open and staring, as though a dead woman has come back to life to accuse us. The female figure is no longer a muse (or a showgirl).
Siddal was interested in similar themes. In her poem My Lady’s Soul, she wrote about a woman transformed through death into art:
Low sit I down at my lady’s feet
Gazing through her wild eyes,
Smiling to think how my love will fleet
When their starlike beauty dies
The woman in the poem may be silenced, but her eyes accuse us of objectification.
The song’s conceit is that a happy relationship has saved the singer from Ophelia’s fate of madness and drowning. Swift has told interviewers that she prefers a happy ending, having rewritten Romeo and Juliet in Love Story (2008), and The Fate of Ophelia is quite detailed in its references to Hamlet: “The eldest daughter of a nobleman / Ophelia lived in fantasy / But love was a cold bed full of scorpions / The venom stole her sanity.”
The Fate of Ophelia is the first track on Swift’s new album, The Life of a Showgirl.
The chorus goes: “All that time / I sat alone in my tower / You were just honing your powers / Now I can see it all. / Late one night / You dug me out of my grave and / Saved my heart from the fate of Ophelia.”
Alone in a tower, waiting for a prince to come? That sounds like some other Shakespearean or pre-Raphaelite heroines, such as Mariana from Shakespeare’s play Measure for Measure (1604), who was reinterpreted by the poet Alfred Tennyson in 1832. Millais painted Mariana in 1851. The speaker in Swift’s song, however, has been saved from death: “You dug me out of my grave.”
There are all kinds of interesting resurrection metaphors in the song: was Swift already dead, then? Is this about Ophelia, buried with partial rites due to the suspicion that she killed herself? Or is this about Siddal, the muse and model whose body was exhumed by her husband?
When Siddal died in 1862 of an overdose of laudanum – an opiate to which many Victorians were addicted, since it was prescribed for many illnesses – she was buried at Highgate cemetery. Her grief-stricken (or guilt-ridden) husband Dante Gabriel Rossetti threw his manuscript poems into her coffin. Seven years later, the coffin was exhumed in order to restore the manuscripts to Rossetti for publication.
The myths exploded from that point. Charles Augustus Howell, the unscrupulous friend of Rossetti who oversaw the exhumation, claimed that Siddal’s body was perfectly preserved and that her hair had continued growing in her coffin – and, as Rossetti wrote to Swinburne in a letter dated October 16 1869, he believed that “could she have opened the grave, no other hand would have been needed”.
In her reworking of the Ophelia and Siddal story, Swift undermines the stereotype of the mute, decorative showgirl by overlaying it with her own more triumphant ending.
In isolation, Swift’s conflation of Siddal, Ophelia and her own persona isn’t necessarily that progressive: after all, the song features a woman waiting for someone to save her. However, taken in conjunction with the rest of the album, it’s clear that Swift’s approach is to explore the public face of women, from Elizabeth Taylor (reminiscent of Clara Bow from her last album) to Eldest Daughter, and culminating in the title track, which indicates the pain behind the facade of a public figure, “hidden by the lipstick and lace”.
Serena Trowbridge does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.
John le Carré was a master of the spy novel – not by glamorising espionage, but by stripping it of illusion. His stories abandoned the trope of the suave, heartless agent in favour of morally complex characters navigating the shadowy ethics of Cold War intelligence. Gritty, ambiguous and deeply human, his thrillers elevated the spy genre to literary art.
Much of that authenticity came from le Carré’s own experience in British counterintelligence with MI5. But as a new exhibition at Oxford’s Bodleian Libraries reveals, his success was just as rooted in painstaking research, interviews and relentless editing.
John le Carré: Tradecraft offers a rare look into the creative process behind nine of his novels. On display are early character sketches, field notes, photographs, handwritten drafts and personal correspondence – many shown publicly for the first time.
Though some critics accused le Carré of becoming too political in his later years, the exhibition suggests that conscience was always central to his work. He consistently interrogated the global systems that enable corruption, reward self-interest and erode the freedoms promised by democratic societies.
As co-curator Jessica Douthwaite writes, this exhibition exposes “a worldview borne out in the idiosyncrasies of his factual research, acute observations, obsession with accuracy, compulsion to travel, and interest in the humans behind the news events”.
At London’s National Theatre, newly appointed director Indhu Rubasingham launches her tenure with a daring production: Nima Taleghani’s radical reimagining of Euripides’s Bacchae.
The ancient tragedy centres on King Pentheus of Thebes, who is punished by his cousin Dionysus (god of wine, ritual madness and theatre) for denying his divine status. In vengeance, Dionysus drives the women of Thebes, including Pentheus’s own mother, into ecstatic madness. They flee to the mountains to join Dionysus’s followers, the Bacchae, and chaos unfolds as Pentheus attempts to bring them back.
As performing arts critic Will Shüler observes, Greek tragedies have always been a mirror of their times – and this adaptation is no exception. Taleghani weaves in themes of decolonisation, feminism, race, LGBTQ+ identity and war, giving this ancient myth a modern political pulse. While occasionally heavy-handed, it’s a bold, imaginative and thought-provoking debut for Rubasingham’s directorship.
Bacchae is at the National Theatre until November 1 2025.
Few historical figures have become as synonymous with Dionysian opulence and excess as France’s last queen, Marie Antoinette. Branded “Madame Déficit” and vilified for her extravagant lifestyle, she met a violent end during the French Revolution.
Yet modern research has revealed that much of this reputation was unfairly earned. Still, the myth endures.
A new exhibition at the V&A South Kensington, Marie Antoinette Style, aims to unpack that legacy – reframing the queen not as a frivolous spendthrift, but as a complex cultural icon with a keen eye for art and fashion.
“The exhibition confidently places Marie Antoinette not as an exuberant and frivolous monarch, as she is so often seen, but as an intentional, frequently playful, and decidedly modern patron of the arts,” writes reviewer and fashion historian Serena Dyer.
With most of her wardrobe destroyed by revolutionaries, the exhibition turns to creative means: showcasing dresses, furnishings, and glassware inspired by her influence. A few rare personal items do remain – a delicate shoe, fragments of a torn dress – offering glimpses of the refined taste behind the legend.
At Inverleith House in Edinburgh’s Royal Botanic Garden you can catch the first retrospective of the trailblazing artist Linder. Spanning 50 years, Danger Came Smiling connects with its location as it dives into her fascination with plants.
The photomontages on show remix images from popular culture, ranging from early pin-up photography to house plants, to invite onlookers to challenge societal norms around gender and sexuality. It is a vibrant and transgressive show that is at once joyful and punk, in true Linder style.
Danger Came Smiling is on at Inverleith House, the Royal Botanic Garden, Edinburgh, until October 19, and then transfers to the Glynn Vivian Art Gallery, Swansea, in November 2025.
With rain and gale-force winds sweeping across much of the UK this weekend, staying in might be your best bet. Why not spend it exploring some of the most iconic presidential appearances and opening monologues in American late-night TV history?
The recent suspension of Jimmy Kimmel Live! – following controversial remarks by Kimmel that reportedly upset the president – has sparked renewed debate around free speech, state interference and censorship in the US. It’s also drawn global attention to the uniquely American tradition of late-night television.
Mocking presidents has long been a hallmark of the genre. In this piece, media expert Faye Davies traces the evolution of the opening monologue as a platform for social commentary and political satire. Many unforgettable moments are available on YouTube – from Richard Nixon’s appearance on The Tonight Show Starring Johnny Carson to Bill Clinton’s saxophone solo on The Arsenio Hall Show, trying hard to sell his cool factor and win votes.
Source: The Conversation – UK – By Julian Hargreaves, Senior Lecturer, Department of Sociology and Criminology, City St George’s, University of London
A man believed to be Jihad Al-Shamie, a 35-year-old British citizen born in Syria, has been shot dead by police after launching an attack on a synagogue in Manchester on Yom Kippur, the holiest day in the Jewish calendar. Melvin Cravitz, 66, and Adrian Daulby, 55, died in the attack – one having been accidentally shot by police trying to stop the suspect.
According to BBC News, a member of the public called the police at 9:31am to report the incident. Greater Manchester Police deployed firearms officers to the scene at 9:34am. At 9:38am, officers declared “Operation Plato” – a code word used by UK emergency services for a marauding terrorist attacker. At 9:39am, armed counter-terrorism police officers shot and killed Al-Shamie, who died at the scene. Counter-terrorism police later confirmed the attack as a “terrorist incident”.
Within hours, it had become clear that many foresaw such an attack. The Financial Times reported comments from Marc Levy, chief executive of the Jewish Representative Council, a body representing Jewish communities in Greater Manchester. Levy described the events as “an inevitability”.
The Board of Deputies of British Jews, a national body representing Jewish communities across the UK, described the attack as “sadly something we feared was coming”.
The Jewish Chronicle, a Jewish interest newspaper, reported that staff at the London Centre for the Study of Contemporary Antisemitism were “shocked but not surprised”.
Recent research by the thinktank Antisemitism Policy Trust analysed demonstrations against the war in Gaza. It found public expressions of anti-Jewish hatred alongside more legitimate pro-Palestinian and anti-Israeli government sentiment, including Arabic chants referencing the massacre of Jews in 628AD.
The Community Security Trust (CST), an organisation serving and protecting Jewish communities, records and reports antisemitic incidents in the UK. In 2023, the CST recorded 4,296 incidents – the largest number in a single year. The CST pointed to previous, lower annual totals – 1,684 incidents in 2020, 2,261 in 2021 and 1,662 in 2022 – to show how antisemitism has been fuelled by responses to the October 7 Hamas attacks.
The CST works carefully to investigate and verify all reports of antisemitism. While their work is entirely robust, it cannot easily reveal whether the dramatic rise in incidents reflects growing antisemitic sentiment, or increases in the reporting of antisemitic incidents to the CST, or both.
According to Home Office figures, religious hate crime against Jewish people more than doubled between the years ending March 2023 and March 2024. In 2022-23, there were 1,543 incidents recorded by the police. In 2023-24, there were 3,282.
While the number of incidents is lower than those against Muslim people – 3,432 in 2022-23 and 3,866 in 2023-24 – Jewish people are more likely to suffer religious hate crime. There were 121 incidents for every 10,000 Jewish people in England and Wales, compared with 10 incidents for every 10,000 Muslim people.
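These per-capita rates can be reproduced from the incident counts above. The sketch below is illustrative only: the population denominators are assumptions drawn from the 2021 England and Wales census (roughly 271,000 Jewish people and 3.87 million Muslim people), figures not given in the text.

```python
# Per-capita hate crime rates, computed from the 2023-24 incident counts
# quoted above. The population denominators are assumptions taken from the
# 2021 England and Wales census, not figures stated in the article.

def rate_per_10k(incidents: int, population: int) -> float:
    """Incidents per 10,000 people in a given group."""
    return incidents / population * 10_000

jewish_rate = rate_per_10k(3_282, 271_000)    # incidents against Jewish people
muslim_rate = rate_per_10k(3_866, 3_870_000)  # incidents against Muslim people

print(round(jewish_rate), round(muslim_rate))  # → 121 10
```

The comparison shows why raw totals mislead here: the higher absolute count against Muslim people corresponds to a far lower per-capita rate, because the Muslim population is many times larger.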
The same caveats apply here. We cannot know whether these increases represent growing hostility towards Jewish people in the UK or more Jewish people reporting hostility to the police. This issue is further complicated by the fact that police-recorded crime is no longer regarded as meeting the standard required of reliable national statistics due to poorly managed recording practices.
How widespread is antisemitism in the UK?
In 2017, the Institute for Jewish Policy Research (JPR) published what is arguably the most robust mapping of antisemitism in the UK. It estimated the extent of anti-Jewish attitudes using a nationally representative survey.
The JPR found that around 2% of the UK population might be labelled “hardcore” antisemites and a further 3% “softer” antisemites, on the basis that both groups hold multiple antisemitic ideas. It also found that as much as 30% of British society holds at least one antisemitic idea.
It is difficult to say with certainty whether antisemitism is rising in the UK, mainly because police statistics are so unreliable. But when terrorist attacks occur, we seek to understand what has happened and reach for robust information. This creates an urgent need for fresh research drawing on better police data and more recent crime data.
Regardless, the JPR findings show that while strong antisemitism remains relatively uncommon in the UK, the odds of Jewish people encountering neighbours who hold at least one antisemitic idea remain worryingly high. Small wonder, then, that so many felt this attack was just a matter of time.
Julian Hargreaves does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.
A SEPTA train moves along the Market-Frankford Line in West Philadelphia. AP Photo/Matt Rourke
On April 13, 1967, around 1:30 p.m., Lt. Joseph Larkin of the Philadelphia Police Department’s subway unit visited the Philadelphia High School for Girls to interview the school’s librarian, 61-year-old Miriam S. Axelrod.
Axelrod had written a letter to Mayor James H.J. Tate about poor conditions on Philadelphia’s Broad Street Line subway. In her letter, she stated that the escalators in the subway concourse of the Walnut-Locust station were out of operation for several weeks and requested that they “be put in running order.”
Axelrod also asked that “something be done” about people using the subway stairs “as a latrine.”
As a historian of post-1968 Philadelphia, a proud alumna of Girls’ High and a rider of Philadelphia’s mass transit system – run by the Southeastern Pennsylvania Transportation Authority, more commonly known as SEPTA – I was thrilled to find Axelrod’s story among 1960s administrative reports to the police commissioner in the city archives.
Axelrod’s story reminds us that for nearly a century, Philadelphia’s mass transit has been plagued by poor conditions and unstable funding. Commuters’ complaints have often convinced government officials to act. However, no effective plan has ever been implemented to definitively solve the city’s transit crises.
SEPTA’s current turmoil
On Sept. 15, 2025, SEPTA fully restored its service, by court order, after implementing 20% service reductions and a 21.5% fare increase due to a US$213 million budget deficit.
Passengers at Olney Transportation Center in North Philadelphia board a SEPTA bus on Aug. 25, 2025, a day after major service cuts went into effect. AP Photo/Matt Rourke
Some lawmakers have argued that SEPTA is guilty of mismanaging funds, since the agency already received over $1 billion in state subsidies last year for operating assistance and asset improvement.
Public transport in the 1920s
As a longtime Philadelphian who lived in Center City, Miriam Axelrod was familiar with the strengths and shortcomings of public transportation.
When she was 4, her family emigrated from Russia to Carmel, New Jersey. By 1920 they had made Philadelphia their home, just as Jewish people escaping pogroms in Europe were making the city’s Russian community its largest immigrant group. Axelrod grew up living in South and North Philadelphia.
At that time, dozens of private transit companies operated in Philly.
Southern Penn operated city buses. Red Arrow provided suburban trolley service. The Pennsylvania and Reading railroads offered high-speed rail lines. The Philadelphia Rapid Transit Co. (PRT) alone brokered deals with 64 underlying companies to annually rent their services under 999-year leases. Fiscal responsibility for quality transportation was complicated and often dependent on public funding.
During PRT’s early years, it paid the city $15,000 annually for snow removal. In return, the city spent $2 million for street paving and bridge repairs.
Around that time, Axelrod graduated from William Penn High School for Girls. Her classmates keenly noted in their yearbook that she had a “remarkable capacity for starting arguments” in which “any debatable subject will do.”
Six years later, the first segment of the Broad Street Subway, traveling from Olney Station in North Philadelphia to City Hall, opened to the public. Unlike the bus, trolley and railway systems, the El (short for elevated) line and the subway were owned by the city, which leased both to PRT and made the transit company responsible for their maintenance.
On Jan. 1, 1940, the Philadelphia Transportation Co. (PTC), a private company whose 21-member board of directors included five city representatives, among them Mayor Robert E. Lamberton, merged the transit companies and took over PRT’s operations. PTC became responsible for 10,000 employees and for transporting 2 million passengers a day.
PTC also acquired extensive financial responsibilities. Payroll expenses cost $327,000 each week. The annual rate for leasing the subway and El was roughly $3 million. PTC had to provide its 25,000 bondholders an annual income of at least $959,207 while also fulfilling its promise to offer modern transit vehicles.
Overcrowding and frequent fare increases
From the 1940s through the 1960s, Axelrod took public transportation to her job as a librarian at Central High School and later Frankford High School.
Meanwhile, PTC made good on its promise to provide better transit service. In its first eight years of operation, PTC spent $22.8 million to purchase 1,506 new streetcars, buses and trackless trolleys while also improving terminal and plant facilities. The company even purchased advertisements in The Philadelphia Inquirer to highlight its achievements. PTC extended 38 existing routes and created 18 new routes that serviced old residential and industrial areas, along with newly developed neighborhoods.
By 1949, however, many of PTC’s 3.2 million daily riders were complaining about overcrowded subways, the end of free exchanges between popular routes and frequent fare increases.
Passengers ride a subway car in Philadelphia on Feb. 15, 1946. AP Photo
Both PTC and the city faced scrutiny for these issues, although each party had distinct transit obligations outlined in their joint contract. PTC had to provide “safe and adequate service” that included spending on maintenance and replacement of transit equipment. The city was responsible for police and fire services on mass transit along with auditing PTC’s records. Both parties had to agree on fare changes under the state Public Utility Commission’s supervision.
Nevertheless, when issues with mass transit arose, the city could persuade PTC to improve conditions, but it was only required to provide emergency services to commuters.
When Larkin personally addressed Axelrod’s 1967 complaint about the subway, he informed her that the United Elevator Co. was repairing the escalators. He also assured her that the subway unit arrested 45 to 50 intoxicated people each month because they were at risk of falling onto the subway tracks. In “isolated cases,” Larkin explained, police arrested people for public urination and defecation.
Larkin reassured Axelrod that PTC could keep subway conditions clean and under control. In reality, PTC was drowning in obligations and debt.
On Sept. 30, 1968, SEPTA, a state agency formed five years earlier, took over PTC and managed transportation for the city and its surrounding areas. SEPTA bought PTC for approximately $47.9 million, settling the company’s debt, accepting its pension liability and buying out the institution’s roughly 1.7 million shareholders. Now federal and state funding rather than fare revenue largely determined the quality of the city’s public transit.
Five counties in Greater Philadelphia contribute subsidies to SEPTA in exchange for transit service. Philadelphia alone contributes $110 million. State subsidies also help finance SEPTA’s $1.74 billion operating budget, while federal subsidies support SEPTA’s $1 billion capital budget to pay for major repairs and new equipment. State politicians annually vote on funding for SEPTA, but there has not been a concrete solution to the funding crisis.
However, Philadelphians never ceased to demand better transit service. During the 1980s, the Pennsylvania Public Interest Coalition established the Transit Riders Action Campaign, also known as TRAC, which advocated that SEPTA have better safety, funding, accountability, service and stable fares. The Transport Workers Union Local 234 advised TRAC, while several organizations partnered with them: the Action Alliance of Senior Citizens, the Clean Air Council, Disabled in Action and the Delaware Valley Interfaith Coalition.
Even today, local groups such as Save the Train, with outspoken commuters like Axelrod in her day, have launched campaigns to halt service cutbacks and encourage residents to write and telephone legislators who can vote to fund SEPTA. Residents have consistently united to advocate for quality mass transit. All that remains is an agreement among lawmakers to make it possible.
Menika Dirkson does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.
Few ideas in modern science have upended our understanding of reality as much as spacetime, the interwoven weave of space and time at the heart of Albert Einstein’s theory of relativity. Spacetime is often described as the “fabric of reality.”
The Austrian-British philosopher Ludwig Wittgenstein once warned that philosophical problems arise when “language goes on holiday.” Physics, it turns out, may be a prime example.
Over the past century, everyday words such as “time,” “exist” and “timeless” have been repurposed in technical contexts without much examination of what they mean in ordinary language.
In the philosophy of physics, particularly in an approach known as eternalism, the word “timeless” is used literally. Eternalism is the idea that time neither flows nor passes, and that all events across time are equally real within a four-dimensional structure known as the “block universe.”
Eternalism holds that everything exists timelessly, all at once. (Rick Rothenberg/Unsplash), CC BY
On this view, the entire history of the universe is already written, timelessly, into the structure of spacetime. In this context, “timeless” means that the universe itself neither endures nor unfolds in any real sense. There is no becoming. There is no change. There is only a block, and all of eternity exists timelessly within it.
But this leads to a deeper problem. If everything that happens across eternity is equally real, and all events are already there, what does it actually mean for spacetime to exist?
An elephant in the room
There is a structural difference between existence and occurrence. One is a mode of being; the other, a mode of happening.
Imagine an elephant standing next to you. You would probably say, “This elephant exists.” You might describe it as a three-dimensional object, but more precisely it is a “three-dimensional object that exists.”
By contrast, imagine a purely three-dimensional elephant that appears in the room for a single instant: a cross-sectional moment in the life of an existing elephant, appearing and vanishing like a ghost. That elephant does not really exist in the ordinary sense. It occurs. It happens.
An existing elephant endures through time, and spacetime catalogs every moment of its existence as a four-dimensional worldline – the path an object traces through space and time over the course of its existence. The imaginary “occurring elephant” is just one spatial slice of that tube, a single three-dimensional instant.
Now apply this distinction to spacetime itself. What does it mean for four-dimensional spacetime to exist in the sense that the elephant exists? Does spacetime endure in the same sense? Does spacetime have its own set of “present” moments? Or is spacetime – the totality of events occurring across eternity – simply something that occurs? Is spacetime merely a descriptive framework for relating those events?
Eternalism blurs this distinction. It treats all of eternity – that is, all of spacetime – as an existing structure, and regards the passage of time as an illusion. But that illusion is impossible if all of spacetime occurs in the blink of an eye.
To recover the illusion of passing time within this framework, four-dimensional spacetime would have to exist in a way closer to how the existing three-dimensional elephant does – the elephant whose existence four-dimensional spacetime describes.
Every event
Let’s push this line of thought a little further.
If we imagine that every event in the history of the universe “exists” in the block universe, then we might ask: when does the block itself exist? If it does not unfold or change, does it exist outside of time? If so, we are adding another temporal dimension to something that is supposed to be timeless in the literal sense.
To make sense of this, we could construct a five-dimensional framework, with three spatial dimensions and two temporal ones. The second time axis would allow us to say that the events we catalog as four-dimensional spacetime exist in just the way we ordinarily take an elephant in a room to exist within the three spatial dimensions around us.
At this point we step outside established physics, which describes spacetime through only four dimensions. But this exposes a deep problem: we have no coherent way of talking about what it means for spacetime to exist without accidentally reintroducing time through an extra dimension that is not part of physics.
It is like trying to describe a song that exists at a single moment, without being played, heard or unfolded.
From physics to fiction
This confusion shapes how we conceive of time in fiction and popular science.
In James Cameron’s 1984 film The Terminator, all events are treated as fixed. Time travel is possible, but the timeline cannot be changed. Everything already exists in a fixed, timeless state.
In the fourth film of the Avengers franchise, Avengers: Endgame (2019), time travel lets the characters alter past events and reshape the timeline, suggesting a block universe that both exists and changes.
Such change is only possible if the four-dimensional timeline exists in the same way our three-dimensional world does.
But regardless of whether such change is possible, both scenarios assume that the past and the future are out there, ready to be visited. Neither, however, engages with the kind of existence this implies, or with how spacetime differs from a map of events.
Making sense of reality
When physicists claim that spacetime “exists,” they are often working within a framework that has quietly blurred the line between existence and occurrence. The result is a metaphysical model that is unclear at best and, at worst, obscures the very nature of reality.
None of this challenges the mathematical theory of relativity or the empirical science that confirms it. Einstein’s equations still work. But how we interpret those equations matters, especially when it shapes how we talk about reality and how we approach the deeper problems of physics.
Defining spacetime is more than a technical debate: it is a matter of what kind of world we think we live in.
Daryl Janzen does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.
Source: The Conversation – in French – By Fabrice Lollia, PhD in information and communication sciences, associate researcher at the DICEN Ile de France laboratory, Université Gustave Eiffel
African elections, already marked by recurring tensions over transparency and disinformation, may soon enter a new era: that of deepfakes. These AI-generated videos and audio clips, capable of imitating a person’s voice, face and gestures with unsettling realism, are shifting the frontier of political manipulation.
While they can raise a smile when they cast celebrities in humorous spoofs, their use during election periods poses a serious threat to democratic stability. In the United States, India and Slovakia, deepfakes have already been deployed to sway public opinion. The central question is therefore simple: is Africa ready to confront this new tool of electoral manipulation?
As a researcher in information and communication sciences, I study the circulation of information, disinformation and communication vulnerabilities in crisis contexts. The emergence of deepfakes illustrates these tensions. In Africa, where a hyperconnected youth dominates the electorate but digital literacy remains uneven, the risk is especially high. Here I offer an info-communicational reading applied to African elections.
Worrying precedents worldwide
Deepfakes are no longer a futuristic hypothesis. They have already marked key electoral episodes, offering lessons for African countries.
In Slovakia in 2023, a few days before the parliamentary elections, an audio deepfake circulating on Facebook and Telegram attributed to Michal Simecka, leader of the pro-Western party Progressive Slovakia, a conversation in which he planned to rig the vote. The content sowed doubt, to the benefit of Robert Fico’s populist camp. It is the first documented case in Europe of a deepfake that may have influenced a national election.
In the United States in 2024, during the Democratic primary in New Hampshire, voters received a deepfake phone call imitating Joe Biden’s voice and urging them not to vote. This illustrates the use of deepfakes to suppress turnout, a frontal attack on democracy.
In India, the 2024 general elections were marked by an explosion of deepfakes. These AI-generated videos and audio clips were disseminated massively across social media. Bollywood actors and even deceased political figures were staged to support or attack candidates.
These cases show that deepfakes aim not only to persuade but, above all, to sow doubt, blur reference points and undermine trust.
• A weak culture of verification: many users share content without checking its origin;
• Extreme virality: messages and videos spread rapidly in closed groups and are difficult to monitor;
• Contested electoral institutions: citizens’ trust is fragile, which lends added credibility to false information.
Weak signals are already appearing:
In Nigeria, in 2023, concerns emerged over the circulation of manipulated videos during the presidential election.
In Kenya, in 2022, TikTok and Facebook hosted numerous pieces of manipulated political content, some close to falsification techniques, as part of disinformation campaigns.
Africa is thus in a phase of latent vulnerability, with all the ingredients in place for deepfakes to quickly become a political weapon.
Unlike classic “fake news,” deepfakes draw their power from the synergy of image and sound, creating a sensory illusion that is hard to contest. Their effectiveness rests not only on their capacity to deceive but on their power of symbolic destabilization.
They can thus manufacture a scandal against a candidate, amplify ethnic or religious divisions and sow confusion.
This erosion of the contract of truth constitutes a major communication crisis that weakens African democracies already contending with precarious institutional balances.
An info-communicational reading
Information and communication sciences allow us to analyze this phenomenon through a wider lens. Three angles are particularly relevant:
First, in terms of mediology and the circulation of rumors, deepfakes belong to a long history of communication technologies used as instruments of power. Uncertainty, lack of transparency and the opacity of certain information spheres encourage the proliferation of rumors, particularly in electoral or political contexts. Deepfakes add a technological layer that gives the rumor a veneer of credibility.
Second, within the sociotechnical logic of platforms, algorithms such as TikTok’s favor sensational and polarizing content. In this system, the deepfake becomes an algorithmic weapon amplified by the attention economy.
Finally, in an African context marked by linguistic, educational and technological divides, the reception of deepfakes varies widely. Uneven digital literacy fosters differentiated appropriations, accentuating asymmetries of understanding.
Many avenues are emerging, but implementing them remains complex:
Google, Meta and Microsoft are developing tools capable of identifying synthetic content. But these detection technologies remain costly and rarely accessible to African media.
Initiatives such as Africa Check play a crucial role in media fact-checking, but they are under-resourced relative to the mass of manipulated information.
From a legal standpoint, some African countries, such as Ghana and Uganda, are legislating against fake news, but there is a risk that these loosely framed laws will serve political censorship rather than citizen protection. A pan-African approach through the African Union or regional communities would carry more credibility.
Teaching the young and the not-so-young to spot, verify and question content is a strategic democratic investment. School and university curricula and media education – the long-term levers – must incorporate digital and media literacy as civic competencies.
Toward African digital sovereignty?
The deepfake threat also invites reflection on African digital sovereignty. Africa cannot depend solely on Western tech giants to secure its information space. Developing pan-African research and detection laboratories, combined with civil-society initiatives, could constitute a homegrown response.
Moreover, South-South cooperation (for example, between India and certain African countries) could foster the exchange of technical and pedagogical solutions. The point is not only to counter manipulation but to build a shared digital culture capable of restoring citizens’ trust.
The cases in Slovakia, India and the United States show that deepfakes are already a formidable electoral weapon. In Africa, their entry into the political game is only a matter of time.
But the threat cannot be reduced to a technology. It reveals a deeper communication vulnerability, characterized by a crisis of confidence that undermines democratic legitimacy. The challenge is therefore not only to detect deepfakes but to rebuild a relationship of truth between the governed and those who govern.
Educating citizens, strengthening the media, developing local research and promoting pan-African regulation are all avenues for meeting this challenge. Beyond the technical question, what is at stake is Africa’s capacity to protect the integrity of its democratic choices.
Fabrice Lollia does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.
Source: The Conversation – Africa – By Cormac Price, Post-doctoral fellow, HerpHealth lab, Unit for Environmental Sciences and Management, North-West University; University of KwaZulu-Natal
Black mambas (Dendroaspis polylepis) are Africa’s longest, most famous venomous snakes. Despite their fearsome reputation, these misunderstood snakes are vital players in their ecosystems. They keep rodent populations in check and, in turn, help to protect crops and limit disease spread. The species ranges widely across sub-Saharan Africa, from Senegal to Somalia and south into South Africa. They can adapt to many environments.
Zoologist Cormac Price, in new research with professors Marc Humphries and Graham Alexander and reptile conservationist Nick Evans, found that black mambas can be indicators of heavy metal pollution. We asked him about it.
How do black mambas indicate toxic pollution?
It’s about bio-accumulation. Bioaccumulation happens when chemicals, like pesticides or heavy metals, build up in an organism’s body. These toxins come from polluted environments, from waste products of human activities like manufacturing. They pollute water or soil and gradually accumulate in plants and animals.
If toxins are present in the environment, they may first be taken in by plants, and then by animals that eat the plants, and animals that eat those animals. Black mambas are quite high up the food chain, so a lot of the toxins would accumulate in their bodies. These poisonous substances can reach dangerous levels, causing health problems for whatever eats them.
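The compounding effect described here can be sketched with a toy calculation. The biomagnification factors below are hypothetical, chosen purely to illustrate how a predator near the top of the food chain ends up carrying far more of a toxin than the environmental baseline; they are not figures from the study.

```python
def concentration_at_top(base_conc, bmfs):
    """Toxin concentration in the top predator, given a baseline
    environmental concentration and a biomagnification factor (BMF)
    for each step up the food chain."""
    conc = base_conc
    for bmf in bmfs:
        conc *= bmf  # each trophic transfer multiplies the burden
    return conc

# Hypothetical chain: soil/water -> plants -> rodents -> black mamba
top = concentration_at_top(0.05, [2.0, 3.0, 4.0])  # units: mg/kg
print(top)  # 1.2 mg/kg, i.e. 24x the environmental baseline
```

Even modest per-step factors multiply together, which is why a long-lived, site-faithful predator like a black mamba can register pollution that is barely detectable in the soil itself.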
We tested the presence of four types of heavy metals (arsenic, cadmium, lead and mercury) in the bodies of black mambas.
All our samples were from the eThekwini Municipality (greater Durban area) in South Africa. Durban is a busy shipping container port and has a large industrial sector that includes chemicals, petrochemicals and automotive manufacturing. Alongside all this industry the municipality also has a network of conservancies and green spaces, known as the Durban Metropolitan Open Space System.
We chose to test for these metals because they are widely used in different industries and can cause drastic negative effects in the body. Mercury primarily damages the nervous system, arsenic can cause cancer and skin lesions, cadmium harms kidneys and bones and lead mainly affects brain development and blood functions. Because these metals accumulate over time and are difficult to break down, even low-level exposure can lead to chronic poisoning and long-term health problems.
Black mambas appear to be doing well in Durban and taking advantage of the abundance of rodents, which they eat. Wherever there is human settlement there will be waste and discarded food which rodents take full advantage of. Black mambas can also be quite site-specific when not disturbed, living in the same refuge for many years, giving a clearer indication of pollution levels at that specific site. This makes the snakes potentially good bioindicator species.
A bioindicator species is one that helps us understand the health of an environment. Because they are sensitive to changes like pollution or habitat damage, their presence, absence or condition can reveal if an ecosystem is in good condition or is experiencing increases of pollution or degradation.
The pollutants can be detected and quantified from a non-invasive, harmless scale clipping. Snake scales are composed mostly of keratin, the same protein that makes up human hair and nails. Clipping a very thin slice of snake scale is as harmless as clipping a human fingernail.
We collected 31 mambas that had already been killed by vehicles, people or dogs, and tested muscle and liver samples from them for toxins. We also took scale clippings from 61 live snakes.
This was the first time in Africa that a species of snake was tested to see if it could be used as an indicator species of heavy metal pollution.
What did you find?
We found that the heavy metal concentrations in scales correlated with those found in the muscle and liver samples. For three of the four metals, scales were as accurate for testing as muscle and liver samples. So the harmless testing method is as good as the more invasive one.
For arsenic, cadmium and lead, snakes in the open, natural sites of the Durban Metropolitan Open Space System were accumulating significantly lower concentrations of these toxins than snakes in more industrial and commercial areas. The differences for mercury were less significant, owing to its more volatile nature and its capacity to travel through the environment.
What made you test mamba scales in the first place?
In 2020, I attended a conference on amphibians and reptiles, where a friend of mine presented his work on heavy metal pollutants in tiger snakes in the city of Perth, Australia.
I’ve also been working with Nick Evans of KZN Amphibian & Reptile Conservation for some years, on urban reptile ecology. Nick began collecting scale clippings, and I began to realise, while looking through the literature, how novel this was on a continental scale. Snakes had never been tested as a potential bioindicator species of heavy metal pollution in Africa previously.
Marc Humphries is a professor of environmental chemistry, and I was aware of his work on lead exposure in Nile crocodiles at St Lucia, a wetland in South Africa. When he expressed interest in examining the scale clippings, we were thrilled. Graham Alexander’s expertise in snake behaviour in general and specifically snakes in Durban was also instrumental in the success of this research.
How can this help fight pollution?
The fight against pollution is in the hands of the municipality and city managers. What the snakes are doing is warning us of the increasing danger these pollutants pose to environmental health and ultimately human health. They are also showing us how important open spaces are to the overall environmental and human health of the city of Durban. The snakes are telling us a story; what people in authority decide to do with this story rests with them.
Nick Evans of KZN Amphibian & Reptile Conservation made valuable contributions to the research and was a co-author on the article.
Cormac Price does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.
Scientists have always needed someone to help foot the bill for their work.
In the 19th century, for example, Charles Darwin made an expensive voyage to the southernmost tip of the Americas, visiting many other places en route, including his famous trek through the Galapagos Islands. The fossil evidence Darwin collected over his five-year journey eventually helped him think about the infinite variety of species, both past and present.
The HMS Beagle and its crew traversed these places while testing clocks and drawing maps for the Royal Navy, and the voyage was funded by the British government. Darwin’s position as a naturalist aboard the ship was unpaid, but, fortunately, his family’s private assets were enough to cover his living expenses while he focused on his scientific work.
Today, government and private funding both remain important for scientific discoveries and translating knowledge into practical applications.
As a professor of science education, one of my goals while preparing future teachers is to introduce them to the characteristics of scientific knowledge and how it is developed. For decades, there has been a strong consensus in my field that educated citizens also need to know about the nature of the scientific enterprise. This includes understanding who pays for science, which can differ depending on the type of research, and why it matters.
Funding for science is more than just the amount of money. To a large extent, the organizations that fund research set the agenda, and different funders have different priorities. It can also be hard to see the downstream benefits of scientific research, but they typically outweigh the upfront costs.
Basic research leads to new knowledge
Basic research, also called fundamental research, involves systematic study aimed at acquiring new knowledge. Scientists often pursue research that falls into this category without specific applications or commercial objectives in mind.
Of course, it costs money to follow where curiosity leads; scientists need funding to pursue questions about the natural and material world.
About 40% of basic research in the U.S. has been federally funded in recent years. The government makes this investment because basic research is the foundation of long-term innovation, economic growth and societal well-being.
Funding for basic research is distributed by the federal government through several agencies and institutes. For more than a century, the U.S. National Institutes of Health have sponsored a breadth of scientific and health research and education programs. Since 1950, the National Science Foundation has advanced basic research and education programs, including the training of the next generation of scientists.
Other federal agencies have complementary missions, such as the Defense Advanced Research Projects Agency, created in response to the Soviet Union’s launch of Sputnik in 1957. DARPA focuses on technological innovations for national security, many of which have become fixtures of civilian life.
Through a competitive review process at these agencies, subject experts vet research proposals and make funding recommendations. The amount of funding available from the NIH, NSF and DARPA varies annually, depending on congressional appropriations. Most of the awarded funds go to universities, research institutions and other health and science organizations that conduct research. The sum of research dollars awarded differs among states.
Applying research
Scientists undertake basic research to generate new knowledge with no specific end goal in mind. Applied research is different in that it aims to find solutions to real-world problems.
Research that investigates specific, practical objectives or improvements with commercial potential is more likely to attract private investors. Companies directly invest in research and development to gain a competitive edge and turn a profit. Private industry is more likely to sink dollars into applied rather than basic research because the potential payoff in the form of a new product or advance is more visible.
From discovery to real-world implementation
As applied research addresses problems, promising findings are moved toward clinical application or mainstream use. This research and development process can lead to tangible benefits for individuals and society.
According to numbers reported by a coalition of research institutions, every dollar that NIH spends on research leads to $2.56 of new economic activity. For the 2024 fiscal year, this means, of the $47.35 billion Congress appropriated for NIH, the $36.94 billion awarded to U.S. researchers fueled $94 billion in activity through employment and the purchase of research-related goods and services.
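The multiplier arithmetic behind that estimate is easy to check against the figures quoted above:

```python
# Checking the reported FY2024 NIH multiplier arithmetic.
multiplier = 2.56        # dollars of new economic activity per NIH research dollar
awarded_billion = 36.94  # NIH dollars awarded to U.S. researchers, in billions

activity_billion = awarded_billion * multiplier
print(round(activity_billion, 1))  # ~94.6, consistent with the ~$94 billion figure
```

The small gap between the $47.35 billion appropriation and the $36.94 billion awarded reflects NIH spending that goes to its own operations and intramural research rather than external awards.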
Economist Pierre Azoulay and colleagues recently imagined an alternative history in which NIH was 40% smaller and disbursed less money – a budget akin to current federal proposals. They argued that more than half of the drugs the FDA has approved since 2000 are tied to NIH-funded research that would have been cut under this scenario. This thought experiment underscores how valuable those basic research dollars are.
‘Last Week Tonight with John Oliver’ points out some seemingly outlandish basic research that has yielded surprising real-world applications.
Even seemingly out-of-touch or abstract studies may precede discoveries with major impact. Basic research into bee nectar foraging and movement around the colony, recently mentioned on “Last Week Tonight with John Oliver,” led to the development of an algorithm that distributes internet traffic between computer servers, which now powers the multibillion-dollar web-hosting industry. Learning about applications of research with visible societal impacts can help people understand and appreciate the role of funding in the scientific enterprise.
Ryan Summers receives funding from the National Science Foundation (NSF) and the National Institutes of Health (NIH). He is affiliated with the Association for Science Teacher Education (ASTE), NARST, which is a global organization for improving science education through research, and the National Science Teaching Association (NSTA).