Farming is tough even during the best of times. Rising costs and the dangers posed by climate change will only make it more challenging in the years to come.
That’s where our work comes in. At MacEwan University, through our spin-out company PimaSens, we have developed Agrilo — a low-cost soil testing sensor paired with a smartphone app.
Our goal is simple: give farmers clear, real-time guidance on fertilizer use so they can save money, boost yields and protect the environment.
How the sensor works
Agrilo takes technology we first built in the lab and translates it into an easy-to-use diagnostic tool for the field. Unlike traditional soil testing, which often requires sending samples to a lab and waiting days for results, Agrilo provides answers in minutes.
Farmers collect a small soil sample, react it with a pre-filled solution, place droplets onto a paper-based or vinyl colorimetric sensor, and capture the result using their phone camera. The Agrilo app then interprets the colour change, quantifies nutrient levels, and generates fertilizer recommendations tailored to the field.
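The quantification step the app performs can be sketched in code. The snippet below is a hypothetical illustration, not PimaSens’s actual algorithm: it fits a linear calibration curve to colour-intensity readings from standards of known concentration, then inverts that curve to estimate an unknown sample’s nutrient level. All numbers here are invented for the example.

```python
# Hypothetical colorimetric quantification sketch -- NOT PimaSens's
# actual algorithm. A linear calibration curve (intensity = a*c + b)
# is fitted to standards of known concentration, then inverted to
# estimate the concentration of an unknown sample.

def fit_calibration(standards):
    """Least-squares fit of intensity = a * concentration + b."""
    n = len(standards)
    sx = sum(c for c, _ in standards)
    sy = sum(i for _, i in standards)
    sxx = sum(c * c for c, _ in standards)
    sxy = sum(c * i for c, i in standards)
    a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    b = (sy - a * sx) / n
    return a, b

def concentration(intensity, a, b):
    """Invert the calibration curve to estimate concentration."""
    return (intensity - b) / a

# Made-up nitrate standards: (concentration in mg/L, mean colour intensity).
standards = [(0, 250.0), (10, 200.0), (20, 150.0), (40, 50.0)]
a, b = fit_calibration(standards)
print(round(concentration(125.0, a, b), 1))  # 25.0 mg/L for this sample
```

In practice the app would also need to extract a stable intensity reading from the phone camera (for example, averaging one colour channel over the sensor spot and correcting for lighting), which is where most of the real engineering effort lies.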
Each Agrilo sensor costs about $10 and is designed to detect a specific nutrient or soil property. The full suite includes sensors for nitrate, phosphate, potassium, pH, sulphur, magnesium, manganese, calcium, boron, iron, natural organic matter, cation exchange capacity and more.
A step-by-step guide to using the Agrilo sensor for real-time soil monitoring. (PimaSens)
Farmers can select the tests most relevant to their crops and soils. These results feed directly into Agrilo’s smartphone app, which analyzes patterns and suggests optimal fertilizer adjustments.
This precision is critical. Overuse of fertilizer wastes money and increases greenhouse gas emissions, while underuse limits yields. Getting the balance right improves farm efficiency and protects ecosystems, delivering several benefits:
● Healthier soil through balanced nutrient application.
● Higher crop yields from optimized fertilizer use.
● Lower costs by reducing waste.
● Reduced environmental harm from nutrient runoff and fertilizer-related emissions.
The research behind the tool
Our sensors and platform have been validated in peer-reviewed research, with the Agrilo version simplified for ease of use by farmers. We also hold a provisional patent, with a full filing in progress. This ensures that the innovation is both scientifically sound and protected for scaling.
Agrilo was created to be both affordable and accessible. (Author provided)
Conventional soil testing often costs hundreds of dollars and involves long wait times. Agrilo delivers the same type of data — validated against results from traditional labs — at a fraction of the cost and in real time.
This opens up opportunities not just for Canadian farmers but also for communities worldwide, including schools and small-scale farmers in the Global South.
One of the most exciting aspects of Agrilo is its versatility. Beyond the farm, Agrilo doubles as an education platform. In classrooms, students can learn hands-on how soil nutrients affect crops, food security and ecosystems.
Using the same colorimetric sensors as farmers, students can connect textbook science to real-world environmental challenges — making soil chemistry, agriculture and sustainability more tangible.
By making precision agriculture practical and affordable, we can help address these challenges at scale — showcasing how research developed in Canadian labs can benefit farms, classrooms and communities worldwide.
Looking ahead
Our team is continuing to refine Agrilo. We are already testing the platform with farmers and partners in Canada, Kenya, Costa Rica and beyond.
At the same time, we are building partnerships with schools and international organizations to use Agrilo as both a farming tool and a hands-on educational resource. Several high schools in Alberta have started to try out the Agrilo tool to enhance applied science learning.
Ultimately, our vision is to make precision agriculture accessible to everyone — not just large-scale industrial operations. With the right tools, all farmers can play a critical role in feeding the world sustainably, protecting ecosystems and helping their countries meet their climate goals.
Samuel Mugo is a co-founder of PimaSens. He receives funding from the Natural Sciences and Engineering Research Council of Canada.
Mohammed Elmorsy is a co-founder of PimaSens. He has received research funding related to this work through Riipen and Alberta Innovates Summer Research Studentships.
Space-time provides a powerful description of how events happen. (MARIOLA GROBELSKA/Unsplash), CC BY
Whether space-time exists should neither be controversial nor even conceptually challenging, given the definitions of “space-time,” “events” and “instants.” The idea that space-time exists is no more viable than the outdated belief that the celestial sphere exists: both are observer-centred models that are powerful and convenient for describing the world, but neither represents reality itself.
But what would it mean for a world where everything that has ever happened or will happen somehow “exists” now as part of an interwoven fabric?
Events are not locations
It’s easy to imagine past events — like losing a tooth or receiving good news — as existing somewhere. Fictional representations of time travel underscore this: time travellers alter events and disrupt the timeline, as if past and future events were locations one could visit with the right technology.
Philosophers often talk this way too. Eternalism says all events across all time exist. The growing block view suggests the past and present exist while the future will come to be. Presentism says only the present exists, while the past used to exist and the future will when it happens. And general relativity presents a four-dimensional continuum that bends and curves — we tend to imagine that continuum of events as really existing.
The confusion emerges from the definition of the word “exist.” With space-time, it’s applied uncritically to a mathematical description of happenings — turning a model into an ontological theory about the nature of being.
Theoretical physicist Sean Carroll explains presentism and eternalism.
A totality
In physics, space-time is the continuous set of events that happen throughout space and time — from here to the furthest galaxy, from the Big Bang to the far future. It is a four-dimensional map that records and measures where and when everything happens. In physics, an event is an instantaneous occurrence at a specific place and time.
An instant is the three-dimensional collection of spatially separated events that happen “at the same time” (with relativity’s usual caveat that simultaneity depends on one’s relative state of rest).
Space-time is the totality of all events that ever happen.
It’s also our most powerful way of cataloguing the world’s happenings. That cataloguing is indispensable, but the words and concepts we use for it matter.
There are infinitely many points in the three dimensions of space, and at every instant as time passes a unique event occurs at each location.
Positionings throughout time
Physicists describe a car travelling straight at constant speed with a simple space-time diagram: position on one axis, time on the other. Instants stack together to form a two-dimensional space-time. The car’s position is a point within each instant, and those points join to form a worldline — the full record of the car’s position throughout the time interval, whose slope is the car’s speed.
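The construction described above can be made concrete with a few lines of code. This is an illustrative sketch with invented numbers: it records the car’s position at each instant to build the worldline, then recovers the car’s speed as the worldline’s slope.

```python
# Sketch of the one-dimensional space-time diagram described above:
# the worldline of a car moving at constant speed. Values are
# illustrative, not from the article.

def worldline(x0, v, times):
    """Return (t, x) event pairs: the car's position at each instant."""
    return [(t, x0 + v * t) for t in times]

def slope(events):
    """Speed = rise over run between the first and last events."""
    (t0, x0), (t1, x1) = events[0], events[-1]
    return (x1 - x0) / (t1 - t0)

line = worldline(x0=0.0, v=25.0, times=range(0, 11))  # 25 m/s for 10 s
print(slope(line))  # recovers the speed from the worldline's slope
```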
Real motion is far more complex. The car rides along on a rotating Earth orbiting the sun, which orbits the Milky Way as it drifts through the local universe. Plotting the car’s position at every instant ultimately requires four-dimensional space-time.
Space-time is the map of where and when events happen. A worldline is the record of every event that occurs throughout one’s life. The key question is whether the map — or all the events it draws together at once — should be said to exist in the same way that cars, people and the places they go exist.
Objects exist
Consider what “exist” means. Objects, buildings, people, cities, planets, galaxies exist — they are either places or occupy places, enduring there over intervals of time. They persist through changes and can be encountered repeatedly.
Treating occurrences as things that exist smuggles confusion into our language and concepts. When analyzing space-time, do events, instants, worldlines or even space-time as a whole exist in the same sense as places and people? Or is it more accurate to say that events happen in an existing world?
On that view, space-time is the map that records those happenings, allowing us to describe the spatial and temporal relationships between them.
Space-time does not exist
Events do not exist, they happen. Consequently, space-time does not exist. Events happen everywhere throughout the course of existence, and the occurrence of an event is categorically different from the existence of anything — whether object, place or concept.
First, there is no empirical evidence that any past, present or future event “exists” in the way that things in the world around us exist. Verifying the existence of an event as an ongoing object would require something like a time machine to go and observe it now. Even present events cannot be verified as ongoing things that exist.
In contrast, material objects exist. Time-travel paradoxes rest on the false premise that events exist as revisitable locations. Recognizing the categorical difference between occurrence and existence resolves these paradoxes.
Second, this recognition reframes the philosophy of time. Much debate over the past century has treated events as things that exist. Philosophers then focus on their tense properties: is an event past, present or future? Did this one occur earlier or later than that one?
A stencil interpretation of René Magritte’s 1929 painting, ‘La Trahison des images,’ in which the artist points out that the representation of an object is not the object itself. (bixentro/Wikimedia Commons)
These discussions rely on an assumption that events are existent things that bear these properties. From there, it’s a short step to the conclusion that time is unreal or that the passage of time is an illusion, based on the observation that the same event can be labelled differently from different standpoints. But the ontological distinction was lost at the start: events don’t exist, they happen. Tense and order are features of how happenings relate within an existing world, not properties of existent objects.
Finally, consider relativity. It is a mathematical theory that describes a four-dimensional space-time continuum, and not a theory about a four-dimensional thing that exists — that, in the course of its own existence, bends and warps due to gravity.
Conceptual clarity
Physics cannot describe space-time itself as something that actually exists, nor can it account for any change it might experience as an existing thing.
Space-time provides a powerful description of how events happen: how they are ordered relative to one another, how sequences of events are measured to unfold and how lengths are measured in different reference frames. If we stop saying that events — and space-time — exist, we recover conceptual clarity without sacrificing a single prediction.
Daryl Janzen does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.
Source: The Conversation – UK – By Ioannis Kokkoris, Professor of Competition Law and Economics, Queen Mary University of London
A US judge recently decided not to break up Google, despite a ruling last year that the company held a monopoly in the online search market. Between Google, Microsoft, Apple, Amazon and Meta, there are more than 45 ongoing antitrust investigations in the EU (the majority under the new EU Digital Markets Act) and in the US.
While the outcome could have been much worse for Google, other rulings and investigations have the potential to cut to the heart of how the big tech companies make money. As such, these antitrust cases can drive real change around how the tech giants do business – with implications both for their competitors and for ordinary users.
Some investigations focus on potential breaches of longstanding competition legislation, such as restricting the ability of software to work with other software, while others address controversies that have emerged only in the last few years.
Previous antitrust cases have been based on decades-old competition legislation, namely the US Sherman Act, passed in 1890, and the EU’s treaty on the functioning of the European Union, the first iteration of which was signed in 1957. More recent cases in the EU have been based on the newer Digital Markets Act.
A quick search shows at least 15 different countries (including individual countries within the EU) where competition authorities have initiated or concluded investigations into Google’s business practices.
When US Judge Amit Mehta decided in August not to order a break-up of Google, or to force the company to sell off its internet browser, Google Chrome – both of which had been raised as potential outcomes – he instead imposed a number of other commitments on the company.
Hefty fines
In September 2025, the European Commission also imposed a fine of €2.95 billion (£2.5 billion) on Google, in relation to its advertising technology practices. The commission said that Google favoured its own online display advertising technology services “to the detriment of competing providers”.
In a statement, Google called the fine “unjustified” and said the changes would “hurt thousands of European businesses by making it harder for them to make money”.
These investigations are not limited to the search giant, however. In the last few years, Microsoft, Apple and Meta have also been under investigation by the EU. So how should we interpret this flurry of enforcement against the tech giants and what does the future hold for them?
Competition investigations hit at the core of these companies’ business activities, so they have an extremely high incentive to fight for every conceivable aspect of their business model. Voluntarily giving some parts of their business up would mean foregoing substantial profits.
Companies clearly have to weigh up the potential downsides of compromising over their business approaches against hefty fines and major restrictions over how they operate in particular territories. In the US case involving Google, major changes to the company had been on the table, including a sell-off of the Google Chrome browser. Needless to say, this would have dealt a major blow to the company.
In 2023, the European Commission started an investigation into Microsoft over the company tying its Microsoft Teams software to its Office 365 and Microsoft 365 software suites. The investigation was initiated following a complaint by Slack, which makes software that competes with Teams.
The way this case concluded is one example of how tech companies can mitigate damage to their business. Microsoft presented its own commitments to the European Commission over the Teams investigation.
The tech giant had to amend its original proposal following market testing by the European Commission, but in September, they were accepted by the Commission. The commitments include making available versions of Office 365 and Microsoft 365 without Teams and at a reduced price.
Behaviour change
Where possible, by offering their own commitments, companies can retain a degree of control and, potentially, avoid a fine. Other recent cases show that those fines can be substantial. In April, the Commission fined Apple €500 million after it said the company had breached the Digital Markets Act by preventing app developers from steering users to cheaper deals outside the app store.
In July, Apple launched an appeal against the decision, saying that the Commission went “far beyond what the law requires” in the dispute.
The commission has also investigated Meta over the company’s “pay-or-consent” advertising model. Under the model, EU users of Facebook and Instagram had a choice between their personal data gathered from different Meta services being combined for advertising, or paying a monthly subscription for an ad-free service. Finding that the company had breached the Digital Markets Act, the Commission fined Meta €200 million.
The commission says that when it decides that companies are not complying with legislation, it can impose fines of up to 10% of a company’s total worldwide turnover. Such fines can rise to 20% in cases of repeated infringement.
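As a back-of-the-envelope illustration of those ceilings (using a made-up turnover figure, not any real company’s accounts), the caps work out as follows:

```python
# Illustrative arithmetic for the fine ceilings described above:
# 10% of total worldwide turnover, rising to 20% for repeated
# infringement. The turnover figure below is hypothetical.

def max_fine(worldwide_turnover, repeat_infringement=False):
    """Return the maximum fine the commission could impose."""
    cap = 0.20 if repeat_infringement else 0.10
    return worldwide_turnover * cap

turnover = 300e9  # hypothetical $300 billion annual turnover
print(max_fine(turnover))        # 10% ceiling: 30 billion
print(max_fine(turnover, True))  # 20% ceiling for repeat infringement
```

Even the €2.95 billion Google fine sits well below these ceilings, which shows how much further enforcement could, in principle, go.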
In cases of continued non-compliance, the commission can oblige tech companies to sell a business or parts of it, or ban them from acquiring other companies in areas related to their non-compliance.
Such intervention is likely to place boundaries on any big tech company with regard to their business practices towards competitors and users. As discussed, we have already started to see some evidence of this.
Users are now able to use different services from these companies without having to consent to their data being combined. There will also be changes in how users engage with some of these services. For example, you may not be able to click on a hyperlinked hotel in a map contained in search results in order to go to its booking website.
Google reduced such linking for Google Maps in the EU because of concerns about the company’s dominance in the search market.
But overall, the expectation is that in the not too distant future, big tech will be more constrained in the business models they adopt, especially where they relate to market competition.
Ioannis Kokkoris does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.
Graduates aren’t guaranteed a job. (Nqobile Vundla/Unsplash)
Study hard, get your degree, and then step confidently into a stable, well-paid job. That’s long been the assumption about how to secure a livelihood: in neat, predictable stages. But it is increasingly out of touch with reality. Secure jobs are no longer guaranteed after obtaining a tertiary qualification.
Up-to-date and reliable data on graduate unemployment in Africa is hard to come by. A 2014 British Council study estimated that nearly one in four Nigerian graduates (23.1%) were unemployed. In Kenya, the study said, it took graduates an average of five years to secure their first job. In South Africa, graduate unemployment stood at just 5.8% in 2008. By 2023, this had more than doubled to 11.8%. When looking specifically at young graduates aged 20–29 – a useful proxy for those newly entering the job market – the figure is even starker: nearly one in three (30.3%) were unemployed in 2023.
These figures signal a crisis. The mismatch between graduates and opportunities makes it clear that it’s essential to find diverse ways of making a living.
So what do graduates do to generate livelihoods? We recently conducted research tracking more than 500 African tertiary graduates across 21 universities (nine in African countries and 12 in other countries) over five years to provide some answers.
The findings showed that graduates were piecing together livelihoods from multiple sources rather than walking the straight road of a career. Their paths were complex. Only 16% of the total sample moved smoothly from tertiary education into a job and remained in employment over the period of the survey.
Knowing this, universities can help provide graduates with the skills and resources they will need for the real world.
Graduates build portfolios of income
The study showed that African graduates are resourceful in generating livelihoods. From their responses we identified some trends.
First, they do more than one thing. Almost half of the respondents were engaged in more than one activity – for example, working while running a side business or pursuing further studies. A Ugandan graduate explained how he juggled salaried work, family farming projects and continued education.
Second, they make education itself a livelihood. Scholarships, postgraduate degrees and research opportunities provided both income and stability. Others use underemployment (jobs that don’t match their qualification, skills or ambitions) as stepping stones, gaining experience while waiting for better opportunities.
Third, entrepreneurship or self-employment has a role. While only a small minority relied solely on their own businesses, about a fifth of graduates supplemented their income in this way. Some sold goods, others started NGOs or social enterprises, and many saw entrepreneurship as a safety net in an unpredictable labour market.
But this isn’t just about necessity. Graduates are motivated by opportunity, passion projects, and the chance to build something of their own, often with family members. This challenges the common view that entrepreneurship in Africa is driven only by desperation. In reality, necessity and opportunity overlap, and both are part of how graduates make a living.
Beyond ‘waiting’ for an opportunity
The pathways described by graduates don’t fit the conventional picture of being “stuck” or “unemployed”. Instead, they are marked by movement, improvisation and continuous reinvention.
Even when underemployed, graduates often describe their jobs as dignified or at least as stepping stones. They are investing in their futures, sharpening skills and building networks.
This kind of agency (the capacity to navigate uncertainty and imagine alternative futures) is a crucial resource. It allows young Africans to find dignity and purpose in contexts where institutional support and job opportunities are limited.
What universities can do differently to prepare graduates
These findings raise tough questions for universities. If the education-to-employment pipeline is so complex, what role should higher education play in preparing graduates? Our research points to some answers:
First, universities must stop clinging to outdated concepts like “employability”. Degrees are not tickets to stable jobs. Instead, education should prepare students for diversified, non-linear livelihoods. This means teaching not just technical skills but also resilience, adaptability and entrepreneurial thinking.
Entrepreneurship education is one starting point. Courses on business planning, financial management and networking can help graduates who want to start or sustain ventures.
But skills alone are not enough. Without supportive ecosystems, such as incubators, access to finance and mentorship, many small businesses fail. Universities could act as hubs, linking students and graduates to government programmes, private sector partners and alumni networks. Partnerships between universities and government agencies, like South Africa’s National Youth Development Agency, which funds business ventures, need to be forged.
Career services also need to evolve. Rather than focusing narrowly on job placements, universities should help students explore multiple career paths, build social capital and access opportunities for income diversification. Practical resources, like co-working spaces, short courses or “micro-credentials” that allow graduates to quickly pick up new skills, and seed funding could give graduates a head start.
Finally, alumni networks are a powerful but underused asset. Showcasing graduates who have successfully diversified their income can inspire others and change the prevailing narrative.
Education should no longer be seen simply as a bridge to wage employment, but as a platform for building flexible, multi-dimensional livelihoods.
A new story of graduate life
The African youth population is still growing, and the labour market will not suddenly expand to meet demand. That reality can sound daunting. But the stories of young graduates also show resilience, creativity and determination. They are not passively “waiting” for jobs – they are actively constructing futures, often against the odds.
Universities and other tertiary education institutions must catch up. By supporting entrepreneurship, fostering networks and recognising the reality of non-linear transitions, they can help graduates navigate uncertainty with confidence.
The future of work in Africa will not be defined by smooth transitions, but by complex entanglements. Recognising and supporting these entanglements may be one of the most important tasks of higher education in the decades ahead.
This article was produced in the context of The Imprint of Education study that was conducted by the Human Sciences Research Council, South Africa between August 2019 and July 2025, in partnership with and funded by the Mastercard Foundation. The views expressed are those of the authors alone and do not necessarily represent those of the Mastercard Foundation, its staff, or its Board of Directors. Andrea Juan holds an honorary research fellowship at the University of KwaZulu-Natal, School of Law.
Adam Cooper holds an honorary research associateship at Nelson Mandela University, Chair in Youth Unemployment, Empowerment and Employability.
Watching the space between two nations shrink became a regular pastime for Detroiters over the past decade as the segments of the Gordie Howe International Bridge gradually grew, extending meter by meter from Ontario on one side and Michigan on the other.
People living close to the existing bridge will gain some relief from truck traffic and pollution. But this burden won’t simply disappear – it will be shifted nearby, where others will have to cope with increased traffic flowing over six lanes 24 hours a day.
In the early days, the debate concentrated on who would own the bridge and who would pay for it.
Once just a concept known by the acronym DRIC, or Detroit River International Crossing, the project became real under former Michigan Gov. Rick Snyder. In July 2018, representatives from both Ottawa and Washington broke ground on the bridge situated in an area of Detroit empty enough to contain its significant footprint and bear its weight without fear of sinkholes from underground salt mines.
The bridge’s designers attempted to honor the cultural and natural history of the region. It was named after the legendary Canadian hockey player who was also a longtime stalwart for the Detroit Red Wings. The bridge’s towers are adorned with murals by First Nations artists.
One bridge was always a bad idea, (nearly) everyone agreed
Residents and politicians have long agreed that having a single, privately owned bridge connecting Detroit and Windsor was a bad idea. This felt especially apparent after the 9/11 terrorist attacks laid bare the possibility of suddenly losing critical infrastructure.
For many years, travelers’ only other connection between Canada and Detroit has been a tunnel that runs underneath the Detroit River. However, the tunnel doesn’t offer direct access to interstate highways, making it less suitable for commercial trucks.
Others have criticized the attempts to compensate the residents of Delray, a once-vibrant neighborhood that has been impacted by industrialization since the 1960s.
Benefits negotiated for residents and homeowners affected by the construction have not increased as the project’s costs ballooned and the timeline to complete it stretched out.
The cost of the Gordie Howe bridge is now estimated at around $6.4 billion Canadian – or about US$4.7 billion. That is $700 million more than the original projected cost. The project is at least 10 months behind schedule.
Materials for an on-ramp construction to the new Gordie Howe International Bridge are stored in a residential neighborhood in Southwest Detroit on Aug. 26, 2025. Valaurian Waller/The Conversation, CC BY-ND
Simone Sagovac, director of the SW Detroit Community Benefit Coalition, said they did not anticipate the immense scale of the development and its continued effects on the community.
Sagovac wrote that the project took 250 homes, 43 businesses and five churches by eminent domain, and “saw the closing of more after.” One hundred families left the neighborhood via a home swap program funded as a result of the benefits agreement administered by a local nonprofit. Two hundred and seventy families remain, but most businesses have left the area over decades of decline.
The families that remain are often long-term residents wanting or needing a cheap place to live and willing to put up with dust, noise and smells from nearby factories and a sewage treatment plant.
“They constantly face illegal dumping and other unanswered crimes, and will face the worst diesel emissions exposure and other trucking and industry impacts,” Sagovac wrote.
Heather Grondin, chief relations officer of the Windsor-Detroit Bridge Authority, wrote in an emailed statement that they have taken steps to minimize impacts from construction and that they regularly meet with the community to hear concerns.
“Construction traffic is using designated haul routes to minimize community impacts, traffic congestion and wear and tear on existing infrastructure while maximizing public and construction safety,” Grondin wrote.
According to Grondin, cars will be forced to follow a “no idling” rule on the American side to minimize pollution. Other aspects of the Community Benefits Plan included $20,000 in free repairs for 100 homes, planting hundreds of trees and investing in programs addressing food insecurity and the needs of young people and seniors, Grondin wrote.
It costs $9 to cross the Ambassador Bridge in a car. Tolls on the Gordie Howe bridge (pictured) haven’t been announced yet. Paul Draus, CC BY-ND
An updated Health Impact Assessment is expected to be released later in 2025.
History lost
Lloyd Baldwin, a historian for the Michigan Department of Transportation (MDOT), was tasked with evaluating whether local landmarks like the legendary Kovacs Bar needed to be preserved.
“Kovacs Bar was one among many working-class bars in the Delray neighborhood but stands out for its roughly eight-decade association as a gathering place for the neighborhood and downriver Hungarian-American community,” Baldwin wrote in one such report.
This was not MDOT’s only loss. While the agency made some sincere efforts to leverage other benefits for residents who remained, dynamic factors at many levels were out of the agency’s control.
In the period of legal limbo, Baldwin said, “the neighborhood imploded.”
Baldwin gave the example of the Berwalt Manor Apartments, built in the 1920s and located on Campbell Street near the bridge entrance. MDOT committed to preserve the historic building and proposed to mitigate the environmental impacts on mostly low-income residents by paying for new windows and HVAC units once the bridge was built.
But the speed of development outstripped the pace of community compensation. The building passed through probate court in 2018 and has since changed hands multiple times, so it is now unclear whether there are any low-income residents left to benefit from upgrades.
Benefits yet to be measured
On the brighter side, environmentalists have pointed to the expansion and connection of bicycle trails and bird migration corridors as long-term benefits of the Gordie Howe bridge.
On the Canadian side, the bridge construction falls largely outside of Windsor’s residential neighborhoods, so it caused less disruption. As part of the project, bike lanes, enhanced landscaping, and gathering spaces were added to an approach road called Sandwich Street.
Cross-border tourism spurred on by a proposed system of greenways called the “Great Lakes Way” may provide new opportunities for people and money to flow across the Detroit River, improving the quality of life for communities that remain.
Paul Draus is affiliated with the Downriver Delta CDC and Friends of the Rouge. The Fort Street Bridge Park, a project that Draus is affiliated with, received a donation for a public sculpture from the Windsor Detroit Bridge Authority in 2020.
On Sept. 24, 2025, NASA launched two new missions to study the influence of the Sun on the solar system, with further missions scheduled for 2026 and beyond.
I’m an astrophysicist who researches the Sun, which makes me a solar physicist. Solar physics is part of the wider field of heliophysics, which is the study of the Sun and its influence throughout the solar system.
The field investigates conditions at a wide range of locations on and around the Sun, from its interior, surface and atmosphere to the constant stream of particles flowing outward from it – called the solar wind. It also investigates the interaction between the solar wind and the atmospheres and magnetic fields of planets.
The importance of space weather
Heliophysics intersects heavily with space weather, which is the influence of solar activity on humanity’s technological infrastructure.
One recent geomagnetic storm produced a beautiful light show of the aurora across the world, providing a view of the northern and southern lights to tens of millions of people at lower latitudes for the first time.
However, geomagnetic storms come with a darker side. The same event set off overheating alarms in power grids around the world and caused a loss in satellite navigation that may have cost the U.S. agricultural industry half a billion dollars.
But even those events were small compared with the largest space weather event in recorded history: the Carrington Event of September 1859, considered the worst-case scenario for extreme space weather. It produced widespread aurora, visible even close to the equator, and disrupted telegraph machines.
If an event like the Carrington Event occurred today, it could cause widespread power outages, losses of satellites, days of grounded flights and more. Because space weather can be so destructive to human infrastructure, scientists want to better understand these events.
The most recent additions to NASA’s collection of heliophysics missions launched on Sept. 24, 2025: the Interstellar Mapping and Acceleration Probe, or IMAP, and the Carruthers Geocorona Observatory. Together, these instruments will collect data across a wide range of locations throughout the solar system.
IMAP is en route to a region in space called Lagrange Point 1. This is a location 1% closer to the Sun than Earth, where the balancing gravity of the Earth and Sun allows spacecraft to stay in a stable orbit.
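That “1% closer” figure can be checked with a back-of-the-envelope calculation. The sketch below uses the standard Hill-radius approximation for the Sun–Earth L1 distance, d ≈ R·(m/3M)^(1/3); this formula and the rounded mass and distance values are standard textbook figures, not taken from the mission documentation.

```python
# Rough check of the Sun-Earth L1 distance using the Hill-radius
# approximation d ~ R * (m_earth / (3 * m_sun))**(1/3), which holds
# because Earth's mass is tiny compared with the Sun's.

R_SUN_EARTH_KM = 1.496e8   # mean Sun-Earth distance (1 astronomical unit)
M_SUN_KG = 1.989e30        # mass of the Sun
M_EARTH_KG = 5.972e24      # mass of the Earth

d_l1_km = R_SUN_EARTH_KM * (M_EARTH_KG / (3 * M_SUN_KG)) ** (1 / 3)

print(f"L1 sits roughly {d_l1_km:.2e} km sunward of Earth")
print(f"That is about {100 * d_l1_km / R_SUN_EARTH_KM:.1f}% of the Sun-Earth distance")
```

Plugging in the numbers gives about 1.5 million kilometers, or roughly 1% of the way from Earth to the Sun, matching the figure quoted above.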
IMAP contains 10 scientific instruments with varying science goals, ranging from measuring the solar wind in real time to improve forecasting of space weather that could arrive at Earth, to mapping the outer boundary between the heliosphere and interstellar space.
IMAP will study the solar wind from a region in space nearer to the Sun where spacecraft can stay in a stable orbit.
This latter goal is unique, something scientists have never attempted before. It will achieve this goal by measuring the origins of energetic neutral atoms, a type of uncharged particle. These particles are produced by plasma, a charged gas of electrons and protons, throughout the heliosphere. By tracking the origins of incoming energetic neutral atoms, IMAP will build a map of the heliosphere.
The Carruthers Geocorona Observatory is heading to the same Lagrange Point 1 orbit as IMAP, but with a very different science target. Instead of mapping all the way to the edge of the heliosphere, it will observe Earth’s exosphere – the uppermost layer of Earth’s atmosphere, 375 miles (600 kilometers) above the ground, where it borders outer space.
Specifically, the mission will observe ultraviolet light emitted by hydrogen within the exosphere, called the geocorona. The Carruthers Geocorona Observatory has two primary objectives. The first relates directly to space weather.
The observatory will measure how the exosphere – our atmosphere’s first line of defense from the Sun – changes during extreme space weather events. The second objective relates more to Earth sciences: The observatory will measure how water is transported from Earth’s surface up into the exosphere.
The first image of Earth’s outer atmosphere, the geocorona, taken from a telescope designed and built by the late American space physicist and engineer George Carruthers. The telescope took the image while on the Moon during the Apollo 16 mission in 1972. G. Carruthers (NRL) et al./Far UV Camera/NASA/Apollo 16, CC BY
Looking forward
IMAP and the Carruthers Geocorona Observatory are two heliophysics missions researching very different parts of the heliosphere. In the coming years, future NASA missions will launch to measure the object at the center of heliophysics – the Sun.
In 2026, the Sun Coronal Ejection Tracker is planned to launch. This small satellite the size of a shoebox – called a CubeSat – aims to study how coronal mass ejections change as they travel through the Sun’s atmosphere.
In 2027, NASA plans to launch the much larger Multi-slit Solar Explorer to capture high-resolution measurements of the Sun’s corona using state-of-the-art instrumentation. This mission will work to understand the origins of solar flares, coronal mass ejections and heating within the Sun’s atmosphere.
It’s well known that learning to play an instrument can offer benefits beyond just musical ability. Indeed, research shows it’s a great activity for the brain – it can enhance our fine motor skills, language acquisition, speech, and memory – and it can even help to keep our brains younger.
After years of working with musicians and witnessing how they persist in musical training despite the pain caused by performing thousands of repetitive movements, I started wondering: if musical training can reshape the brain in so many ways, can it also change the way musicians feel pain, too? This is the question that my colleagues and I set out to answer in our new study.
Scientists already know that pain activates several reactions in our bodies and brains, changing our attention and thoughts, as well as our way of moving and behaving. If you touch a hot pan, for example, pain makes you pull back your hand before you get seriously burned.
Pain also changes our brain activity. Indeed, pain usually reduces activity in the motor cortex, the area of the brain that controls muscles, which helps stop you from overusing an injured body part.
These reactions help to prevent further harm when you’re injured. In this way, pain is a protective signal that helps us in the short term. But if pain continues for a longer time and your brain keeps sending these “don’t move” signals for too long, things can go wrong.
For example, if you sprain your ankle and stop using it for weeks, it can reduce your mobility and disrupt the brain activity in regions related to pain control. And this can increase your suffering and pain levels in the long term.
Research has also found that persistent pain can shrink what’s known as our brain’s “body map” – this is where our brain sends commands for which muscles to move and when – and this shrinking is linked with worse pain.
But while it’s clear that some people experience more pain when their brain maps shrink, not everyone is affected the same way. Some people can better handle pain, and their brains are less sensitive to it. Scientists still don’t fully understand why this happens.
Musicians and pain
In our study, we wanted to look at whether musical training and all the brain changes it creates could influence how musicians feel and deal with pain. To do this, we deliberately induced hand pain over several days in both musicians and non-musicians to see if there was any difference in how they responded to the pain.
To safely mimic muscle pain, we used a compound called nerve growth factor. It’s a protein that normally keeps nerves healthy, but when injected into hand muscles, it makes them ache for several days, especially if you’re moving your hand. But it’s safe, temporary, and doesn’t cause any damage.
Then we used a technique called transcranial magnetic stimulation (TMS) to measure brain activity. TMS sends tiny magnetic pulses into the brain. And we used these signals to create a map of how the brain controls the hand, which we did for each person who took part in the study.
We built these hand maps before the pain injection, and then measured them again two days later and eight days later, to see if pain changed how the brain was working.
Transcranial magnetic stimulation involves sending tiny magnetic pulses to the brain. Yiistocking/Shutterstock.com
A striking difference
When we compared the brains of the musicians and the non-musicians, the differences were striking. Even before we induced pain, the musicians showed a more finely tuned hand map in the brain, and the more hours they had spent practising, the more refined this map was found to be.
After pain was induced, the musicians reported experiencing less discomfort overall. And while the hand map in non-musicians’ brains shrank after just two days of pain, the maps in musicians’ brains remained unchanged – amazingly, the more hours they had trained, the less pain they felt.
This was a small study of just 40 people, but the results clearly showed that the musicians’ brains responded differently to pain. Their training seems to have given them a kind of buffer against the usual negative effects, both in how much pain they felt and in how their brain’s motor areas reacted.
Of course, this doesn’t mean music is a cure for chronic pain. But it does show us that long-term training and experience can shape how we perceive pain. This is exciting because it might help us understand why some people are more resilient to pain than others, along with how we can design new treatments for those living with pain.
Our team is now conducting further research on pain to determine if musical training may also protect us from altered attention and cognition during chronic pain. And off the back of this, we hope to be able to design new therapies that “retrain” the brain in people who suffer from persistent pain.
For me, this is the most exciting part: the idea that as a musician, what I learn and practise every day doesn’t just make me better at a skill, but that it can literally rewire my brain in ways that change how I experience the world, even something as fundamental as pain.
This article was commissioned by Videnskab.dk as part of a partnership collaboration with The Conversation. You can read the Danish version of this article here.
Anna M. Zamorano has received funding from The Lundbeck Foundation and from the Danish National Research Foundation through the Center for Neuroplasticity and Pain (CNAP).
The United States military under the Donald Trump administration has sunk three Venezuelan boats that were allegedly ferrying drugs. American officials branded the people on the boats “narcoterrorists.”
The term “narcoterrorist” conflates the U.S. internal “war on drugs” and external “war on terror” and suggests drug smuggling is punishable by death without trial.
Canada, incidentally, has followed the lead of the U.S. by designating a list of drug cartels as terrorist organizations. This means Canada is now involved in the expansion of violence against people associated with drug smuggling or drug use when they’re labelled terrorists. It also aligns Canada with the American “war on drugs.”
The problem with the language of war
The problem with both terms — the “war on drugs” and the “war on terror” — lies in how they serve to justify killing people. Violence is portrayed as an appropriate response to a threat from an “enemy” rather than an attack on people who may or may not be linked to drugs or terrorism.
The attacks are carried out without the submission of evidence, and it’s almost impossible to verify claims of guilt after the fact.
A brief look at the origins of the U.S. war on drugs shows how the term “war” can be used to normalize acts of oppression or violence.
While fear of drug use predated Richard Nixon’s presidency, he made the issue a central part of his domestic policy. He framed his efforts as a fight to protect public health and safety, and used that framing to justify expanding the scope of police actions against drug sellers and users.
The Shafer Commission, appointed by Nixon, recommended decriminalizing marijuana in 1972, but he ignored its findings and instead enacted more punitive anti-drug legislation.
Portraying drugs and drug users as a threat was a central part of Nixon’s law-and-order campaign. Privately, however, aides later revealed that his drug policy was also used to target political opponents, particularly anti-Vietnam War activists and Black communities — by associating them with drugs and justifying an increase in policing.
Nixon pursued drug enforcement at home as well as through international policies aimed at curbing drug production. His administration’s war on drugs was not just a social order initiative, but a political strategy that weaponized drug policy to consolidate power and marginalize opponents.
The “war on drugs” therefore relied on racist attitudes to justify its heavy policing of Black communities.
People, not enemies
By branding the initiative a war on drugs, Nixon turned people addicted to drugs into enemies and implicitly made acceptable levels of oppression that would not be tolerated under normal circumstances.
But drugs are not a force that an army can defeat. The war on drugs has been a failure and become the longest “war” in U.S. history.
The idea of a war on drugs erases people from the equation and dehumanizes them. Similarly, the war on terror emphasized an emotion, namely terror, and used that emotion to justify U.S. military actions abroad, including the ill-fated Iraq War.
The recent attacks on Venezuelan boats alleged to be transporting drugs follow this pattern of justifying acts of violence in the name of combating drugs. Both exploit an understandable fear of drug addiction or of a terrorist attack, and use that emotion to silence criticism of acts of violence as illegal and inhumane.
Decades later, Nixon’s campaign to demonize drugs has now coalesced with the war on terror, even though the term “war” seems inappropriate in both cases.
Invoking war hastens decisions and short-circuits debate, because in a military conflict, decisiveness is crucial to avoid defeat. While initially declaring a war on drugs or terrorists may rally people in the short term, in the long term, it damages both domestic social policy as well as international relations.
Due process
In the recent strikes against Venezuelan ships, the U.S. could have apprehended the boats in international waters and brought the people on board to trial.
This was the procedure during the recent Operation Pacific Viper in the east Pacific, when the U.S. Coast Guard boarded vessels and detained people accused of smuggling cocaine.
The U.S. could have followed the same procedure with the boats from Venezuela, but calling the people on board “narcoterrorists” implicitly justified characterizing them as enemy combatants in the war on drugs.
They were civilian vessels, not part of the Venezuelan military. There may or may not have been drugs on board, and the people may not have been drug smugglers, but the world will never know for certain because the U.S. military killed them and sank their boats.
The language of war in such cases justifies actions taken for political motives and undermines the rule of law. Overall, it is part of a wider use of the term “war” by Trump, who recently also seemed to declare war on the city of Chicago.
It’s all part of an ongoing weaponization of the term “war” to assert dominance and justify violence, whether internally against American cities or externally against people the government calls “narcoterrorists.”
Martin Danahay does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.
Hobbits are exceptions to the rule that older ancient humans had proportionally larger wisdom teeth and smaller brains. Jim Watson/AFP via Getty Images
Until Homo floresiensis was discovered, scientists assumed that the evolution of the human lineage was defined by bigger and bigger brains. Via a process called encephalization, human brains evolved to be relatively more massive than would be expected based on corresponding body size.
This proportionally bigger brain is what anthropologists argued enabled us and our relatives to perform more complex tasks such as using fire, forging and wielding tools, making art and domesticating animals.
Exhibit on brain size at the Smithsonian’s National Museum of Natural History in Washington, D.C. Tesla Monson
But these theories had to be thrown out the window when archaeologists announced the discovery of our fossil cousin Homo floresiensis in a 2004 scientific publication. Homo floresiensis lived from about 700,000 to 60,000 years ago in the rainforests of Indonesia, partially contemporaneous with our own species.
Aptly nicknamed Hobbits, Homo floresiensis were short-statured, at just over 3 feet (1 meter) tall, and had a chimp-size brain. This discovery upended the assumption that brains have been increasing in size over the past several million years and generated confusion about what separates recent human relatives in our genus Homo from our more ancient ancestors.
Our previous work on the proportions of molar teeth generated new insights into the evolution of pregnancy by demonstrating that fetal growth rates are tightly linked to molar proportions in primates. Now, we wanted to see whether we could uncover a relationship between tooth proportions and brain size among our fossil relatives.
Paleontologists have only limited skeletal materials, sometimes only a few teeth, for many fossil species, including Homo floresiensis. If tooth proportions can provide information about fossil brain size, it opens up a world of possibilities for assessing past changes in encephalization.
Reconstructing brain size using teeth
We collated data on tooth and brain size for 15 fossil species on the human family tree, spanning about 5 million years of evolution. Somewhat paradoxically, the third molars – otherwise known as wisdom teeth – have gotten proportionally smaller as brain size has gotten larger throughout human evolution, for most species.
Overall, human relatives with relatively larger wisdom teeth are more ancient and had smaller brains. More recent taxa, like Homo neanderthalensis, had relatively smaller third molars, compared to their other teeth, and larger brains.
This relationship allows researchers to figure out something about brain size for fossils that are incomplete, perhaps existing only as a few lone teeth. Since teeth are predominantly made of inorganic matter, they survive in the fossil record much more often than other parts of the body, making up the vast majority of paleontological materials recovered. Being able to learn more about brain size from just a few teeth is a truly useful tool.
A replica of LB1, the most complete skeleton of Homo floresiensis, in profile in an exhibit at the Smithsonian’s National Museum of Natural History. Tesla Monson
Scientists recognize now that the formation of the brain and the teeth are inextricably connected during gestation. And for most species, larger brains are correlated with smaller wisdom teeth.
The one exception in genus Homo is Homo floresiensis, the Hobbit. The wisdom teeth of the Hobbits are small proportional to the other molars – the typical pattern for members of genus Homo. But their brains are also small, which is quite unusual.
There are two primary ways for brain size to decrease: by slowing down growth during gestation before birth or by slowing down growth after birth, during childhood. Because teeth develop early in gestation, slowing down growth rates during pregnancy tends to affect tooth shape and size, or even whether the teeth develop at all. Slowing growth later, during childhood, influences skeletal shape and size in other ways, because different parts of the body develop at different times.
Our new research provides evidence that the body size of Homo floresiensis likely shrank from a larger-bodied Homo ancestor by slowing down growth during childhood. The Hobbits’ small wisdom teeth suggest that, at least in utero, they were on track for the proportionally bigger brains that are the trademark of humans and their relatives. Any brake that slowed down brain growth likely occurred after birth.
The small body size of Homo floresiensis was likely an adaptation to the unique conditions of their island environment on Flores.
Evolving small body size as an adaptation to living on an isolated island is known as insular nanism. There are many examples of other mammals becoming small on islands over the past 60 million years. But one of the most relevant examples is the dwarf elephant, Stegodon sondaarii, that lived on Flores and was hunted by H. floresiensis for food.
But people with smaller brains are certainly no less intelligent than people with larger brains. Variation in body size dictates brain size; it is not a measure of cognitive ability. The island Hobbits crafted tools, hunted large-for-them game in the form of pygmy elephants, and likely made and used fire.
Our research supports that their small body size originated from a slowdown in growth during childhood. But this process would likely have had little impact on brain function or cognitive ability. We hypothesize that the Hobbits were small but highly capable.
Exhibit of cranial variation in fossil hominids, with Homo floresiensis in the foreground, at the Smithsonian’s National Museum of Natural History. Tesla Monson
Understanding the evolution of us
New research, including our study, continues to reinforce the importance of understanding how pregnancy and child growth and development evolved. If we want to know what distinguishes humans from our evolutionary ancestors, and how we evolved, we must understand how the earliest moments of life have changed and why.
Our work also encourages reevaluating the longstanding focus on increasing brain size as the predominant force in human evolution. Other species in genus Homo had small brains but were likely not much different from us.
Tesla Monson receives funding from the National Science Foundation.
Andrew Weitz does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.
Colorado’s $3.9 billion wine industry is threatened by a tiny aphidlike insect. Courtesy of Charlotte Oliver
Grape phylloxera, or Daktulosphaira vitifoliae, is an aphidlike insect that attacks grapevines with devastating effects. In Colorado, where wine is an estimated US$3.9 billion industry, phylloxera poses a significant threat.
In 2015, several vineyards in the Grand Valley American Viticultural Area on Colorado’s Western Slope observed that groups of their vines were not thriving. The vines were yellowing and producing limited fruit. All the normal issues, such as nutritional deficiencies and irrigation problems, were investigated and nothing turned up.
So, in 2016, two Colorado State University researchers began surveying vineyards in Mesa County and found the industry’s worst nightmare – phylloxera, which had infested the roots in several local vineyards.
The pair expanded their survey and covered more than 350 vineyard acres across Mesa, Delta and Montezuma counties, where wine grapes are grown. Phylloxera was found in both Mesa and Delta counties in 18 vineyards for a total of 34 scattered acres. The phylloxera wasn’t centralized, which made controlling its spread complicated.
As the viticulture extension specialist at Colorado State University, I spend a lot of time helping Coloradans work with their grapevines. In 2024, I took over the research at Colorado State University on the phylloxera infestation that is still active nine years after it was discovered. I plan to continue tracking phylloxera’s spread and training producers on what to do when, not if, this pest appears in their vineyard.
Phylloxera have two discrete forms during their life cycle: an above-ground, winged form called alates and a below-ground, wingless, root-feeding form called radicoles.
The above-ground form of the insect causes galls, which are lumpy swellings on the grape leaves, that generally have limited impact on the vine’s health.
However, the below-ground insects that feed on the vine’s roots cause severe damage. The roots are the vine’s main tool for foraging for nutrients and water, so when they are compromised the vine starts to suffer and eventually dies four to seven years after being infested. Sometimes a phylloxera infestation looks like a lack of nutrition or water, so knowing what is wrong with the vines can require a lot of searching and testing. The real issue is that as the vine declines, it also stops producing a commercially viable quantity of fruit, which is 2.5-4 tons per acre. No fruit means no wine, so the vines have to be removed.
Managing the ‘wine blight’
Grapevines, like a lot of other perennial crops, including peaches and apples, are usually grafted, which is when the top of one plant and the roots of another plant are combined to make one continuous plant. Usually, grafting is done to change a physical characteristic of the plant, such as making it bigger or smaller. With grapes, the main reason to graft is to protect the vine from damage from a single pest – grape phylloxera.
There are alternatives such as insecticides, but they are Band-Aids, because insecticides suppress only the pest. Grape roots, and the phylloxera on them, can go far deeper into the soil than the soil insecticide treatments, and some of the treatments to the leaves can be damaging to honeybees.
Another potential solution is using modern varieties of grapes that were bred by crossing native North American grapes with the standard European wine grape. By adding in North American grapes, the new varieties are more tolerant of the phylloxera and can handle some damage. The issue with these varieties, such as Chambourcin and Aromella, is that consumers have never heard of them, which is a major factor when people buy wine. These varieties also have more Concord-like grape flavors, which are less familiar to wine drinkers outside of the East Coast of the U.S.
Globe-trotting pests
A video from TerraVox Winery in Missouri about the history of phylloxera.
Phylloxera is native to the East Coast and Midwestern United States, but now it’s found in all grape-growing regions worldwide.
In the late 1800s, phylloxera started making its way around the world. Phylloxera can fly when it is in its above-ground life cycle, but the below-ground insects can be spread any time soil is moved. It hitched a ride to France in the early 1860s on North American grape plants that were being imported to help with powdery mildew, which is the most economically damaging fungal disease of grapes. Powdery mildew, or Erysiphe necator, can attack all growing tissues of a vine, including blooms, leaves and shoots, and control can account for approximately 37% of gross grape production cost.
Over the course of three decades after phylloxera was introduced to France, it caused approximately 2.5 million acres of vines to be replaced.
Once it crossed the Atlantic, phylloxera was flying over borders and road-tripping from vineyard to vineyard on workers’ shoes and tractors. By the beginning of the 1900s, phylloxera had spread throughout France, Portugal, Germany and then Spain and established itself as a permanent problem. The European varieties of grapes had no resistance to phylloxera, and the insects found ways to evade chemical management, making yearly insecticide applications a necessity.
Phylloxera attacks the roots of grapevines. Courtesy of Charlotte Oliver
While phylloxera is native to the U.S., until recently, there were several states that did not have it, including Colorado, Oregon and Washington.
Colorado’s 50-year-old wine grape industry benefited from the absence of phylloxera. Vineyard owners planted mostly self-rooted European variety vines starting around the mid-1980s. Self-rooted vines are not grafted, which means that a Chardonnay vine was Chardonnay both above and below ground.
Currently, there are areas in multiple countries such as Australia and China, and certain states such as Washington, where phylloxera is present but well contained through quarantine. However, there are concerns for the future. Computer modeling has offered ideas about the future expansion of phylloxera’s survival range on both a regional and global scale.
What does the future hold?
In a 2019 survey by the same research team, more phylloxera was found in vineyards in Delta County as well as a new vineyard outside of Denver. Currently, many Colorado producers are in the middle of overhauling their vineyards. The spread of phylloxera as well as the frequency of freezes has led to extensive death in vineyards across the Western Slope.
The percentage of acreage planted with nongrafted, non-European grapes has increased to 25% in recent years, according to results from statewide surveys. Approximately 30% of the replaced vineyards were replanted with new modern varieties, and the rest with grafted European varieties.
While the European grafted vine may provide a higher-priced fruit due to market demand, the modern varieties have a lot of appeal. Since the vines are more phylloxera tolerant, they do not have to be grafted, so they can easily be recovered when a harsh fall freeze happens.
While the Colorado wine industry has accepted that phylloxera is here to stay, expanded surveys are needed to better understand how far this pest has spread, especially in the more isolated areas of the West Elks American Viticultural Area, in Delta County, the southwest corner of Colorado, in Montezuma County, and in Fremont and Pueblo counties.
Additionally, future phylloxera spread may be better estimated by studying soil texture and temperature, which has been done in models created by Washington State University. Phylloxera may be less likely to survive in certain areas of Colorado. If those areas are also suitable for grape production, it could help direct the locations of future plantings, especially of European varieties.
Charlotte Oliver does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.