Eos
Science News by AGU

Magnetic “Switchback” Detected near Earth for First Time

Wed, 10/08/2025 - 13:12
Source: Journal of Geophysical Research: Space Physics

In recent years, NASA’s Parker Solar Probe has given us a close-up look at the Sun. Among the probe’s revelations was the presence of numerous kinks, or “switchbacks,” in magnetic field lines in the Sun’s outer atmosphere. These switchbacks are thought to form when solar magnetic field lines that point in opposite directions break and then snap together, or “reconnect,” in a new arrangement, leaving telltale zigzag kinks in the reconfigured lines.

McDougall and Argall now report observations of a switchback-shaped structure in Earth’s magnetic field, suggesting that switchbacks can also form near planets.

The researchers discovered the switchback while analyzing data from NASA’s Magnetospheric Multiscale mission, which uses four Earth-orbiting satellites to study Earth’s magnetic field. They detected a twisting disturbance in the outer part of Earth’s magnetosphere—the bubble of space surrounding our planet where a cocktail of charged particles known as plasma is pushed and pulled along Earth’s magnetic field lines.

Closer analysis of the disturbance revealed that it consisted of plasma both from inside Earth’s magnetic field and from the Sun. The Sun constantly emits plasma, known as the solar wind, at supersonic speeds in all directions. Most of the solar wind headed toward Earth deflects around our magnetosphere, but a small amount penetrates and mixes with the plasma already within the magnetosphere.

This illustration captures the signature zigzag shape of a solar switchback. Credit: NASA Goddard Space Flight Center/Conceptual Image Lab/Adriana Manrique Gutierrez

The researchers observed that the mixed-plasma structure briefly rotated and then rebounded back to its initial orientation, leaving a zigzag shape that closely resembled the switchbacks seen near the Sun. They concluded that this switchback most likely formed when magnetic field lines carried by the solar wind underwent magnetic reconnection with part of Earth’s magnetic field.

The findings suggest that switchbacks can occur not only close to the Sun, but also where the solar wind collides with a planetary magnetic field. This could have key implications for space weather, as the mixing of solar wind plasma with plasma already present in Earth’s magnetosphere can trigger potentially harmful geomagnetic storms and aurorae.

The study also raises the possibility of getting to know switchbacks better by studying them close to home, without sending probes into the Sun’s corona. (Journal of Geophysical Research: Space Physics, https://doi.org/10.1029/2025JA034180, 2025)

—Sarah Stanley, Science Writer

Citation: Stanley, S. (2025), Magnetic “switchback” detected near Earth for first time, Eos, 106, https://doi.org/10.1029/2025EO250374. Published on 8 October 2025. Text © 2025. AGU. CC BY-NC-ND 3.0
Except where otherwise noted, images are subject to copyright. Any reuse without express permission from the copyright owner is prohibited.

The 17 December 2024 Takhini River landslide and river-ice tsunami, Whitehorse, Yukon, Canada

Wed, 10/08/2025 - 07:23

A major slope collapse in frozen sediments in Canada highlights the role of progressive failure.

Back in January of this year, I posted a fascinating piece by Derek Cronmiller of the Yukon Geological Survey about the 17 December 2024 Takhini River landslide and river-ice tsunami, which occurred in Whitehorse, Yukon, Canada. The location of this landslide is at [60.8611, -135.4180]. As a reminder, this is a figure from his post showing the landslide:-

Surface elevation change detection comparing 2013 lidar DTM to a 2025 DSM created from UAV photos for the Takhini River landslide.

Derek has now published a more detailed article in the journal Landslides (Cronmiller 2025) that provides the definitive description of this event. One element of the article caught my attention. The piece examines in some detail the initiation of the landslide. Cronmiller (2025) observes that:-

“In the case of the 17 December 2024 Takhini landslide, all common triggers are conspicuously absent, and the timing appears to be random.”

The article concludes (rightly in all probability) that the initiating mechanism was progressive failure – i.e. that the slope underwent brittle failure through a tertiary creep mechanism. Under these circumstances no external trigger is needed.

As such, Cronmiller (2025) is much more than a simple (although fascinating) case study. As Derek writes:

“While progressive failure mechanisms are commonly discussed in rockslide and gravitational slope deformation literature, their role in producing landslides in surficial sediments is discussed relatively infrequently as acute triggers commonly mask the effect of this phenomenon’s contribution to slope failure. This case study provides an important example to show that acute triggers are unnecessary to produce landslides in dry brittle surficial sediments.”

I wholeheartedly agree.

Reference

Cronmiller, D. 2025 The 17 December 2024 Takhini River landslide and river-ice tsunami, Whitehorse, Yukon, Canada. Landslides. https://doi.org/10.1007/s10346-025-02622-8

Text © 2023. The authors. CC BY-NC-ND 3.0

New Maps of Natural Radioactivity Reveal Critical Minerals and More

Tue, 10/07/2025 - 13:09

A helicopter flies low over the Appalachian Mountains, moving slowly above mostly forested lands of Maryland, Pennsylvania, and West Virginia. The aircraft carries a blue-and-white box holding instrumentation to detect unseen gamma ray photons created by radioactive decay within the rocks below. When a gamma ray reaches a specially designed crystal inside the box, it produces a flash of light—a reaction called scintillation—that provides information about the gamma ray’s properties and origins.

Scintillation is the foundation of radiometric methods that provide passive and rapid assessments of the geochemical compositions of rock samples, cores, and outcrops, as well as of swaths of Earth’s surface. These methods measure ambient gamma ray energy signatures to determine which isotopes most likely produced them. Such data are then used to create maps of Earth’s surface and near subsurface where radioactive elements are present, even in low amounts.
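The energy-window idea described above can be sketched in a few lines of Python. The photopeak energies are the standard ones used to identify potassium (K-40 at 1.46 MeV), uranium (via Bi-214 at 1.76 MeV), and thorium (via Tl-208 at 2.61 MeV); the `classify` helper and its tolerance are illustrative assumptions, not a description of the actual USGS processing chain, which uses calibrated energy windows and stripping corrections.

```python
from typing import Optional

# Characteristic photopeak energies (MeV) used in airborne gamma ray
# spectrometry: K-40, Bi-214 (uranium decay series), Tl-208 (thorium series).
PHOTOPEAKS_MEV = {"K": 1.46, "U": 1.76, "Th": 2.61}

def classify(energy_mev: float, tolerance: float = 0.15) -> Optional[str]:
    """Assign a detected gamma ray energy to the nearest standard photopeak,
    or None if it falls outside every window (illustrative only)."""
    nearest = min(PHOTOPEAKS_MEV, key=lambda el: abs(PHOTOPEAKS_MEV[el] - energy_mev))
    return nearest if abs(PHOTOPEAKS_MEV[nearest] - energy_mev) <= tolerance else None
```

In practice, many detected photons are Compton-scattered to lower energies before reaching the sensor, which is why real spectrometry applies corrections rather than simple nearest-peak matching.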

Measurements of natural, low-level radioactivity have been used in geologic applications for nearly a century. But a new phase of open-access, high-resolution, airborne data collection funded and executed through the U.S. Geological Survey’s (USGS) Earth Mapping Resources Initiative (MRI) is providing novel insights for geologic mapping, critical minerals research, mine waste studies, and other applications.

From Geiger Tubes to Spectrometry

Radiometric methods developed rapidly following the discovery of radioactivity in 1896. Only a few decades later, petroleum explorations in the 1930s made use of Geiger tubes and ionization chambers to measure gamma rays emitted from boreholes. These early methods, which counted the total number of gamma rays detected, couldn’t discern individual radioelements, but they could reveal different sedimentary layers.

By the 1940s, scintillation crystals were light enough that instruments could be carried aboard airplanes for use in total-count radiometric surveys. And by the late 1960s, gamma ray sensors were accurate enough to distinguish specific source isotopes, providing capabilities for full gamma ray spectrometry [Duval, 1980; International Atomic Energy Agency, 2003].

During an airborne radiometric survey, an airplane, helicopter, or drone flies back and forth in a “mow-the-lawn” pattern to produce map view estimates of radioelement concentrations. The spatial resolution of the data depends on how closely the survey flight lines are spaced and on flying height: The farther the sensor is from the ground, the wider the area that is imaged—and the lower the resolution—at each point in time.
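The height-versus-resolution trade-off can be illustrated with a toy geometric model, assuming (purely for illustration) that most detected photons arrive within a fixed cone half-angle below the aircraft. Real footprints also depend on air attenuation and detector response, and the `footprint_radius` function and its default angle are hypothetical.

```python
import math

def footprint_radius(height_m: float, half_angle_deg: float = 60.0) -> float:
    """Toy estimate of the radius of ground 'seen' by a downward-looking
    detector, assuming detected photons arrive within a fixed cone
    half-angle. The default angle is purely illustrative."""
    return height_m * math.tan(math.radians(half_angle_deg))

# Doubling the flying height doubles the footprint radius in this model,
# widening coverage at each instant while lowering spatial resolution.
low, high = footprint_radius(80.0), footprint_radius(160.0)
```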

The results represent gamma rays emitted from roughly the upper 50 centimeters of the ground surface, whether bedrock or soil; shielding of these rays by vegetation is typically limited. Although many radioelements produce gamma rays, potassium, uranium, and thorium are the primary elements evaluated because they are relatively abundant on Earth and their decay sequences generate gamma ray signatures strong enough to be measured by airborne sensors [Minty, 1997; International Atomic Energy Agency, 2003].

Airborne gamma ray spectrometry provides rapidly and continuously collected geochemical information over large areas that is impossible to obtain from the ground. These surveys are often paired with simultaneous collection of magnetic data because the optimal flying speeds and heights are similar for both. These methods, when combined with ground truth geologic observations and sample analyses, offer a powerful tool for geologic mapping.

Resource Exploration Drives Data Collection

The earliest airborne radiometric datasets were total-count surveys collected primarily for uranium exploration by Australia, Canada, the Soviet Union, and the United States immediately after World War II. In the 1970s, continued interest in uranium led to initiation of the National Uranium Resource Evaluation (NURE), which supported airborne gamma ray spectrometry surveys measuring potassium, thorium, and uranium over the conterminous United States and parts of Alaska. These data, along with concurrent magnetic data, were released publicly. Around the same time, similar interest in Australia and Canada motivated regional-scale coverage in those countries.

To achieve national coverage, the NURE surveys were designed with very widely spaced flight lines 5–10 kilometers apart, and only a few areas were chosen for higher-resolution data collection. The data were useful primarily for reconnaissance rather than detailed exploration.

In the decades following the NURE surveys, sensor and processing technology improved remarkably, but only a limited number of public high-resolution radiometric surveys—covering about 1% of the country’s area—were flown in the United States (Figure 1). The lack of radiometric data was even more severe than that of magnetic data, which by 2018 covered almost 5% of the country [Drenth and Grauch, 2019]. Magnetic surveys were more common, perhaps because of their use for mapping buried faults, folds, and other geologic features in studies of mineral resources, natural hazards, and water resources (Figure 1).

Fig. 1. This map shows the areas covered by high-resolution airborne surveys across the conterminous United States before and since the launch of the Earth Mapping Resources Initiative (MRI). Radiometric surveys typically also include magnetic data collection, but the converse is not always the case. (“high resolution” is defined here as “Rank 1” or “Rank 2” using the nomenclature of Johnson et al. [2019] and Drenth and Grauch [2019] for radiometric and magnetic surveys, respectively. These rankings consider a variety of survey conditions, including the flight line spacing, flying height, whether GPS navigation was used, and whether data were recorded digitally.)

Since 2019, Earth MRI has been addressing this data scarcity, with the goal of improving knowledge of domestic critical mineral resources and the geologic regimes, or frameworks, within which they form and concentrate. Critical mineral resources such as lithium, graphite, rare earth elements (REEs), and many others are commodities that are essential for the U.S. economy and security but are at risk from supply chain disruptions. They are key components in numerous technologies, from cell phones and medical devices to advanced defense systems and renewable energy technologies.

Earth MRI takes a multidisciplinary approach that includes geologic mapping and collection of new data using lidar, airborne geophysical methods, and analyses of sample geochemistry, mineralogy, and geochronology. These datasets and interpretations, all freely and publicly available, provide broad information about critical minerals, their mineralizing systems, and their geologic frameworks. Such information can also inform studies in other disciplines, such as of earthquake hazards, and is especially useful for advising land use planners (e.g., in making decisions about setting areas for natural preservation, grazing, and recreation) and for informing and reducing the economic risk of costly mineral resource exploration.

Magnetic and radiometric data are the foundation of Earth MRI’s airborne geophysical coverage because they provide valuable information about geology, including areas under cover and vegetation, and their relatively low cost enables surveying of large areas. Additional funding from the 2021 Infrastructure Investment and Jobs Act has facilitated targeted studies using both hyperspectral and electromagnetic methods, which provide complementary imaging.

Bird’s-Eye Views of Geology

Fig. 2. Airborne radiometric data collected over the Appalachian Valley and Ridge Province in Maryland, Pennsylvania, and West Virginia are shown here using a ternary color scale (magenta = potassium (K), cyan = thorium (Th), yellow = uranium (U)). These data, which are available from the USGS, highlight different lithologies of shallow and outcropping sedimentary layers. The image is draped over a shaded relief image of lidar-derived elevation for context.

Heavily folded and faulted sedimentary rocks of the Appalachian Valley and Ridge Province provide a dramatic example of the value of Earth MRI’s data collection for geologic applications. Earth MRI supported new airborne magnetic and radiometric data collection in 2022–2023 in this region to better understand the geologic framework of critical minerals in metal-bearing shales and manganese-iron sedimentary layers (Figure 2).

The data illustrate a diverse array of lithologies in close proximity (sometimes <1 kilometer apart), reflecting the structure and stratigraphy of layered sedimentary rocks. They reveal outcrops of shale formations containing varying amounts of potassium, thorium, or both, highlighting compositional information. Weathered carbonates and carbonate regolith show only elevated levels of potassium, whereas quartz sandstone is mostly devoid of radioelements except for sparse patches of uranium enrichment.

Accurate interpretation of airborne radiometric datasets requires complementary geologic knowledge from other sources because the presence of potassium, thorium, and uranium can be linked to several different minerals. For example, in hard rock terranes, elevated potassium often indicates mica and potassium feldspar in granites, granodiorites, or felsic volcanic rocks. However, elevated potassium may also indicate a history of hydrothermal alteration that formed potassium-rich minerals associated with economically significant ores, such as gold-copper porphyry deposits [e.g., Shives et al., 2000].

In sedimentary environments, elevated potassium measurements may represent minerals such as illite. Or they may indicate recently eroded sands (from which potassium has not been dissolved and mobilized), such as those found in river floodplains. In those scenarios, radiometric detections of potassium can therefore illuminate broad transport pathways from sites of erosion to sites of deposition [Shah et al., 2021].

Colocated magnetic field data can provide needed complementary constraints on geologic interpretations, especially within hard rock terranes. For example, both mafic rocks and quartz sandstone usually show similarly low potassium, thorium, and uranium signatures. However, mafic rocks often express prominent magnetic anomalies, unlike quartz sandstone, allowing scientists to easily distinguish the two.

Critical Mineral Frameworks

In addition to their use for fundamental geologic mapping, new Earth MRI datasets are providing key information on domestic critical minerals—and in some cases imaging them directly. This is especially the case for REEs because many minerals that host REEs also contain thorium. For example, at California’s Mountain Pass, presently the only site of active REE production in the United States, airborne radiometric data show elevated thorium, uranium, and potassium concentrations over mineralized areas [Ponce and Denton, 2019].

Airborne radiometric surveys have also led to discoveries of critical minerals. In one case, data from a remote part of northern Maine revealed a highly localized thorium and uranium anomaly. The finding motivated a subsequent effort in which a multidisciplinary and multi-institutional team quickly investigated the area on foot. By combining geophysical data, geologic mapping, and analyses of rock samples, they discovered an 800- × 400-meter area with high concentrations of REEs, niobium, and zirconium, all considered critical commodities [Wang et al., 2023]. The depth of the mineralization, and thus the potential economic value, is not yet known, but a deposit in Australia with similar rock type, composition, and areal extent has been valued in the billions of dollars.

In another study, researchers used Earth MRI radiometric data collected over Colorado’s Wet Mountains to map REE mineralization in carbonatite dikes, veins in alkaline intrusions, and syenite dikes [Magnin et al., 2023]. Additional analyses of thorium levels and magnetic anomalies provided insights into the geologic environment in which these REE-bearing features formed, namely, that the mineralization likely occurred as tectonic forces stretched and rifted the crust in the area.

And over South Carolina’s Coastal Plain sediments, Shah et al. [2021] imaged heavy mineral sands containing critical commodities: REEs, titanium, and zirconium (Figure 3). These researchers are developing new constraints on critical mineral resource potential within individual geologic formations by evaluating the statistical properties of thorium anomalies.

Fig. 3. Critical minerals in ancient shoreline sands near Charleston, S.C., are highlighted in this map of airborne radiometric thorium data draped over lidar-derived shaded relief topography. Thorium is present in the mineral monazite, which also contains rare earth elements.

Detecting Impacts from—and on—Humans

A new frontier in critical mineral studies focuses on the potential to tap unconventional resources, especially those present in mining waste and tailings. Mining and mine waste features are scattered across the United States, sometimes presenting environmental or public health hazards. If critical minerals could be reclaimed economically from waste, proceeds could help to fund cleanup actions.

Early work with airborne data on this frontier focused, for example, on examining anomalous thorium concentrations in tailing piles from abandoned iron mines in the eastern Adirondack Highlands of upstate New York. Researchers found that the piles that contained REEs in the mineral apatite expressed thorium anomalies, whereas other piles were devoid of these critical commodities.

More recently, scientists identified uranium anomalies in datasets collected over stacks of phosphate mining waste, known as phosphogypsum stacks or “gypstacks,” in Florida (Figure 4). And data collected over coal mining waste sites in the Appalachian Mountains show elevated potassium, thorium, and uranium. Mine waste in both these areas is now being studied more closely as possible REE resources.

Fig. 4. Uranium anomalies (yellow and red) highlight mining areas, waste stacks, and, in some areas, dirt roads in this image of airborne radiometric data collected over the phosphate mining district in central Florida. Credit: background imagery: Google, Airbus; data: USGS

Radiometric surveys can also shed light on natural geologic hazards that affect human health. Radon gas, a well-known risk factor for lung cancer, is produced from the breakdown of radioelements, especially uranium, in soil and rock. By imaging areas with elevated uranium, radiometric surveys can delineate areas with higher radon risk.

In the 1980s, the U.S. Department of Energy commissioned a total-count survey over a small section of the Reading Prong in Pennsylvania, a geologic unit with known instances of uranium that also extends into New Jersey and New York, to map radon hazards. New Earth MRI datasets collected west of that part of Pennsylvania and elsewhere cover much larger areas and distinguish uranium, thorium, and potassium, providing a means for extensive radon risk evaluation.

Much More to Explore

Earth MRI airborne magnetic and radiometric surveys funded as of September 2025 have provided a roughly 18-fold increase in publicly available high-resolution radiometric data compared with what was available in 2018, and additional surveys are planned for 2026. However, the new total still represents only about 19% of the area of the United States (including Alaska, Hawaii, and Puerto Rico), so there is still a long way to go to achieve full national coverage.

A drone collects radiometric data over mining waste piles in southwestern New Mexico. These and other mine waste piles are being studied to see whether they hold critical mineral resources. Credit: Anjana Shah/USGS, Public Domain

The new open-access data present a wide variety of opportunities for study, from qualitative revisions of geologic maps to quantitative analyses that address questions about critical mineral resources and other societally important topics. These data are also inspiring innovative approaches, such as drone-based surveys using new ultralightweight sensors that can provide unprecedented spatial resolution, with uses in detailed mine waste studies, radon evaluation, and other applications [e.g., Gustafson et al., 2024]. Another new approach combines airborne radiometric data with sample geochemical data to evaluate critical minerals in clays [Iza et al., 2018].

Other novel applications that encourage economic development, maintain national security, and enhance public safety are waiting to be developed and explored.

Acknowledgments

We thank Tom L. Pratt and Dylan C. Connell for helpful reviews. Any use of trade, firm, or product names is for descriptive purposes only and does not imply endorsement by the U.S. government.

References

Drenth, B. J., and V. J. S. Grauch (2019), Finding the gaps in America’s magnetic maps, Eos, 100, https://doi.org/10.1029/2019EO120449.

Duval, J. S. (1980), Radioactivity method, Geophysics, 45(11), 1,690–1,694, https://doi.org/10.1190/1.1441059.

Gustafson, C., et al. (2024), Mine waste identification and characterization using airborne and uncrewed aerial systems radiometric geophysical surveying, Geol. Soc. Am. Abstr. Programs, 56(5), 1–6, https://doi.org/10.1130/abs/2024AM-403640.

International Atomic Energy Agency (2003), Guidelines for Radioelement Mapping Using Gamma Ray Spectrometry Data, IAEA-TECDOC-1363, Vienna.

Iza, E. R. H. F., et al. (2018), Integration of geochemical and geophysical data to characterize and map lateritic regolith: An example in the Brazilian Amazon, Geochem. Geophys. Geosyst., 19(9), 3,254–3,271, https://doi.org/10.1029/2017GC007352.

Johnson, M. R., et al. (2019), Airborne geophysical survey inventory of the conterminous United States, Alaska, Hawaii, and Puerto Rico (ver. 4.0, April 2023), data release, U.S. Geol. Surv., Reston, Va., https://doi.org/10.5066/P9K8YTW1.

Magnin, B. P., Y. D. Kuiper, and E. D. Anderson (2023), Ediacaran-Ordovician magmatism and REE mineralization in the Wet Mountains, Colorado, USA: Implications for failed continental rifting, Tectonics, 42(4), e2022TC007674, https://doi.org/10.1029/2022TC007674.

Minty, B. (1997), Fundamentals of airborne gamma-ray spectrometry, AGSO J. Aust. Geol. Geophys., 17, 39–50.

Ponce, D. A., and K. M. Denton (2019), Airborne radiometric maps of Mountain Pass, California, U.S. Geol. Surv. Sci. Invest. Map, 3412-C, scale 1:62,500, https://doi.org/10.3133/sim3412C.

Shah, A. K., et al. (2021), Mapping critical minerals from the sky, GSA Today, 31(11), 4–10, https://doi.org/10.1130/GSATG512A.1.

Shives, R. B., B. W. Charbonneau, and K. L. Ford (2000), The detection of potassic alteration by gamma-ray spectrometry—Recognition of alteration related to mineralization, Geophysics, 65, 2,001–2,011, https://doi.org/10.1190/1.1444884.

Wang, C., et al. (2023), A recently discovered trachyte-hosted rare earth element-niobium-zirconium occurrence in northern Maine, USA, Econ. Geol., 118(1), 1–13, https://doi.org/10.5382/econgeo.4993.

Author Information

Anjana K. Shah (ashah@usgs.gov), U.S. Geological Survey, Lakewood, Colo.; Daniel H. Doctor, U.S. Geological Survey, Reston, Va.; Chloe Gustafson, U.S. Geological Survey, Lakewood, Colo.; and Alan D. Pitts, U.S. Geological Survey, Reston, Va.

Citation: Shah, A. K., D. H. Doctor, C. Gustafson, and A. D. Pitts (2025), New maps of natural radioactivity reveal critical minerals and more, Eos, 106, https://doi.org/10.1029/2025EO250370. Published on 7 October 2025. Text not subject to copyright in the United States.

Ice Diatoms Glide at Record-Low Temperatures

Tue, 10/07/2025 - 13:08

Hidden in Arctic sea ice are microscopic organisms that do more than eke out a meager existence on scraps of light filtered through their frozen habitat. New research has shown that ice diatoms have adapted to move efficiently through the ice, allowing them to navigate to better sources of light and nutrients. During in situ and laboratory experiments, ice diatoms glided through the ice roughly 10 times faster than diatoms from temperate climates and kept gliding even at −15°C, the lowest temperature at which such movement has been recorded for a single-celled organism.

“People often think that diatoms are at the mercy of their environment,” said Manu Prakash, a bioengineering researcher at Stanford University in California and lead researcher on this discovery. “What we show in these ice structures is that these organisms can actually move rapidly at these very cold temperatures to find just the right home. It just so happens that home is very cold.”

These findings, published in the Proceedings of the National Academy of Sciences of the United States of America, may help scientists understand how microorganisms and polar ecosystems respond to climate change.

Gliding Through Life

Researchers drilled several cores from sea ice in the Chukchi Sea to understand the movement patterns of diatoms. Credit: Natalie Cross

Diatoms are microscopic, single-celled algae that photosynthesize. Up to 2 million species of diatoms produce at least 20% of the oxygen we breathe and form the backbone of ecosystems throughout the world, from the humid tropics to the frigid poles. Scientists have known since the 1960s that diatoms live within and move through the ice matrix but have been unable to decipher how they do it.

“Ice is an incredible porous architecture of highways,” Prakash explained. “Light comes from the top in the ice column, and nutrients come from the bottom. There is an optimal location that [a diatom] might want to be, and that can only be possible with motility.” (Motility is the ability of an organism to expend energy to move independently.)

Prakash and a team of researchers sought to observe ice diatoms’ movements in situ and so set off for the Chukchi Sea aboard the R/V Sikuliaq. On a 45-day expedition in 2023, they collected several cores from young sea ice, extracted diatoms from the cores, and studied the diatoms’ movements on and within icy surfaces under a temperature-controlled microscope customized for subzero temperatures.

At temperatures down to −15°C, Arctic ice diatoms actively glided on ice surfaces and within ice channels. The researchers said that this is the lowest temperature at which gliding motility has been observed for a eukaryotic cell.

“Life is not under suspension in these ultracold temperatures,” Prakash said. “Life is going about its business.”

“This is a notable discovery,” said Julia Diaz, a marine biogeochemist at Scripps Institution of Oceanography in San Diego. “These diatoms push the lowest known temperature limit of motility to a new extreme, not just compared to temperate diatoms, but also compared to more distantly related organisms.” Diaz was not involved with this research.

“Since the 1960s, when J. S. Bunt first described sea ice communities and observed that microbes were concentrated in specific layers of the ice, it has been obvious that they must have a means to navigate through ice matrices,” said Brent Christner, an environmental microbiologist at the University of Florida in Gainesville who also was not involved with this research. “This study makes it clear that some microbes traverse gradients in the ice by gaining traction on one of the most slippery surfaces known!”

The team compared the movement of ice diatoms to those of diatoms from temperate climates. On both icy and glass surfaces under the same conditions, ice diatoms moved roughly 10 times faster than temperate diatoms. In cold conditions on icy surfaces, temperate diatoms lost their ability to move completely and just passively drifted along. These experiments show that ice diatoms adapted specifically to their extreme environments, evolving a way to actively seek out better sources of light to thrive.

“I was surprised the ice diatoms were happily as motile on ice as glass, and much faster on glass than the temperate species examined,” Christner said. “While these diatoms are clearly ice specialists, they nevertheless appear to be equipped with the equivalent of all-season tires!”

On ice (left) and on glass (right) surfaces, ice diatoms (top) move faster than temperate diatoms (bottom). All experiments here were conducted at 0°C and are sped up 50 times to highlight the diatoms’ different gliding speeds. Credit: Zhang et al., 2025, https://doi.org/10.1073/pnas.2423725122, CC BY-NC-ND 4.0

Can Diatoms Adapt to Climate Change?

The Arctic is currently experiencing rapid environmental changes, warming several times faster than the rest of the world. Arctic climate change harms not only charismatic megafauna like polar bears, Prakash said, but microscopic ones, too.

Diatoms are “the microbial backbone of the entire ecosystem,” Prakash said. “These ecosystems operate in a manner that every one of these species is under threat.”

Prakash added that he hopes future conservation efforts focus holistically on Arctic ecosystems from the micro- to macroscopic. Future work from his own group aims to understand how diatoms’ gliding ability changes under different chemical conditions like salinity, as well as how the diatoms shape their icy environment.

“Scientists used to think that sea ice was simply an inactive barrier on the ocean surface, but discoveries like these reveal that sea ice is a rich habitat full of biological diversity and innovation,” Diaz said. “Sea ice extent is expected to decline as climate changes, which would challenge these diatoms to change the way they move and navigate their polar environment. It is troubling to think of the biodiversity that would be lost with the disappearance of sea ice.”

—Kimberly M. S. Cartier (@astrokimcartier.bsky.social), Staff Writer

Citation: Cartier, K. M. S. (2025), Ice diatoms glide at record-low temperatures, Eos, 106, https://doi.org/10.1029/2025EO250371. Published on 7 October 2025. Text © 2025. AGU. CC BY-NC-ND 3.0
Except where otherwise noted, images are subject to copyright. Any reuse without express permission from the copyright owner is prohibited.

Fatal landslides in July 2025

Tue, 10/07/2025 - 06:19

In July 2025, I recorded 71 fatal landslides worldwide, with the loss of 214 lives.

Each year, July is one of the key months for the occurrence of fatal landslides globally as the Asian monsoon season cranks up to full strength. Thus, it is time to provide an update on fatal landslides that occurred in July 2025. The data come from my dataset of landslides that cause loss of life, compiled following the methodology of Froude and Petley (2018). At this point, the monthly data are provisional. When time allows, I will write a follow-up paper to the 2018 study describing the situation since then.

The average number of fatal landslides in July for the period from 2004 to 2016 was 58.1, so the 2025 total is considerably higher than the long-term mean, although it is well below that of July 2024, which saw 99 fatal landslides.

So, this is the monthly total graph for 2025 to the end of July:-

The number of fatal landslides to the end of July 2025 by month.

Plotting the data by pentad to the end of pentad 43 (29 July), the trend looks like this (with the exceptional year of 2024, plus the 2004-2016 mean, for comparison):-

The number of fatal landslides to 29 July 2025, displayed in pentads. For comparison, the long term mean (2004 to 2016) and the exceptional year of 2024 are also shown.

The data show that the acceleration in the rate of fatal landslides occurred much later in the annual cycle than was the case in 2024. It was only late in the month that the rate started to approach that of 2024. Indeed, for much of the month, the fatal landslide rate (the gradient of the line) was broadly similar to the long-term mean, albeit with a much higher starting point.

But note also the distinct acceleration late in the month, which makes what then happened in August 2025 particularly interesting. Watch this space.

Notable events included the 8 July 2025 catastrophic debris flow at Rasuwagadhi in Nepal, but no single landslide killed more than 18 people in July 2025.

I often draw a link between the rate of fatal landslides and the surface air temperature. The Copernicus data shows that July 2025 was “0.45°C warmer than the 1991-2020 average for July with an absolute surface air temperature of 16.68°C”. It was the “third-warmest July on record, 0.27°C cooler than the warmest July in 2023, and 0.23°C cooler than 2024, the second warmest.”

Reference

Froude, M.J. and Petley, D.N. 2018. Global fatal landslide occurrence from 2004 to 2016. Natural Hazards and Earth System Sciences, 18, 2161-2181. https://doi.org/10.5194/nhess-18-2161-2018

Return to The Landslide Blog homepage

Text © 2023. The authors. CC BY-NC-ND 3.0

Satellite Scans Can Estimate Urban Emissions

Mon, 10/06/2025 - 12:52
Source: AGU Advances

Because the hustle and bustle of cities is driven largely by fossil fuels, urban areas have a critical role to play in addressing global greenhouse gas emissions. Currently, cities contribute around 75% of global carbon dioxide (CO2) emissions, and urban populations are projected only to grow in the coming decades. Members of the C40 Cities Climate Leadership Group, a network of nearly 100 cities that together make up 20% of the global gross domestic product, have pledged to work together to reduce urban greenhouse gas emissions. Most of the cities have pledged to reach net zero emissions by 2050.

To meet these pledges, cities must accurately track their emissions levels. Policymakers in global cities have been relying on a “bottom-up” approach, estimating emissions levels on the basis of activity data (e.g., gasoline sales) and corresponding emissions factors (such as the number of kilograms of carbon emitted by burning a gallon of gasoline). However, previous studies have found that bottom-up estimates can differ depending on which datasets are used, particularly for cities in certain regions.

Ahn et al. tried a “top-down” approach, using space-based observations to estimate emissions for 54 C40 cities.

They used data from NASA’s Orbiting Carbon Observatory 3 (OCO-3) mission on board the International Space Station (ISS) to collect high-resolution data over global cities. OCO-3 uses a pair of mirrors called the Pointing Mirror Assembly to scan atmospheric CO2 levels as the ISS flies over a target city.

The researchers found that for the 54 cities, the satellite-based estimates match bottom-up estimates within 7%. On the basis of their measurements, the researchers also found that bottom-up techniques tended to overestimate emissions for cities in central East, South, and West Asia but to underestimate emissions for cities in Africa, East and Southeast Asia, Oceania, Europe, and North America.

The team also examined the link between emissions, economies, and populations. They found that wealthier cities tended to have less carbon-intensive economies. For example, North American cities emit 0.1 kilogram of CO2 within their boundaries per U.S. dollar (USD) of economic output, whereas African cities emit 0.5 kilogram of CO2 per USD. They also found that residents of bigger cities emit less CO2—cities with under 5 million people emit 7.7 tons of CO2 per person annually, whereas cities with more than 20 million people emit 1.8 tons per person, for instance.
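The two comparisons above rest on simple ratios: emissions per unit of economic output and emissions per resident. A minimal sketch of that arithmetic (the city figures below are invented for illustration, not taken from the study):

```python
# Illustrative only: the function names and sample numbers here are made up;
# the study's per-city values come from OCO-3-derived emissions estimates.

def carbon_intensity(annual_emissions_kg: float, gdp_usd: float) -> float:
    """Kilograms of CO2 emitted per U.S. dollar of economic output."""
    return annual_emissions_kg / gdp_usd

def per_capita_emissions(annual_emissions_tons: float, population: float) -> float:
    """Tons of CO2 emitted per resident per year."""
    return annual_emissions_tons / population

# A hypothetical city emitting 50 billion kg of CO2 on a $500 billion economy:
print(carbon_intensity(50e9, 500e9))     # 0.1 kg CO2 per USD
# 40 million tons of CO2 spread over 20 million residents:
print(per_capita_emissions(40e6, 20e6))  # 2.0 tons per person
```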

The authors note that their findings show that satellite data may help cities better track emissions, improve global monitoring transparency, and support global cities’ efforts to mitigate emissions. (AGU Advances, https://doi.org/10.1029/2025AV001747, 2025)

—Sarah Derouin (@sarahderouin.com), Science Writer

Citation: Derouin, S. (2025), Satellite scans can estimate urban emissions, Eos, 106, https://doi.org/10.1029/2025EO250373. Published on 6 October 2025. Text © 2025. AGU. CC BY-NC-ND 3.0

Planets Might Form When Dust “Wobbles” in Just the Right Way

Mon, 10/06/2025 - 12:52

To start forming a planet, you need a big disk of dust and gas…and a bit of oomph. We see this formation taking place in protoplanetary disks in young star systems, and the same process must have formed the planets in our own solar system, too.


But how do you begin planet formation inside a disk? What is the oomph?

A new set of experiments led by Yin Wang at the Princeton Plasma Physics Laboratory (PPPL) in New Jersey suggests a process called magnetorotational instability (MRI) may be a contributing factor. MRI describes how magnetic fields interact with the rotating, electrically charged gas in a star’s disk.

MRI has long been thought to play a role in disks by pushing charged gas toward young stars, which consolidate it in a process called accretion. This new research shows MRI can also trigger “wobbles” in the protoplanetary disk that begin the planet formation process.

Taking Metals for a Spin

Traditional ways of accreting dust in a young disk include pressure bumps, said Thanawuth Thanathibodee, an astrophysicist at Chulalongkorn University in Thailand who was not involved in the new research. The bumps are caused by processes such as “the transition between the gas phase and solid phase of some molecules…. When you have a pressure bump, you can accumulate more solid mass, and from there start forming a planet.”

Wang’s paper shows another way the accretion process might begin.

In his team’s experiments at PPPL, a cylinder was placed inside another cylinder, separated by about 32 liters (8.4 gallons) of the liquid metal Galinstan, the brand name of an alloy of gallium, indium, and tin. By spinning the two cylinders at different speeds exceeding 2,000 rotations per minute, scientists churned the liquid metal in a washing machine–like fashion, causing it to swirl through the cavity and mimic how gas swirls in a young star’s disk.

The team measured changes in the magnetic field of the Galinstan as it moved around the cylinders. They found that some regions of the liquid metal would meet and slide past one another, forming what are known as free shear layers. In these layers, some parts of the flow slow down and others speed up, a hallmark of MRI.

In a protoplanetary disk, similar layers arise where different parts of the disk’s gas flow meet. These interfaces cause turbulence that pushes material (dust) toward or away from the star and create pockets where dust can accumulate and eventually form planets.

Wang said his work shows MRI-induced wobbling might be happening more often than expected, suggesting “there might be more planets across the universe.”

The work was published in Physical Review Letters earlier this year.

Building on a Successful Experiment

The contribution of MRI to protoplanetary disk formation was previously proposed but was not shown experimentally until now. As such, Thanathibodee said the new work is “very interesting.”

In future experiments, Wang hopes to try different rotation speeds to better understand the free shear layers and examine how MRI is produced. “We’ve found this mechanism is way easier [than thought], but the explored parameter space is still limited,” he said.

Still, MRI isn’t a slam dunk explanation for planet formation. To make the magnetic fields that MRI relies on, the central star must ionize the swirling gas in a protoplanetary disk into a plasma, a process that likely takes place near the star itself. But material close to the star quickly falls onto the star and thus is unavailable to make planets.

If the process instigated by MRI is encountered too close to the star, the researchers found, “the material will be absorbed,” explained Wang. “But if this mechanism happens away from the star, then it helps planet formation.”

If MRI is to contribute to planet formation, it must act on a shorter timescale than accretion, but by how much?


“My sense is that in order for some planets to form, this [MRI] process needs to be prolonged,” said Thanathibodee. “Otherwise, all the mass will get accreted in a short timescale.”

If MRI does occur in a “sweet region” not too close to or not too far from the young star, said Wang, it could play a role in planet formation. “It’s a plausible candidate for explaining a solar system like ours,” he said. “Nature is complicated, but what our results show is this instability is likely more common than we used to think.”

This same process might drive accretion around black holes too, said Wang, where magnetic fields are much stronger.

—Jonathan O’Callaghan (@astrojonny.bsky.social), Science Writer

Citation: O’Callaghan, J. (2025), Planets might form when dust “wobbles” in just the right way, Eos, 106, https://doi.org/10.1029/2025EO250372. Published on 6 October 2025. Text © 2025. The authors. CC BY-NC-ND 3.0

A late monsoon sting in the tail in the Himalayas

Mon, 10/06/2025 - 07:25

Very heavy rainfall across Nepal, northeastern India, and Bhutan has triggered landslides that have killed at least 60 people.

Over the last few days, parts of the Himalayas have been hit by very high levels of rainfall, causing large numbers of damaging landslides. The picture is not yet fully clear, but Nepal and Bhutan, and Darjeeling in India, have been particularly badly hit.

Over on the wonderful Save the Hills blog, Praful Rao has documented the rainfall in Darjeeling – for example, on 4 October 2025 Kurseong recorded 393 mm of rainfall, whilst in Kalimpong a peak intensity of about 150 mm per hour was recorded. The scale of this event is well captured by the Global Precipitation Measurement dataset from NASA – this shows 24 hour precipitation to 14:30 UTC on 5 October 2025:-

24 hour precipitation to 14:30 UTC on 5 October 2025 for South Asia. Data from NASA.

News reports from Nepal indicate that 47 people have been killed and more are missing. Of these fatalities, 37 are reported to have been the result of landslides in Ilam. The Kathmandu Post has started to document the events:-

“According to the District Administration Office, five people died in Suryodaya Municipality, six in Ilam Municipality, six in Sandakpur Rural Municipality, three in Mangsebung, eight in Maijogmai, eight in Deumai Municipality, and one in Phakphokthum Rural Municipality. Among the deceased are 17 men and 20 women, including eight children, the office said in its official report.”

The picture in NE India is also dire. In Darjeeling, a series of landslides have killed 23 people. These include 11 fatalities in Mirik and five in Nagrakata. Praful Rao has indicated that he will provide more detail on the landslides in Darjeeling on the Save the Hills blog in due course.

The rains have also caused extensive damage in Bhutan. At least five fatalities have been reported, mostly in “flash floods”. In this landscape, the term flash flood is usually used to describe channelised debris flows.

Of great concern is the reported situation at the Tala Hydroelectric Power Station dam on the Wangchu river in the Chukha district of Bhutan. Reports indicate that water has overflowed the structure due to a failure of the dam gates. According to Wikipedia, this dam is 92 metres tall, so a collapse would be a significant event. This is Bhutan’s largest hydropower facility, and dams are not usually designed to withstand a major overtopping event.

The situation across this region will be unclear for a while, but loyal readers will remember the late monsoon event in Nepal in 2024, in which over 200 people were killed. These events reflect changes in patterns of rainfall associated with anthropogenic climate change, and changes in the pattern of vulnerability associated with poor development and construction practices. Neither is likely to improve in the next decade and beyond.


The AI Revolution in Weather Forecasting Is Here

Fri, 10/03/2025 - 13:01

Weather forecasting has become essential in modern life, reducing weather-related losses and improving societal outcomes. Severe weather alerts provide vital early warnings that help to protect life and property. And forecasts of temperatures, precipitation, wind, humidity, and other conditions—both extreme and average—support public safety, health, and economic prosperity by giving everyone from farmers and fishers to energy and construction companies a heads-up on expected weather.

However, not all forecasts are created equal, in part because the atmosphere is chaotic, meaning small uncertainties in the initial conditions (data) input into weather models can lead to vastly different predicted outcomes. The accuracy of predictions is also affected by the complexity of models, the realism with which atmospheric conditions are represented, how far into the future weather is being forecast, and—at finely resolved scales—local geography.
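This sensitivity to initial conditions is classically illustrated with the Lorenz-63 system, a drastically simplified model of atmospheric convection (shown here purely as a toy, not as anything an operational forecast model uses). The sketch below integrates two nearly identical initial states and tracks how far apart they drift:

```python
# Lorenz-63: a three-variable toy model of convection, integrated with a
# simple forward Euler scheme. Two runs start one part in a million apart.

def lorenz_step(x, y, z, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """Advance the Lorenz system by one Euler time step."""
    return (x + dt * sigma * (y - x),
            y + dt * (x * (rho - z) - y),
            z + dt * (x * y - beta * z))

a = (1.0, 1.0, 20.0)         # "true" initial state
b = (1.0 + 1e-6, 1.0, 20.0)  # minutely perturbed copy

max_sep = 0.0
for _ in range(5000):        # 50 model time units
    a = lorenz_step(*a)
    b = lorenz_step(*b)
    max_sep = max(max_sep, abs(a[0] - b[0]))

# Despite near-identical starts, the trajectories end up wildly different.
print(max_sep)
```

The perturbation of one part in a million grows exponentially until the two runs are as different as any two random states of the system, which is exactly why forecast skill degrades with lead time.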


The skill and reliability of weather forecasts have steadily improved over the past century. In recent decades, improvements have been facilitated by advances in numerical weather prediction (NWP), growth in computing power, and the availability of more and better datasets that capture Earth’s physical conditions more frequently. The application of novel artificial intelligence (AI) is providing the latest revolutionary influence on forecasting. This revolution is borne out by trends in the scientific literature and in the development of new AI-based tools with the potential to enhance predictions of conditions hours, days, or weeks in advance.

Making the Models

All weather forecasts involve inputting data in the form of observations—readings from weather balloons, buoys, satellites, and other instruments—into models that predict future states of the atmosphere. Model outputs are then transformed into useful products such as daily weather forecasts, storm warnings, and fire hazard assessments.

Current forecasting methods are based on NWP, a mathematical framework that models the future of the atmosphere by treating it as a fluid that interacts with water bodies, land, and the biosphere. Models using this approach include the European Centre for Medium-Range Weather Forecasts’ (ECMWF) Integrated Forecasting System (IFS) model (widely considered the gold standard in modern weather forecasting), the National Center for Atmospheric Research’s Weather Research and Forecasting model, and NOAA’s Global Forecasting System.

NWP models solve a set of fluid dynamics equations known as the Navier-Stokes equations, which approximate the complex motions of fluids, such as air in the atmosphere, and relate their velocities, temperatures, pressures, and densities. The result is a set of predictions of what, for example, temperatures will be at given places at some point in the future. These predictions, together with estimates of other simplified physical processes not captured by fluid dynamics equations, make up a weather forecast.
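For the mathematically inclined, the momentum balance at the core of these equations can be written in a simplified, incompressible form (operational NWP models use more elaborate compressible formulations on a rotating sphere):

```latex
\rho \left( \frac{\partial \mathbf{u}}{\partial t} + \mathbf{u} \cdot \nabla \mathbf{u} \right)
  = -\nabla p + \mu \nabla^{2} \mathbf{u} + \rho \mathbf{g}
```

Here u is the wind velocity, p the pressure, ρ the density, μ the viscosity, and g the gravitational acceleration; companion conservation equations for mass and energy bring in temperature and density changes.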

This conceptually simple description obscures the massive scale of the work that goes into creating forecasts (Figure 1). Operating satellites, radar networks, and other necessary technology is expensive and requires substantial specialized expertise. Inputting observations from these disparate sources into models and getting them to work together harmoniously is no easy task; indeed, it is a field of study unto itself.

Fig. 1. An enormous amount of work and expertise go into producing weather forecasts. In brief, observations from multiple sources are combined and used to inform forecasting models, and the resulting model outputs are converted into forecasts that are communicated to the public. Artificial intelligence (AI) can be applied in many ways through this process.

Furthermore, forecast models are complicated and require some of the most powerful—not to mention expensive and energy-intensive—supercomputers in the world to function. Expert meteorologists are required to interpret model outputs, and communications teams are needed to translate those interpretations for the public.

The input-model-output structure for forecasting will be familiar to students of computer science. Indeed, the two fields have, in many ways, grown up together. The Navier-Stokes approach to weather forecasting first became truly useful when computing technology could produce results sufficiently quickly beginning in the 1950s and 1960s—after all, there is no point in having a forecast for 24 hours from now if it takes 36 hours to make!

The Rise of Machine Learning

As the power of computing hardware and software has increased, so too have the accuracy, resolution, and range of forecasting. The advent of early AI systems in the 1950s, which weather services adopted almost immediately, fed this advancement through the mid-20th century. These early AIs were hierarchical systems that mimicked human decisionmaking through decision trees comprising a series of “if this, then that” logic rules.
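Such rule hierarchies are easy to picture in code. A toy sketch (the thresholds and categories below are invented for illustration; real systems encoded far richer meteorological expertise):

```python
# A toy "if this, then that" forecast rule set, in the spirit of early
# hierarchical decision-tree systems. All thresholds are hypothetical.

def rule_based_nowcast(humidity_pct: float, pressure_hpa: float,
                       temp_c: float) -> str:
    if humidity_pct > 85:                  # very moist air
        if pressure_hpa < 1000:            # low pressure: precipitation likely
            return "rain likely" if temp_c > 0 else "snow likely"
        return "overcast"
    if pressure_hpa > 1020:                # strong high pressure
        return "clear"
    return "partly cloudy"

print(rule_based_nowcast(90, 995, 12))   # rain likely
print(rule_based_nowcast(90, 995, -3))   # snow likely
print(rule_based_nowcast(50, 1025, 20))  # clear
```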


The development of decision trees was followed by the emergence of machine learning (ML), a subdiscipline of AI involving training models to perform specific tasks without explicit programming. Instead of following coded instructions, these models learn from patterns in datasets to improve their performance over time. One method to achieve this improvement is to train a neural network, an algorithm said to be inspired by the human brain. Neural networks work by iteratively processing numerical representations of input data—image pixel brightnesses, temperatures, or wind speeds, for example—through multiple layers of mathematical operations to reorganize and refine the data until a meaningful output is obtained.
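In miniature, that layered processing looks like the following sketch: a hypothetical two-layer network with hand-picked weights (a real forecasting model learns millions of weights from training data rather than having them written in by hand):

```python
# Minimal neural-network forward pass, illustrative only: numerical inputs
# flow through layers of weighted sums and nonlinear activations until a
# single output emerges. Weights and inputs here are made-up values.

def relu(v):
    """Common nonlinearity: negative values are clipped to zero."""
    return [max(0.0, x) for x in v]

def layer(inputs, weights, biases):
    """One layer: each neuron takes a weighted sum of all inputs plus a bias."""
    return [sum(w * x for w, x in zip(row, inputs)) + b
            for row, b in zip(weights, biases)]

x = [18.0, 4.0]                  # toy inputs: temperature (°C), wind (m/s)

w1 = [[0.5, -0.2], [0.1, 0.3]]   # hidden layer: 2 neurons
b1 = [0.0, 1.0]
w2 = [[0.7, 0.4]]                # output layer: 1 neuron
b2 = [-1.0]

hidden = relu(layer(x, w1, b1))
output = layer(hidden, w2, b2)
print(output)                    # one refined output value
```

Training consists of nudging the numbers in `w1`, `b1`, `w2`, and `b2` until outputs match known examples, which is how the model "learns from patterns in datasets."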

Even though experiments with ML have been ongoing within the wider scientific community since the 1970s, they initially failed to catch on as much more than a novelty in weather forecasting. AI systems at the time were limited by the computing power and relevant data available for use in ML. However, interest in AI among forecasters picked up starting in the late 1990s and grew steadily into the 2000s and 2010s as computing resources became more powerful and useful data became more widely available.

Model training methods also grew more efficient, and new ideas on how to adapt the original neural network concept created opportunities to tackle more complicated tasks. For example, 2010 saw the release of ImageNet, a huge database of labeled images that could be used to train AIs for 2D image recognition tasks.

Machine Learning Moves into Weather Forecasting

Weather forecasting is feeling the impact of this innovation. The growth of AI in research on nowcasting—forecasts of conditions a couple of hours in advance—and short-range weather forecasting up to a day or two out helps to reveal how.

We informally surveyed studies published between 2011 and 2022 using the Web of Science database and found that most of this research focused on applying AI to studies of classical weather forecast variables: precipitation, clouds, solar irradiation, wind speed and direction, and temperature (Figure 2).

Fig. 2. The number of newly published scientific studies concerning the use of AI in nowcasting or short-range weather forecasting grew substantially from 2011 to 2022. In this plot, the studies are divided according to their focus on five variables of interest. Credit: Authors; data included herein are derived from Clarivate’s Web of Science database (©Clarivate 2024. All rights reserved.)

The number of new publications related to these five forecast variables grew startlingly over this period, and the growth was split about evenly across each variable: In 2010, the numbers of new publications addressing each of these variables were in the low single digits; by 2022, the numbers for each were in the hundreds.

Research in just a few nations drove most of this growth. Roughly half the papers published from 2011 to 2022 emerged from China (27.5%) and the United States (22.7%). India (~8%), Germany (~6.5%), and the United Kingdom and Australia (~5% each) also contributed significantly. Most, if not all, of this research output appears to be linked to interest in its relevance for or application to various economic sectors traditionally tied to weather forecasting, such as energy, transportation, and agriculture.

Fig. 3. The five most popular variables (left) are matched (by keyword association) to major economic sectors. Credit: Diagram created by authors using SankeyMATIC; data included herein are derived from Clarivate’s Web of Science database (©Clarivate 2024. All rights reserved.)

We determined links in the published studies by associating keywords from these sectors with the five forecast variables (Figure 3). This approach has limitations, including potential double counting of studies (e.g., because the same AI model may have multiple uses), not accounting for the relative sizes of the sectors (e.g., larger sectors like energy are naturally bigger motivators for research than smaller ones like fisheries), and not identifying proprietary research and models not released to the public. Nonetheless, the keyword associations reveal interesting trends.

For example, applications in the energy sector dominate AI forecasting research related to solar irradiance and wind. Comprehensive reviews have covered how AI technologies are being integrated into the energy industry at many stages in the supply chain. Choosing and planning sites (e.g., for solar or wind farms), management of solar and wind resources in day-to-day operations, predictive maintenance, energy demand matching in real time, and management of home and business consumers’ energy usage are all use cases in which AI is affecting the industry and driving research.


Meanwhile, applications in the agricultural sector are primarily driving research into temperature and precipitation forecasting. This trend likely reflects the wider movement in the sector toward precision agriculture, a data-driven approach intended to boost crop yields and sustainability. Large companies, such as BASF, have promoted “digital farming,” which combines data sources, including forecasts and historical weather patterns, into ML models to predict future temperatures and precipitation. Farmers can then use these predictions to streamline operations and optimize resource usage through decisions about, for example, the best time to fertilize or water crops.

The construction industry, a significant driver of temperature forecasting research using AI, relies on temperature forecasts to plan operations. Weather can substantially influence project durations by affecting start dates and the time required for tasks such as pouring concrete. Accurate forecasts can also improve planning for worker breaks on hot days and for anticipating work stoppages during hard freezes.

In the transportation and aviation sectors, public safety concerns are likely driving AI-aided forecasting research. Intelligent transportation systems rely on weather forecast data to predict and mitigate road transportation problems through diversions or road and bridge closures. Similarly, accurate weather data can power aviation models to improve safety and comfort by, for example, predicting issues such as turbulence and icing.

Evolving Architectures

The methods and structures, or architectures, used in AI-based forecasting research have changed and grown more sophisticated as the field has advanced, particularly over the past decade (Figure 4). And this trajectory toward improvement appears to be accelerating.

Fig. 4. Significant growth and change in the AI/machine learning architectures used in the scientific literature on nowcasting or short-range weather forecasting occurred between 2011 and 2022. RNN = recurrent neural network; CNN = convolutional neural network; GAN = generative adversarial network; SVM = support vector machine; ELM = extreme learning machine. Credit: Authors; data included herein are derived from Clarivates Web of Science database (©Clarivate 2024. All rights reserved.)

In 2015, roughly 40% of AI models in the literature for nowcasting and short-range weather forecasting were support vector machines, but by 2022, this figure declined to just 8%. Over the same period, the use of more sophisticated convolutional neural networks ballooned from 11% to 43%. Newer architectures have also emerged for forecasting, with generative adversarial networks, U-Net, and transformer models gaining popularity.

Transformers, with their powerful attention mechanisms that detect long-range dependencies among different variables (e.g., between atmospheric conditions and the formation of storms), may be on a course to become the preferred architecture for weather forecasting. Transformers have been widely adopted in other domains and have become synonymous with AI in general because of their prominent use in generative AI tools like OpenAI’s ChatGPT.


Some of today’s most advanced weather forecasting models make use of transformer models, such as those from NVIDIA (FourCastNet), Huawei (Pangu-Weather), and Google (GraphCast), each of which is data driven, rather than being based on NWP. These models boast levels of accuracy and spatial resolution similar to ECMWF’s traditional IFS model across several important weather variables. However, their major innovation is in the computing resources required to generate a forecast: On the basis of (albeit imperfect) comparisons, NVIDIA estimates, for example, that FourCastNet may be up to 45,000 times faster than IFS, which equates to using 12,000 times less energy.

A View of the Future

Combining high-resolution data from multiple sources will be core to the weather forecasting revolution, meaning the observational approaches used to gather these data will play a central role.

Sophisticated AI architectures are already being used to combine observations from different sources to create new products that are difficult to create using traditional, physics-based methods. For example, advanced air quality forecasting tools rely on combining measurements from satellites and monitoring stations and ground-level traffic and topography data to produce realistic representations of pollutant concentrations. AIs are also being used for data assimilation, the process of mapping observations to regularly spaced, gridded representations of the atmosphere for use in weather forecast models (which themselves can be AI driven).

Another growing use case for AI is forecasting extreme weather. Extreme events can be challenging for AI models to predict because many models function by searching for patterns (i.e., averages) in data, meaning rarer events are inherently weighted less. Researchers have suggested that even state-of-the-art AI weather forecasts have significantly underperformed traditional NWP counterparts in predicting extreme weather events, especially rare events such as category 5 hurricanes. However, improvements are in the works. For example, compared with traditional methods, Microsoft’s Aurora model boasts improved accuracy for Pacific typhoon tracks and wind speeds during European storms.

Whether scientists are using fully data-driven AI or so-called hybrid systems, which combine AI and traditional atmospheric physics models, predictions of weather events and of likely outcomes of those events (e.g., fires, floods, evacuations) need to be combined reliably and transparently. One example of a hybrid system blending physics and AI elements is Google’s Flood Hub, which integrates traditional modeling with AI tools to deliver early extreme flood warnings freely in 80 countries. Such work is an important part of the United Nations’ Early Warnings for All initiative, which aims to ensure that all people have actionable access to warnings and information about natural hazards.

Observations will play a key role in facilitating the growing accuracy and efficiency of new forecasting products by providing the data needed to train AI models.

Observations will play a key role in facilitating the growing accuracy and efficiency of new forecasting products by providing the data needed to train AI models. Today, models are generally pre-trained before use with a structured dataset generated from data assimilation methods. These pre-trained systems could be tailored to new specialized tasks at high resolution, such as short-range forecasting for locations where conditions change rapidly, like in high mountain ranges.

Satellites, such as the recently launched Geostationary Operational Environmental Satellite 19 (GOES-19) and NOAA-21 missions, have been an increasingly critical source of data for training AI. These data will soon be supplemented with even higher-resolution observations from next-generation satellite instruments such as the European Organisation for the Exploitation of Meteorological Satellites’ (EUMETSAT) recently launched Meteosat Third Generation (MTG) and EUMETSAT Polar System – Second Generation (EPS-SG) programs. NOAA’s planned Geostationary Extended Observations (GeoXO) and Near Earth Orbit Network (NEON) programs will further boost both traditional and AI modeling.

Looking farther ahead, some experiments have attempted to fully replace traditional data assimilation systems, moving directly from observations to gridded forecast model inputs. A natural end point could be a fully automated, end-to-end weather forecast system, potentially with multiple models working together in sequence. Such a system would process observations into inputs for forecast models, then run those models and process forecast outputs into useful products.

The effects of the AI revolution are beginning to be felt across society, including in key sectors of the economy such as energy, agriculture, and transportation. For weather forecasting, AI technology has the potential to streamline observational data processing, use computational resources more efficiently, improve forecast accuracy and range, and even create entirely new products. Ultimately, current technologies and coming innovations may save money and help better protect lives by seamlessly delivering faster and more useful predictions of future conditions.

Author Information

Justin Shenolikar (justin.shenolikar@iup.uni-heidelberg.de), European Organisation for the Exploitation of Meteorological Satellites, Darmstadt, Germany; now at Universität Heidelberg, Germany; and Paolo Ruti and Chris Yoon Sang Chung, European Organisation for the Exploitation of Meteorological Satellites, Darmstadt, Germany

Citation: Shenolikar, J., P. Ruti, and C. Y. S. Chung (2025), The AI revolution in weather forecasting is here, Eos, 106, https://doi.org/10.1029/2025EO250363. Published on 3 October 2025. Text © 2025. The authors. CC BY-NC-ND 3.0
Except where otherwise noted, images are subject to copyright. Any reuse without express permission from the copyright owner is prohibited.

The evolution of the Matai’an landslide dam

Fri, 10/03/2025 - 07:10

Some excellent before and after imagery is now available showing the evolution of the Matai’an landslide dam.

The active GIS/spatial analysis community in Taiwan has produced some fascinating analysis of the Matai’an landslide. Much of this has been posted to Facebook (which is not my favourite platform, but sometimes you have to go where the information resides).

Tony Lee has produced an incredibly interesting comparison of the dam before and after the overtopping and breach event, based upon imagery captured before the event on 18 August and after the event on 25 September. Unfortunately, WordPress really doesn’t like Facebook embeds, so you’ll need to follow this link:

Tony Lee Facebook post

This is a still from the video:-

Before and after images of the Matai’an landslide dam. Video by Tony Lee, posted to Facebook.

The depth and scale of the incision are very clear – the flow rapidly cut into and eroded the debris, leaving very steep slopes on both sides in weak and poorly consolidated materials.

So, a very interesting question will now pertain to the stability of these slopes. How will they perform in conditions of intense rainfall and/or earthquake shaking? Is there the potential for a substantial slope failure on either side, allowing a new (enlarged) lake to form?

This will need active monitoring (InSAR may well be ideal). The potential problems associated with the Matai’an landslide are most certainly not over yet.

Text © 2023. The authors. CC BY-NC-ND 3.0

Old Forests in the Tropics Are Getting Younger and Losing Carbon

Thu, 10/02/2025 - 13:10

The towering trees of old forests store massive amounts of carbon in their trunks, branches, and leaves. When these ancient giants are replaced by a younger cohort after logging, wildfire, or other disturbances, much of this carbon stock is lost.

“We wanted to actually quantify what it means if an old forest becomes young.”

“We’ve known for a long time that forest age is a key component of the carbon cycle,” said Simon Besnard, a remote sensing expert at the GFZ Helmholtz Centre for Geosciences in Potsdam, Germany. “We wanted to actually quantify what it means if an old forest becomes young.”

The resulting study, published in Nature Ecology and Evolution, measured the regional net aging of forests around the world across all age classes between 2010 and 2020, as well as the impact of these changes on aboveground carbon.

To do this, the team developed a new high-resolution global forest age dataset based on more than 40,000 forest inventory plots, biomass and height measurements, remote sensing observations, and climate data. They combined this information with biomass data from the European Space Agency and atmospheric carbon dioxide observations.

The results point to large regional differences. While forests in Europe, North America, and China have aged during this time, those in the Amazon, Southeast Asia, and the Congo Basin were younger in 2020 than 10 years prior.

A number of recent studies have shown that forests are getting younger, but the new analysis quantifies the impact of this shift on a global level, said Robin Chazdon, a tropical forest ecologist at the University of the Sunshine Coast in Queensland, Australia, who was not involved in the study. “That’s noteworthy and a very important concept to grasp because this has global implications, and it points out where in the world these trends are strongest.”

Carbon Impact

The study identifies the tropics, home to some of the world’s oldest forests, as a key region where younger forests are replacing older ones.

In this image from 2020, old-growth forests are most evident in tropical areas in South America, Africa, and Southeast Asia. Credit: Besnard et al., 2021, https://doi.org/10.5194/essd-13-4881-2021, CC BY 4.0

On average, forests that are at least 200 years old store 77.8 tons of carbon per hectare, compared to 23.8 tons per hectare in the case of forests younger than 20 years old.

The implications for carbon sequestration are more nuanced, however. Fast-growing young forests, for instance, can absorb carbon much more quickly than old ones, especially in the tropics, where the difference is 20-fold. But even this rate of sequestration is not enough to replace the old forests’ carbon stock.

Ultimately, said Besnard, “when it comes to a forest as a carbon sink, the stock is more important than the sink factor.”

“It’s usually more cost-, carbon-, and biodiversity-effective to keep the forest standing than it is to try to regrow it after the fact.”

In the study, only 1% of the total forest area transitioned from old to young, primarily in tropical regions. This tiny percentage, however, accounted for more than a third of the lost aboveground carbon documented in the research—approximately 140 million out of the total 380 million tons.
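The concentration of losses implied by those figures is easy to check: 1% of the forest area accounts for roughly 37 times its proportional share of the documented carbon loss.

```python
# Figures reported in the study
area_fraction = 0.01                      # share of forest area that went old -> young
carbon_loss_fraction = 140e6 / 380e6      # share of documented aboveground carbon loss

# How over-represented that sliver of area is in the total loss
concentration = carbon_loss_fraction / area_fraction
```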

“It’s usually more cost-, carbon-, and biodiversity-effective to keep the forest standing than it is to try to regrow it after the fact. I think this paper shows that well,” said Susan Cook-Patton, a reforestation scientist at the Nature Conservancy in Arlington, Va., who was not involved in the study. “But we do need to draw additional carbon from the atmosphere, and putting trees back in the landscape represents one of the most cost-effective carbon removal solutions we have.”

The increased resolution and details provided by the study can help experts better understand how to manage forests effectively as climate solutions, she said. “But forest-based solutions are not a substitute for fossil fuel emissions reductions.”

Open Questions

How quickly carbon stored in trees is released into the atmosphere depends on what happens after the trees are removed from the forest. The carbon can be stored in wooden products for a long time or released gradually through decomposition. Burning, whether in a forest fire, through slash-and-burn farming, or as fuel, releases the carbon almost instantly.

“I think there is a research gap here: What is the fate of the biomass being removed?” asked Besnard, pointing out that these effects have not yet been quantified on a global scale.

Differentiating between natural, managed, and planted forests, which this study lumps together, would also offer more clarity, said Chazdon: “That all forests are being put in this basket makes it a little bit more challenging to understand the consequences not only for carbon but for biodiversity.”

She would also like to see future research on forest age transitions focus on issues beyond carbon: “Biodiversity issues are really paramount, and it’s not as easy to numerically display the consequences of that as it is for carbon.”

“We are only looking at one metric, which is carbon, but a forest is more than that. It’s biodiversity, it’s water, it’s community, it’s many things,” agreed Besnard.

—Kaja Šeruga, Science Writer

Citation: Šeruga, K. (2025), Old forests in the tropics are getting younger and losing carbon, Eos, 106, https://doi.org/10.1029/2025EO250369. Published on 2 October 2025. Text © 2025. The authors. CC BY-NC-ND 3.0

Machine Learning Simulates 1,000 Years of Climate

Thu, 10/02/2025 - 13:10
Source: AGU Advances

This is an authorized translation of an Eos article.

In recent years, scientists have found that machine learning–based weather models can make predictions faster, and with far less energy, than traditional models. However, many of these models cannot accurately forecast weather more than 15 days out, and by around day 60 they begin to simulate unrealistic weather.

The Deep Learning Earth System Model (DLESyM) is built on two neural networks running in parallel: one simulating the ocean and the other simulating the atmosphere. During a model run, predictions of ocean conditions are updated every 4 model days. Because atmospheric conditions evolve faster, predictions of the atmosphere are updated every 12 model hours.
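The staggered update cadence described above (atmosphere every 12 model hours, ocean every 4 model days) can be sketched schematically; the loop below is illustrative only and is not the model's actual code:

```python
def run(model_days):
    """Count atmosphere and ocean updates over a run of `model_days`."""
    atmos_updates = ocean_updates = 0
    hours = 0
    while hours < model_days * 24:
        hours += 12                  # atmosphere state updated every 12 model hours
        atmos_updates += 1
        if hours % (4 * 24) == 0:    # ocean state updated every 4 model days
            ocean_updates += 1
    return atmos_updates, ocean_updates

# Over 8 model days: 16 atmosphere updates and 2 ocean updates.
```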

The model’s creators, Cresswell-Clay et al., found that DLESyM closely matches past observed climate and makes accurate short-term predictions. Taking Earth’s current climate as a baseline, it can also accurately simulate climate and interannual variability over a 1,000-year period in less than 12 hours of computing time. Its performance is generally comparable to, or better than, that of models based on the sixth phase of the Coupled Model Intercomparison Project (CMIP6), which are widely used in computational climate research today.

The DLESyM model outperforms CMIP6 models in simulating tropical cyclones and the Indian summer monsoon. It captures the frequency and spatial distribution of Northern Hemisphere atmospheric “blocking” events, which can lead to extreme weather, at least as accurately as CMIP6 models. The storms it generates are also highly realistic: For example, a nor’easter generated at the end of the 1,000-year simulation (in the year 3016) had a structure very similar to that of a nor’easter observed in 2018.

However, neither the new model nor the CMIP6 models capture the climatology of Atlantic hurricanes well. In addition, for medium-range forecasts (those roughly 15 days out), DLESyM is less accurate than other machine learning models. Most important, the DLESyM model simulates only the current climate, meaning it does not account for human-caused climate change.

The authors argue that the main advantage of the DLESyM model is that it is far less computationally expensive to run than CMIP6 models, making it more accessible than traditional models. (AGU Advances, https://doi.org/10.1029/2025AV001706, 2025)

—Madeline Reinsel, Science Writer

This translation was made by Wiley.

Read this article on WeChat.

Text © 2025. AGU. CC BY-NC-ND 3.0

The aftermath of the Matai’an landslide and dam breach in Taiwan

Thu, 10/02/2025 - 07:54

Good digital data is now being published that presents the scale of landscape change that occurred as a result of the Matai’an landslide hazard cascade. There is also interesting information about the root causes of the vulnerability of the town of Guangfu, where the fatalities occurred.

Some interesting information is now emerging about the Matai’an landslide and dam breach, much of it published in Taiwan in Mandarin. A very interesting post has appeared on the website of the Aerial Survey and Remote Sensing Branch that uses aerial imagery before and after the hazard cascade to analyse terrain changes. It is based upon this figure that they have published:-

Vertical elevation change before and after the Matai’an landslide and dam breach. Published by ASRS in Taiwan.

This uses LIDAR data from before and after the sequence of events, which have been turned into one-metre digital elevation models that were then digitally compared. Note that this gives vertical change.

In the source area of the landslide, where the topography is extremely steep, there is over 300 metres of elevation reduction. Downslope, in the area of the dam and lake, the elevation change is over 200 m of accumulation – this is the landslide debris, which will now be mobilised in successive rainstorm events. In the main channel, the river bed has aggraded (increased in elevation) by over ten metres, although the analysis shows that at point C this was 52 metres! This is going to cause very substantial issues in the future unless a large-scale mitigation exercise is undertaken.
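The DEM-differencing approach itself is simple to sketch. The arrays below are synthetic stand-ins (the real analysis differenced one-metre DEMs built from pre- and post-event LIDAR):

```python
import numpy as np

rng = np.random.default_rng(1)
# Synthetic 100 x 100 "before" surface (metres), 1 m cell spacing
dem_before = 500 + rng.normal(0, 5, (100, 100))
dem_after = dem_before.copy()
dem_after[:30, :30] -= 300    # toy source area: ~300 m of elevation loss
dem_after[60:, 60:] += 200    # toy deposit area: ~200 m of accumulation

# Positive = deposition, negative = erosion (vertical change only)
change = dem_after - dem_before

# Eroded volume: sum of lowering times cell area (1 m x 1 m)
eroded_volume = -change[change < 0].sum() * 1.0
```

Real differencing must also co-register the two surfaces and account for vegetation and survey error, which is why the ASRS product is far more involved than this sketch.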

The cross-section through the landslide is fascinating:-

A cross-section showing vertical elevation change before and after the Matai’an landslide and dam breach. Published by ASRS in Taiwan.

This shows extremely well the rupture surface of the failure, which clearly had a rotational element, and the infilling of the bedrock topography by the landslide debris. Meanwhile, there is a good helicopter video on Facebook that shows the aftermath of the dam breach.

On a different matter, there is a huge amount of discussion in Taiwan as to why so little effort was made to mitigate the hazard associated with a breach of the Matai-an landslide dam. Writing in the Taipei Times, Michael Turton has a great article exploring the socio-political reasons why this disaster played out as it did. The bottom line is that Guangfu was built on a floodplain – a problem in so many places, but particularly acute in the almost uniquely dynamic physical geography of Taiwan. Levees were built to protect the town, which caused the river to aggrade even before the dam break event. And thus, the scene was set.

Hazards can be natural, disasters are not.

Text © 2023. The authors. CC BY-NC-ND 3.0

Science Agencies Shuttered in Government Shutdown

Wed, 10/01/2025 - 15:21
Research & Developments is a blog for brief updates that provide context for the flurry of news regarding law and policy changes that impact science and scientists today.

At 12:01 a.m. this morning, the U.S. federal government shut down. This shutdown comes after weeks of negotiations and pressure tactics failed to bring Congressional Republicans and Democrats together on a budget for the 2026 fiscal year or a continuing resolution to fund the government for a few more weeks.

The federal government has experienced numerous shutdowns over the past decade, the longest of which happened during the first Trump administration and lasted 35 days.

This shutdown, however, may be different, and far more devastating, for the federal workers, including scientists, who live and work across the nation.

In a typical shutdown, employees and contractors who are deemed nonessential to government function, including most workers at science and science-adjacent agencies, are furloughed (temporarily suspended) without pay. Those whose jobs are deemed essential work without pay. Employees receive backpay when the shutdown lifts, but contractors do not.

As of this morning, the shutdown has been proceeding as before.

“The plan to exploit a shutdown to purge federal workers is illegal, unconstitutional, and deeply disturbing.”

But experts are watching how the Trump administration proceeds, as, earlier this week, it ordered all agencies to prepare plans for mass firings and reductions in force (RIFs), not furloughs, should a shutdown occur. According to the White House’s Office of Personnel Management, RIF plans must work within the budget outlined by the President’s Budget Request (PBR). On top of this, thousands of federal workers took offers of deferred resignation earlier this year and have been on paid leave for months. With the shutdown, they may be officially out of jobs.

Exceptions to the shutdown include departments that align with the president’s agenda and received money from his domestic policy megabill, such as the Department of Defense and the Department of Homeland Security, along with a few essential services like Medicare and Social Security.

Trump had doubled down on the threat to fire federal employees yesterday afternoon, which spurred a set of federal employee unions to file a lawsuit alleging that the threats are an unlawful abuse of power.

“The plan to exploit a shutdown to purge federal workers is illegal, unconstitutional, and deeply disturbing,” Tim Whitehouse, the executive director of Public Employees for Environmental Responsibility, said in a statement. “To weaponize it as a tool to destroy the civil service would mark a dangerous slide into lawlessness and further consolidate power in the Executive Branch.”

 
Related

These mass firing plans, poised to radically downsize and reshape the federal government, have not yet been implemented, and it’s unclear if or when that will change. In preparation for possible firings, the Interior Department instructed employees to take home government laptops and cellphones to be able to receive updates.

Nonetheless, until this shutdown is resolved, many federal science agencies have largely ceased operations or are working at very limited capacity. Some agencies that have submitted revised shutdown plans, like NOAA and the U.S. Geological Survey, have not yet received approval for those plans, leaving significant uncertainty about which parts of an agency will be allowed to legally operate.

Below is a nonexhaustive list of science-related agencies and how they are being affected by the shutdown.

  • Environmental Protection Agency (EPA): An updated contingency plan from the EPA, posted 30 September, is much the same as in past years. Research at the EPA was already suffering: Staff cuts to the agency’s research arm, the Office of Research and Development, are expected to set back much of the agency’s research into environmental hazards, for example.
    • Under the plan, about 89% of EPA staff are now furloughed.
    • The plan calls for a cessation of new grants, updates to the EPA website and communications, all Superfund cleanup activities not necessary to safeguard human lives, inspections of industrial sites, and issuance of permits. Any research and publication activities not deemed necessary to maintain critical operations (such as care for lab animals, plants, and maintenance of instrumentation) must cease as well. Although not mentioned in the current plan, The New York Times notes that during past shutdowns, most employees responsible for monitoring pollution and ensuring industry compliance were furloughed.
    • Past EPA employees think the shutdown could also derail administrator Lee Zeldin’s plans to restructure the agency and revoke landmark EPA rules, such as the 2009 Endangerment Finding.
  • National Aeronautics and Space Administration (NASA):
    • Per NASA’s shutdown plan, less than 17% of essential personnel will remain at work, tasked with protecting mission-critical assets such as spacecraft in orbit, astronauts aboard the International Space Station, and other safety operations. Research activities, educational support, and NASA Center tours will cease. NASA Television and the NASA.gov website will not be updated. The agency has requested an exemption from furlough for operations related to upcoming Artemis missions. Although a bipartisan group of lawmakers included a request in a proposed continuing resolution that NASA follow funding guidelines set in the appropriations bill passed by the House of Representatives, for now NASA is following the more severe PBR. Federal whistleblowers recently reported that NASA was illegally implementing the PBR before now, so this shutdown might lead to many spacecraft and their operators being terminated.
    • Proposals for the next observing cycle of the James Webb Space Telescope are due 15 October. The Space Telescope Science Institute has extended the deadline for scientists affected by the shutdown.
  • National Oceanic and Atmospheric Administration (NOAA) and National Weather Service (NWS): NWS was chronically understaffed before January 2025 and staffing problems have only gotten worse this year. The current shutdown will likely deepen the existing strain on NWS staff and slow down the hiring process for new meteorologists and forecasters.
    • NWS will continue to issue weather warnings and watches, including those related to developing Atlantic storms. NWS and NOAA tours, outreach, and educational activities will cease. Hurricane Hunter crew and maintenance workers are exempted from being furloughed. Flights are expected to continue. Many employees who operate NOAA satellites are exempted from being furloughed. NOAA satellite data should continue to flow. Most NOAA research activities will cease.
    • If NOAA implements firings in line with the PBR, research related to climate, weather and air chemistry, habitat conservation, ocean science, coastal conservation, and the Great Lakes would be eliminated, as would the Office of Oceanic and Atmospheric Research (OAR).
  • National Park Service (NPS): The most recent NPS shutdown contingency plan is from March 2024.
    • Activities related to law enforcement, emergency response, fire suppression and monitoring, and public safety should continue. Most national parks are not expected to close. However, some former park superintendents have asked people not to visit due to safety concerns and bad public behavior during past shutdowns. Visitor centers, bathrooms, trash collection, and park ranger services are now unavailable in most locations. No staff are maintaining trails, clearing brush, or monitoring wildlife. The majority of NPS staff are furloughed and some may soon be laid off.
    • Access to some wildlife refuges has been restricted.
  • National Science Foundation (NSF):
    • According to a 2023 contingency plan for the agency, no new grants, cooperative agreements, or contracts are being awarded, and no new funding opportunities issued. The agency’s plan also calls for responses to any questions about upcoming grant deadlines to pause, so calls and emails won’t be answered. Scientists are still free to complete work that has already been funded, and the Award Cash Management Service, responsible for disbursing already-awarded funds, will still operate. However, funding decisions have been halted or delayed. Websites such as Grants.gov and Research.gov remain operational and will accept materials, but processing of those materials will be delayed.
    • NSF scientists temporarily working at the agency but paid by their home institutions are continuing to work.
  • U.S. Forest Service (USFS):
    • A 2024 contingency plan from the agency calls for more than half its staff to remain active, as thousands of employees have been deemed necessary to protect life and property. Some USFS work to manage forests, such as reducing hazardous fuels, running fire training, planting new trees, or supervising controlled burns, will continue. However, the 2024 plan states that an extended shutdown could delay these activities, possibly impacting fire risk over hundreds of thousands of acres of forest as windows of favorable burn conditions dwindle.
    • Per the 2024 plan, USFS science, including experiments that rely on specific timing, such as prescribed burn studies, may face delay or cessation.
  • U.S. Geological Survey (USGS):
    • In the past, USGS shutdown plans have called for all employees who are not deemed necessary to protect human lives and property to be furloughed, resulting in about half of the agency’s staff temporarily losing their paychecks. According to past contingency plans, some research activities at USGS are supported by supplemental funding from laws such as the Infrastructure Investment and Jobs Act and the Inflation Reduction Act. Such projects can continue. However, much of the USGS’s monitoring and analyzing of Earth systems and natural resources will cease.
    • Online access to USGS maps, publications, and data may be limited, including water quality data and Landsat data critical for emergency response, agriculture, Earth science research, and more.

“It’s incredibly difficult to predict what the federal research enterprise might look like on the other side.”

We don’t know how long this shutdown will last. But the Office of Management and Budget’s posture means “there are likely to be more questions than answers about the operating status of science agencies,” Cole Donovan, associate director of science & technology ecosystems at the Federation of American Scientists, wrote in an email to Eos. “It’s incredibly difficult to predict what the federal research enterprise might look like on the other side.”

Eos will be following news related to this shutdown and monitoring impacts to the federal workforce and larger scientific community. If you have a tip, suggestion, or personal story to share about how this shutdown has affected you, please email us at eos@agu.org.

–Grace van Deelen (@gvd.bsky.social) and Kimberly M. S. Cartier (@astrokimcartier.bsky.social), Staff Writers

These updates are made possible through information from the scientific community. Do you have a story about how changes in law or policy are affecting scientists or research? Send us a tip at eos@agu.org. Text © 2025. AGU. CC BY-NC-ND 3.0

Scientists May Have Finally Detected a Solid Inner Core on Mars

Wed, 10/01/2025 - 12:51

Almost a decade after NASA’s InSight mission put the first working seismometer on the Martian surface, researchers are still combing through its records of faint ground vibrations to reveal secrets of the planet’s deep interior.

In a recent analysis, scientists reported seismic evidence that Mars has a solid inner core, an unexpected finding that challenges earlier studies that suggested the planet’s core was entirely molten.

Like Earth—and onions and ogres—the interior of Mars has layers. These layers have different densities and can be solid or liquid. As seismic waves move through the layers, they are bent or reflected, especially at boundaries where density changes sharply. By analyzing how these waves propagate, scientists can trace their paths and infer the structure and properties of the materials they pass through.

Previous analyses of InSight data had already mapped the structure of the Martian crust and mantle and also revealed that the planet has a surprisingly large molten metallic core, spanning nearly half its radius. Such a large core, combined with measurements of the planet’s relatively low density, suggested that it must contain a lot of light elements such as sulfur, carbon, hydrogen, and oxygen. These light elements lower iron’s melting point, making it less likely to crystallize to form a solid inner core, which partly explains why the new finding caught InSight scientists off guard.

“None of us really believed that you would have a solid inner core,” said Amir Khan, a geophysicist at ETH Zurich who is part of the InSight science team but wasn’t involved in the new study.

A Long Way to the Core

Still, seismologist Daoyuan Sun of the University of Science and Technology of China in Hefei and his colleagues decided to look for signs of a solid core in the publicly available InSight data. Specifically, they reexamined data from a set of 23 marsquakes with seismic waves that passed through the planet’s core before returning to the surface.

To enhance the faint signals from the seismometer, the team combined—or stacked—recordings from these quakes. This revealed two types of compressional (P) waves that crossed the core. One set, known as P′P′ waves, traveled through the outer core to the farside of the planet, reflected off the surface there, and then passed back through the core to reach the seismometer. The other set, called PKKP waves, passed through the outer and inner core and reflected off the underside of the core-mantle boundary before returning to the surface.
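The benefit of stacking is straightforward to demonstrate: averaging N records shrinks uncorrelated noise by roughly the square root of N while the coherent arrival survives. The records below are synthetic illustrations, not InSight data:

```python
import numpy as np

rng = np.random.default_rng(2)
n_quakes, n_samples = 23, 2000
t = np.arange(n_samples)

# A weak arrival (Gaussian pulse) present on every record, buried in
# noise five times its amplitude
signal = 0.2 * np.exp(-0.5 * ((t - 1000) / 20.0) ** 2)
records = signal + rng.normal(0, 1.0, (n_quakes, n_samples))

# Stacking 23 records: coherent signal is preserved, random noise
# averages down by ~sqrt(23), making the faint arrival visible
stack = records.mean(axis=0)
```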

“To me that’s the most exciting thing. That’s basically saying that you see this inner core structure. ”

Initially, the researchers could not find the PKKP waves at their expected arrival times. Instead, the waves were arriving 50–200 seconds earlier than predicted if the core were fully molten. The early arrivals suggested the waves had traveled through solid material, which transmits seismic P waves faster than liquids do.

While looking for these early-arriving signals, the team also picked up a third set of seismic waves, called PKiKP. These are P waves that reflect back to the surface right at the boundary between the inner and outer core. This is the same type of seismic phase that seismologist Inge Lehmann used to reveal the existence of Earth’s solid inner core in 1936.

Finding these PKiKP waves in InSight data offered scientists a strong clue that Mars, too, may have a solid core.

“To me that’s the most exciting thing,” Sun said. “That’s basically saying that you see this inner core structure.”

By measuring the travel times of the seismic phases, Sun’s team estimated that Mars has a solid inner core with a radius of about 613 kilometers—roughly 18% of the radius of the planet itself. That ratio is very similar to that of Earth’s inner core, which is about 19% of Earth’s radius, and much larger than many researchers anticipated Mars could have. The new findings were published in Nature.
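The ratio is simple to verify against a commonly cited mean radius for Mars (about 3,389.5 km; that value is an assumption here, not a figure from the study):

```python
inner_core_radius = 613.0    # km, estimated in the study
mars_radius = 3389.5         # km, commonly cited mean radius of Mars (assumed)

ratio = inner_core_radius / mars_radius   # ~0.18, matching the reported 18%
```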

The team posited that their seismic observations could be explained by an outer core made up mostly of molten iron and nickel, with smaller amounts of sulfur and oxygen and no more than 3.8% carbon, encasing a solid inner core that is further enriched in oxygen.

“It’s like Mars has lifted just the corner of its veil and allowed us to peek inside, but only a sneak peek—we could not get the full picture.”

These levels of light elements remain difficult for scientists to explain, Khan said. As light elements prefer to stay liquid, the existence of a solid inner core means that the outer core around it would have to be even richer in light elements than in previous models, which were already pushing the limits of what seemed plausible. On top of that, the building blocks from which scientists think Mars formed don’t contain enough of these elements to account for the abundance required by a solid core, Khan added.

The finding is also at odds with two studies published 2 years ago, one of them led by Khan, that proposed that a layer of molten rock sits at the bottom of the mantle, just above the core, insulating it like a thermal blanket. Such a layer would keep the core hotter, making it more difficult for it to crystallize and solidify.

“It’s like Mars has lifted just the corner of its veil and allowed us to peek inside, but only a sneak peek—we could not get the full picture,” Khan said. “We are not there yet.”

A Hibernating Dynamo

The new finding also renews questions about the absence of a global magnetic field on Mars. Earth’s magnetic field is sustained by the slow crystallization of the core, which drives magnetism-inducing convective motions in the liquid outer core. We know that Mars once had a magnetic field, but it died out billions of years ago.

If Mars does have a solid inner core, why is its magnetic dynamo inactive?

The likely reason is that core crystallization, and thus convection in the outer core, is too slow to power a global magnetic field on Mars, said Douglas Hemingway, a planetary scientist at the University of Texas at Austin and a coauthor of the new study. Mars’s early magnetic field was likely powered by primordial heat escaping from its core. As the planet cooled over billions of years, this convection weakened, and the magnetic field eventually disappeared.

Finding a solid core on Mars, however, opens up the intriguing possibility of a global magnetic field eventually reigniting, Hemingway said. The process of crystallization happens at the boundary of the outer core and the inner core, and if this surface grows larger over time, it could reach a point where there’s enough convective motion to kick-start the dynamo and revive the global magnetic field.

In earlier work, Hemingway predicted that if the Martian core is crystallizing from the center outward, the magnetic field could turn on sometime within the next billion years. “So, you know, if we wait a billion years and it doesn’t happen, then we were wrong,” he joked.

There may be no definitive confirmation of a solid core on Mars for a long time. The InSight mission ended in 2022, after dust piling up on the lander’s solar panels drained its power supply, and new seismic data from Mars most likely won’t be available for decades.

“Maybe when we send humans, we would be motivated to bring a few seismometers,” Hemingway said.

—Javier Barbuzano (@javibar.bsky.social), Science Writer

Citation: Barbuzano, J. (2025), Scientists may have finally detected a solid inner core on Mars, Eos, 106, https://doi.org/10.1029/2025EO250367. Published on 1 October 2025. Text © 2025. The authors. CC BY-NC-ND 3.0
Except where otherwise noted, images are subject to copyright. Any reuse without express permission from the copyright owner is prohibited.

Unveiling What’s Under the Hood in AI Weather Models

Tue, 09/30/2025 - 13:11
Source: Journal of Geophysical Research: Machine Learning and Computation

Long-term weather forecasting is a difficult task, partly because weather systems are inherently chaotic. Though mathematical equations can approximate the underlying physics of weather, tiny inaccuracies that grow exponentially as a model progresses in time limit most physics-based forecasts to 2 weeks or less.

Estimated values called parameters, which are used to represent the effects of specific physical processes, are important ingredients in these equations. Parameters are inferred from physical data and affect model outcomes by, for example, multiplying or giving different weights to measurements of temperatures, winds, or other factors.

In recent years, artificial intelligence (AI)–based models such as GraphCast and FourCastNet have transformed weather prediction with their ability to learn from large amounts of weather data and produce highly accurate predictions of future weather. However, AI-based models typically contain tens of millions to hundreds of millions of parameters that do not directly translate to underlying physical processes. Because these parameters are not interpretable by researchers, such AI models make only limited contributions to the scientific understanding of weather.

Minor et al. address this limitation by demonstrating the capabilities of a Weak form Scientific Machine Learning (WSciML) algorithm known as Weak form Sparse Identification of Nonlinear Dynamics (WSINDy). Like other AI methods, WSINDy learns from data. But instead of using a highly parameterized approach, it discovers mathematical equations that represent complex, real-world physical processes, such as how air pressure, density, and vorticity interact to determine wind speed and direction.
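The core idea of SINDy, which WSINDy extends with its noise-tolerant weak formulation, can be sketched in a few lines: build a library of candidate terms, then find a sparse combination of them that reproduces the observed dynamics. The toy example below is illustrative only and not from the paper; it recovers the "unknown" system dx/dt = -2x from synthetic data:

```python
import numpy as np

# Toy sketch of the SINDy idea that WSINDy builds on: from observed data,
# pick a sparse combination of candidate terms that reproduces the dynamics.
# The "unknown" system here is dx/dt = -2x, which the fit should recover.
# (The weak-form variant integrates against test functions to handle noise;
# this plain-derivative version is purely illustrative, not from the paper.)

t = np.linspace(0, 2, 201)
x = np.exp(-2 * t)                      # trajectory of dx/dt = -2x with x(0) = 1
dxdt = np.gradient(x, t)                # numerically estimated derivative

# Candidate term library: constant, x, and x^2
library = np.column_stack([np.ones_like(x), x, x**2])
coef, *_ = np.linalg.lstsq(library, dxdt, rcond=None)
coef[np.abs(coef) < 0.1] = 0.0          # sparsify: drop negligible terms

print(coef)  # ≈ [0, -2, 0]: only the x term governs the dynamics
```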

The researchers applied WSINDy to both simulated and real-world turbulent atmospheric fluid data, which include measurements of temperature, pressure, and wind speed. WSINDy used the artificial data to identify the known equations from the simulation. Most important, WSINDy was also able to successfully identify the governing equations of the known atmospheric physics from a global-scale set of assimilated data incorporating real-world weather observations.

These findings suggest that WSINDy could not only aid in weather forecasting but also help uncover new physical insights about weather, the researchers say. They also note that WSINDy is especially well suited for application to data with high levels of observational noise.

However, further work will be needed to refine WSINDy so it can more accurately identify certain kinds of known atmospheric equations, such as realistic models for atmospheric wind, the researchers say. The algorithm is also being explored for use across a wide range of other scientific areas, including unexplained phenomena in fusion, population behaviors driving epidemics, and communication between cells that leads to collective motion in wound healing. (Journal of Geophysical Research: Machine Learning and Computation, https://doi.org/10.1029/2025JH000602, 2025)

—Sarah Stanley, Science Writer

Citation: Stanley, S. (2025), Unveiling what’s under the hood in AI weather models, Eos, 106, https://doi.org/10.1029/2025EO250365. Published on 30 September 2025. Text © 2025. AGU. CC BY-NC-ND 3.0

Spiky Sand Features Can Reveal the Timing of Ancient Earthquakes

Tue, 09/30/2025 - 13:10

Our planet’s tectonic plates have been grinding against and diving below one another since time immemorial. However, the earthquakes that result from all the geological jostling have been actively monitored for less than 2 millennia. Researchers have now proposed how liquefaction features known as sand dikes can be used to both pinpoint and precisely date ancient earthquakes. The team published their findings in Earth and Planetary Science Letters.

The Calling Card of Liquefaction

One of the relatively little-known dangers of earthquakes is liquefaction, in which strong shaking causes water-rich sediments to lose their structural integrity and behave almost like a liquid. When the ground is no longer solid, the results can be catastrophic—buildings can tilt substantially or even sink, and buried infrastructure like pipes can rise to the surface.

Liquefaction is therefore one fingerprint of a strong earthquake. And fortunately for researchers hoping to better understand past earthquakes, liquefaction leaves behind a calling card: sand dikes. These subsurface intrusions of fine-grained sediments resemble upward-pointing icicles. Sand dikes form in a matter of seconds when mixtures of sand and water are squeezed into cracks opened by ground shaking and the water later drains away. “They give undisputed evidence that an earthquake has occurred,” said Devender Kumar, a scientist at the National Geophysical Research Institute, a research laboratory of the Council of Scientific and Industrial Research, in Hyderabad, India.

Determining when a sand dike formed would therefore reveal when its parent earthquake occurred. And understanding such timing has long been a research goal, said Kumar. “That’s the most important question we need to answer in paleoseismology.”

“This is the million-dollar question.”

To get a handle on the timing of ancient earthquakes, previous studies turned to radiocarbon dating of organic matter found near sand dikes. But that technique comes with its own uncertainties, said Ashok Kumar Singhvi, a geoscientist at the Physical Research Laboratory in Navrangpura, India, and Shantou University in Shantou, China. It’s impossible to know whether the organic material was laid down contemporaneously with the sand dike and therefore the earthquake, said Singhvi. “This is the million-dollar question.”

Younger, but Why?

Another technique, known as optically stimulated luminescence, can be used to date sand dike sediments directly. This method relies on measuring the energy stored up over time in quartz grains from the natural radioactive decay of elements like thorium, uranium, and potassium. Earlier investigations using optically stimulated luminescence showed that sand dike sediments tend to be younger than their host rocks, a tantalizing clue that the luminescence signals in sand dike sediments could be reset, or zeroed out, by an earthquake. But no one had ever conclusively demonstrated this zeroing out effect.
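The dating principle behind optically stimulated luminescence reduces to a simple relation: trapped-charge energy builds up at a rate set by local radioactivity, so an age follows from dividing the accumulated ("equivalent") dose by the environmental dose rate. A sketch with hypothetical numbers chosen purely for illustration:

```python
# The basic luminescence-age relation behind the method described above:
# trapped-charge dose builds up at the local radioactivity-driven dose rate,
# so age = equivalent dose / dose rate. The numbers below are hypothetical.

equivalent_dose_gy = 12.0       # dose recorded by the quartz grains (gray)
dose_rate_gy_per_kyr = 3.0      # dose delivered per 1,000 years by Th, U, K decay
age_kyr = equivalent_dose_gy / dose_rate_gy_per_kyr

print(age_kyr)  # → 4.0: the grains were last reset about 4,000 years ago
```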

Anil Tyagi, a physicist also at the Physical Research Laboratory, and his colleagues, including Kumar and Singhvi, set out to do just that. Heat, light, and pressure can all reset a material’s luminescence signal, the team knew. But sand dikes form underground, ruling out light, and in sediments too soft to generate sufficient pressure, Tyagi and his collaborators concluded. That left heat.

Using a theoretical model developed in the 1970s, the researchers calculated the increase in temperature associated with the formation of a sand dike. Heating occurs simply because of friction, said Kumar: Sediment grains run into one another as they pour upward into a crack at speeds exceeding several tens of meters per second. The team estimated that temperatures of up to 450°C were attainable, particularly in the centers of dikes, where sediment grains would be inflowing the fastest.

Tyagi and his colleagues experimentally verified that temperature estimate by analyzing sediment samples taken from five sand dikes in northeastern India. The team calculated that most of the samples had experienced heating to at least 350°C. Such temperatures are sufficient to reset the luminescence signal of quartz grains, earlier work has shown.

“We have a direct method to date sand dikes, and hence past earthquakes.”

These findings demonstrate that quartz grains do indeed zero out their ages when sand dikes form. That fact makes sand dikes valuable and accurate tracers of past ground shaking, said Singhvi. “We have a direct method to date sand dikes, and hence past earthquakes.”

These results are convincing and pave the way for paleoseismological investigations, said Naomi Porat, a luminescence dating scientist who recently retired from the Geological Survey of Israel and who was not involved in the research. In 2007, Porat and her colleagues published a paper that suggested that sand dikes’ luminescence signals were being reset, but the team didn’t posit a mechanism. “We left it as an open question,” said Porat. “It’s so nice to see this paper,” she added. “I waited for 20 years.”

—Katherine Kornei (@KatherineKornei), Science Writer

Citation: Kornei, K. (2025), Spiky sand features can reveal the timing of ancient earthquakes, Eos, 106, https://doi.org/10.1029/2025EO250364. Published on 30 September 2025. Text © 2025. The authors. CC BY-NC-ND 3.0

Small Satellites, Big Futures

Mon, 09/29/2025 - 13:15
CubeSats on the Rise

When Devin Phyllides graduates from college next year, she’ll be able to boast something few students can: She’ll have helped launch a satellite into space.

“It’s probably my favorite job I’ve ever had,” she said.

Phyllides, a senior undergraduate physics student at the University of New Hampshire (UNH) in Durham, is a research assistant for the 3UCubed CubeSat project, a collaboration between UNH, Howard University in Washington, D.C., and Sonoma State University in Rohnert Park, Calif.

CubeSats are small satellites first developed in 1999 as a platform for education and space exploration. They are measured in “units” or U, where a 1U CubeSat is a cube measuring 10 centimeters per side. A 2U CubeSat is equivalent to two 1U CubeSats stacked together, a 3U CubeSat is three cubes stacked together, and so on. The NASA-funded 3UCubed satellite is the size of a 1-quart milk carton.
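The “U” convention amounts to a simple sizing rule: an N-unit CubeSat stacks N 10-centimeter cubes, giving a nominal envelope of 10 × 10 × 10N centimeters. As a tiny illustrative helper (not part of any real flight toolchain):

```python
# The "U" sizing rule in code: an N-unit CubeSat stacks N 10 cm cubes,
# giving a nominal envelope of 10 x 10 x 10N centimeters.
# A tiny illustrative helper, not part of any real flight toolchain.

def cubesat_envelope_cm(units: int) -> tuple[int, int, int]:
    return (10, 10, 10 * units)

print(cubesat_envelope_cm(1))  # → (10, 10, 10), a 1U cube
print(cubesat_envelope_cm(3))  # → (10, 10, 30), matching 3UCubed's size
```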

Dozens of people from the three universities have helped design, build, and test the satellite ahead of its planned October 2025 launch, and most of them are university students.

“The goal from the get-go of this CubeSat is to give students hands-on experience, not just in building…but in the full life cycle of a mission,” said Noé Lugaz, a space scientist at UNH and colead of the 3UCubed project.

Students around the world—high schoolers, undergraduates, and graduate students—have participated in CubeSat missions. Student-focused satellite programs not only provide important science in multiple fields but also inspire and engage the next generation of space scientists and engineers.

Why Build a CubeSat?

An entire aerospace industry has been developed around CubeSats, but the tiny satellites also remain a cornerstone of science, technology, engineering, and mathematics (STEM) education for all ages. Today, many of the satellites’ components, such as the chassis, navigation systems, cameras, and scanners, can be purchased off the shelf, and most don’t require advanced technical skills to assemble.

Building a CubeSat “still provides the challenge of putting everything together, and making sure the software works, and making sure that it does exactly what needs to be done,” said Floor Bagchus, a master’s student in aerospace engineering and the educational manager for the Da Vinci satellite at Delft University of Technology (TU Delft) in the Netherlands. But because so many of the components come ready to install, “it’s really a very accessible way for engineering students to learn how to make an actual satellite,” she said.

Some CubeSats are space-ready and are launched into orbit or released from the International Space Station. Others are not designed to leave the atmosphere and are lofted by atmospheric balloons for a short time before descending. Their small size and light weight make CubeSats ideally suited for doing science in the upper atmosphere or in low Earth orbit, such as studying Earth’s magnetosphere, atmosphere, and surface conditions.

CubeSats aren’t the only type of small, budget-friendly space mission in the game, Lugaz said, but in his opinion, they offer the most science per dollar and a realistic space mission experience.

In comparison with a CubeSat, “a balloon, for example, would be cheaper, faster, and maybe scientists can do faster turnover and reach more students,” Lugaz said. “A CubeSat is obviously a longer program. But the positive side of this is that the science you can do with a CubeSat is much more [varied and] is also better training for some of the jobs in industry.”

Their size also helps make the idea of space and satellites approachable, especially for younger students, Bagchus said.

“I think people are a bit scared of space, and teachers are scared of space, because they think that space is so gigantic, dark, vast, and complex,” she said. “How can you make sense of such a difficult thing? How can you make students not be so scared of it, and show them that you can actually work in space, do things in space, and overcome very difficult hurdles by very basic principles? I think it’s a very important thing to do in primary schools and high schools to show that you can actually do challenging things.”

Building STEM Pathways for High Schoolers

The simplicity of a CubeSat means that students with limited or no technical experience can learn how to select the satellite components, install the scientific payloads and navigation systems, design the software, and analyze data. In this way, CubeSats can be an entry point into STEM careers.

“I never really thought I’d be able to say that I launched a satellite to space in my high school years.”

In 2022, the Israel Space Agency launched the TEVEL CubeSat constellation, a program designed to provide high school students with a chance to build and launch satellites. Avigail Anidjar learned about the program when she was in eighth grade. When she started at Ulpanat AMIT Givat Shmuel High School near Tel Aviv the following year, she was excited to learn that the school was participating in the program’s second iteration. She joined TEVEL 2 in 2023, at 15 years old.

“I never really thought I’d be able to say that I launched a satellite to space in my high school years,” Anidjar said.

TEVEL 2 gave nine teams of Israeli high school students the opportunity to build and launch a 1U CubeSat. Building a satellite exposes students to an array of STEM fields, including atmospheric science, computer science, engineering, physics, and robotics.

The child of two engineers, Anidjar had taken introductory classes in physics and coding. Still, she learned a lot of hands-on skills in data analysis, computer programming, and problem-solving while working on her school’s CubeSat.

“I’ve always known that I want to go into this kind of field…but now I know that dealing with more space things and satellites is something that’s very interesting, and maybe I want to focus more on that,” she said.

Nine TEVEL 2 satellites, one from each participating school, launched in March 2025 and will operate together to measure the flux of high-energy particles and solar cosmic rays over roughly the next 2 years. The satellites also feature a transponder for ham radio communication.

Anidjar said the launch was “really stressful” but also very rewarding. “We saw our whole work actually come to life. And after a few days, we also got a beacon from it [showing] that it actually works and that it’s alive, and not just a piece of metal in space. It was really exciting.”

Anidjar recently graduated but remains on the satellite’s data analysis team.

Student-built CubeSats can be a tool for educational empowerment, said Maryam Sani, a STEM educator and advocate, and the education lead for the Space Prize Foundation, a U.S.-based nonprofit dedicated to promoting space education and innovation.

In October 2024, Space Prize sponsored the NYC CubeSat challenge, during which 38 high school– and college-age girls and gender minority students from Colombia, Saudi Arabia, the United Kingdom, and the United States spent three intensive days in New York City (NYC) learning what it takes to create a satellite.

The students were split into teams with others they had never met. Some had interest and prior knowledge about space or engineering from school or programs such as Space Camp. Others joined out of curiosity.

“And that was brilliant,” Sani said. “To quote one student, she said, ‘I just thought it would be something nice to do…I can’t believe how much I learned and how much this has made me more interested in finding out about the space industry.’ Which is exactly what we wanted.”

Participants of the 2024 Space Prize NYC CubeSat challenge gather data from their satellites—and pose for photos—while standing on the deck of the museum ship USS Intrepid in New York City. Credit: Space Prize

Throughout the program the students learned basic physics, circuitry, and coding. Each team brainstormed a problem in New York City that a CubeSat could help solve, designed the system, and then built it. Weather prevented the launches, but the participants collected and analyzed data from their creations on the ground.

Sani said that some of the students from Saudi Arabia extended their project after they went home, eventually launching their CubeSat and incorporating the data into an undergraduate project for electrical and computer engineering degrees.

The CubeSat challenge “was a surreal experience,” wrote one participant in her feedback form. “It made me feel more confident that being a woman in STEM was a possibility.”

Space for All

CubeSats can lower the barrier to entry for students around the world who can’t join rocketry programs or other STEM opportunities. CubeSat education programs can foster international participation and collaboration in science, even when pandemic lockdowns prevent in-person meetups.

In 2021, FIRST Global, a U.S.-based nonprofit that promotes international youth STEM education and engagement, hosted a CubeSat Prototype Challenge that enabled students from 176 countries to build and launch CubeSats. Among its initiatives, FIRST Global has organized annual Olympic-style robotics competitions for national youth teams since 2017. The competitions are typically held in-person, but the COVID-19 pandemic prevented the 2021 gathering. Organizers realized that a CubeSat challenge, which they had never done before, could be the answer if it were held remotely.

“We’re trying to connect the world, but we couldn’t do that physically,” said Matt Stalford, the communications director of FIRST Global. “We certainly could do that symbolically, and CubeSats were a huge part of that.”

Each FIRST Global CubeSat challenge team received a standardized CubeSat prototype assembly kit, from which they built their CubeSats. Credit: FIRST Global

For the challenge, each national team—made up mostly of teenagers—defined a mission of importance to their community and designed a CubeSat to collect the data needed to solve it. For example, Team Japan studied residual airborne radiation near the Fukushima nuclear site, Team Seychelles collected environmental data to improve local weather forecast accuracy, and Team Argentina studied how local atmospheric conditions obstruct radio transmissions. FIRST Global shipped each team a standardized CubeSat prototype assembly kit.

“Then they had to do the hard part of building it, launching it, taking that data, and writing a report on what that data produced,” Stalford said. Using balloons, the teams launched 90 CubeSats into the lower atmosphere.

Stalford said that asking students to design a satellite that could help solve a problem in their community made the CubeSat challenge more meaningful to them.

“Kids were built to care about the world,” he said. “When you can spark the imagination, when you can get them asking questions like, ‘How can I be part of the solution?’, that’s where kids come alive, and that’s how you spark that love of STEM.”

Reaching Even More Students

FIRST Global’s CubeSat Prototype Challenge inspired the creation of other CubeSat programs, including one run by STEMbees in Accra, Ghana. STEMbees is a nonprofit organization whose mission is to increase the visibility and participation of girls and women in STEM in Ghana and to close the STEM gender gap across Africa.

A STEMbees expert mentored the eight girls from Team Ghana in the 2021 FIRST Global challenge. Team Ghana members built and launched their CubeSat during the challenge and wanted to launch another one after the contest ended. They took their blueprints, customized them with 3D printing, and built a new one. The group went to nearby Academic City University for a launch that attracted the attention of the university students and local community.

“We saw the impact that it created,” said Benedict Amoako, a robotics engineer and STEM instructor with STEMbees. “We had basically half the university students come out to see what these high school girls were trying to do on their large football pitch, and [they] were very impressed.”

Seeing the success of Team Ghana’s second launch made the STEMbees team want to expand its CubeSat program to reach even more students across Ghana, Amoako said. The organization partnered with AIMS Ghana and the U.S. Embassy in Ghana to create the Infinity Girls in Space Project. By August 2023, more than 110 girls from 37 schools across the country had learned about and helped build CubeSat prototypes.

Aerospace engineering and satellite imagery analysis are not commonly taught in primary or high school in Ghana, explained Lady-Omega Hammond, STEMbees product and start-up growth strategist. Unless a student goes into one of a few specific careers—for example, the military, telecommunications, or land surveying—“you might not find yourself, as a young person, wanting to think about what’s going on beyond the skies,” Hammond said. “CubeSats gave us a very interesting angle to pique the interest.”

As part of the Infinity Girls in Space Project, cohorts of high school girls across Ghana build and launch CubeSat prototypes. Credit: STEMbees

During Infinity Girls in Space, STEMbees provided CubeSat prototype training modules, lesson plans, assembly kits, and technical resources to teachers and students at more than 3 dozen high schools across Ghana. Students learned 3D printing, satellite assembly, coding, and basic physics and atmospheric science. Cohorts from several nearby schools, joined in person by STEMbees experts, worked together for the builds and launches. The eight cohorts lofted 10 CubeSat prototypes into the lower atmosphere by balloon, and they collected images and basic atmospheric readings before their teams retrieved them.

Although some students struggled initially because the concepts were new to them, “I think it all came together when they were working as a team” and supporting one another through the learning process, Amoako said.

“The pride and joy that you see when the parents are coming to see the end result of what their girls have created is always very heartwarming.”

“The pride and joy that you see when the parents are coming to see the end result of what their girls have created is always very heartwarming,” Hammond said. Some participants have graduated and gone on to study engineering.

At the university level, students who participate in CubeSat missions can explore more complex technical and science skills such as payload design, spacecraft assembly, launch testing, and data pipeline development. They can then leverage this hands-on experience into academic or aerospace industry jobs. Postdoctoral researchers and senior graduate students gain experience mentoring newer team members and also experience a space mission’s life cycle.

3UCubed has been in development for several years. After launch into low Earth orbit, the satellite will measure how particle precipitation affects the polar thermosphere and the lifetime of satellites at this altitude.

To date, 68 undergraduate students and two graduate students have been part of the 3UCubed team. They have gone through all stages of mission development, Lugaz said, from concept and design reviews, to building, programming, and testing. After launch, students will be involved with collecting and analyzing data and publishing the results.

The 3UCubed satellite, shown here in an artist’s rendering, is only 30 centimeters long. Credit: University of New Hampshire

Teaching Future Teachers

TU Delft’s Da Vinci CubeSat offers those same experiences and skill development opportunities to its student team members, Bagchus said, and it also provides opportunities for those who want to become STEM educators.

“The goal of the satellite is, very simply put, purely educational,” Bagchus said. “We want to provide STEM education to inspire the future generation for STEM and also make them aware that space is literally all around them.”

Da Vinci is planned to launch in 2027 through a partnership with the European Space Agency. The 2U CubeSat will have two educational payloads: one geared toward primary schoolers and one for secondary schoolers. The team is writing and testing free lesson modules for each payload so that teachers and independent learners around the world can learn from the satellite. Members of the satellite team who want to become teachers themselves are gaining experience in developing lesson plans that incorporate satellite technology.

“We asked the students, ‘What would you like to do in space?’ And the answer was, ‘I want to play in space.’”

“We did a primary school competition, and we asked the students, ‘What would you like to do in space?’” Bagchus explained. “And the answer was, ‘I want to play in space.’”

The team designed a payload that will allow students to roll dice in space. The satellite will send them pictures and videos of the dice rolling, so they can make statistical calculations and play chance games. The design involved figuring out the technical aspects of controlling a space-based dice roll from the ground and delivering the results in a way that’s accessible to primary schoolers.

The payload for secondary schoolers teaches them how cosmic radiation in the space environment degrades digital photos when it strikes a pixel. One lesson plan for this payload guides students in developing computer code to restore image quality, similar to the Hamming codes used to process space telescope images—another practical lesson for students interested in space science.
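Hamming codes correct exactly the kind of single-bit upset a cosmic ray strike produces. The following minimal Hamming(7,4) sketch is illustrative only, not the Da Vinci team’s actual lesson code: it encodes 4 data bits with 3 parity bits, flips one bit, and recovers the original.

```python
# Minimal, illustrative Hamming(7,4) error correction: 4 data bits gain
# 3 parity bits; any single flipped bit can then be located and repaired.

def encode(data):
    """data: 4 bits -> 7-bit codeword laid out as [p1, p2, d1, p3, d2, d3, d4]."""
    d1, d2, d3, d4 = data
    p1 = d1 ^ d2 ^ d4   # parity over codeword positions 1, 3, 5, 7
    p2 = d1 ^ d3 ^ d4   # parity over positions 2, 3, 6, 7
    p3 = d2 ^ d3 ^ d4   # parity over positions 4, 5, 6, 7
    return [p1, p2, d1, p3, d2, d3, d4]

def correct(codeword):
    """Return a copy of codeword with any single bit error fixed."""
    c = list(codeword)
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]   # re-check positions 1, 3, 5, 7
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]   # re-check positions 2, 3, 6, 7
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]   # re-check positions 4, 5, 6, 7
    syndrome = s1 + 2 * s2 + 4 * s3  # 1-indexed error position; 0 means clean
    if syndrome:
        c[syndrome - 1] ^= 1
    return c

word = encode([1, 0, 1, 1])
hit = list(word)
hit[4] ^= 1                          # simulate a cosmic ray flipping bit 5
assert correct(hit) == word          # the original codeword is recovered
```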

The lesson plans and master classes for both modules will be available in-person and virtually.

“Not all people have the same access to education or can have their true potential achieved through education, because of where they were born, or maybe some personal issues they are facing,” Bagchus said. The Da Vinci satellite is “a beautiful initiative to at least try to help a little bit in that aspect.”

The Da Vinci CubeSat will have an educational payload tailored for primary school students that will allow them to roll dice in space. Credit: Da Vinci Satellite/TU Delft

Launching into the Future

Some CubeSat prototypes are quick to develop. Others take years to complete. Case studies have found that lack of student training, time commitment constraints, and turnover from graduation can be challenges to CubeSat programs with longer lifespans. But using prototype kits and satellite simulators as well as dedicating time to hands-on training can overcome time and training issues, and turnover can provide an opportunity to get more students involved.

“You don’t find this in your everyday secondary school or even in university.”

“You don’t find this in your everyday secondary school or even in university,” Hammond said. The long-term influence of a CubeSat on its student team members might not be immediately clear, she said, “but I believe in a couple of years, it will definitely influence their thinking into why they chose a career in STEM or not.”

Phyllides, who joined 3UCubed last year, said she got involved with the program through a friend and had no experience with satellites when she started. Now, after more than a year calibrating the onboard instruments and analyzing test data, she’s eagerly awaiting the satellite’s launch.

“I want to see if the code that I’ve been writing will work and actually show our data,” she said. She hopes to analyze 3UCubed data as part of her senior project. “That would be like a huge, huge goal of mine.”

Last year, she presented on the 3UCubed mission at AGU’s annual meeting and found networking with other students involved with space missions to be a valuable experience. She’s still figuring out what she wants to do after graduation, but her work with 3UCubed has expanded her horizons.

“It’s really, really awesome,” she said. “I’m very, very lucky.”

—Kimberly M. S. Cartier (@astrokimcartier.bsky.social), Staff Writer

Citation: Cartier, K. M. S. (2025), Small satellites, big futures, Eos, 106, https://doi.org/10.1029/2025EO250359. Published on 29 September 2025. Text © 2025. AGU. CC BY-NC-ND 3.0

Squaring Up in Space

Mon, 09/29/2025 - 12:56
CubeSats on the Rise

CubeSats, those boxy satellites that float above Earth alone or in miniature constellations, are emerging as little engines (without engines) of accessible education and affordable engineering.

“The goal from the get-go” of CubeSat education programs “is to give students hands-on experience, not just in building…but in the full life cycle of a mission,” says space scientist Noé Lugaz in Kimberly Cartier’s forward-looking feature “Small Satellites, Big Futures.” Such programs have reached those goals with successfully launched missions designed by STEM students from Saudi Arabia to Seychelles. And international CubeSat projects (as well as readily available hardware and innovative engineering) have expanded career opportunities for budding space scientists from Africa to Southeast Asia.

Other goals of CubeSat programs include the pursuit of economic and ecological sustainability. Wooden satellites, like the ones profiled in Grace van Deelen’s “A New Satellite Material Comes Out of the Woodwork,” might just do the trick.

In more terrestrial matters, a scientist-authored opinion considers the implications of land management in the Himalayas in “Beyond Majesty and Myths: Facing the Realities of Mountainside Development.”

This month’s articles offer a good reflection of Earth and space scientists in these uncertain times: excavating down-to-earth opportunities, reaching for the stars. I think I can, I think I can…

—Caryl-Sue Micalizio, Editor-in-Chief

Citation: Micalizio, C.-S. (2025), Squaring up in space, Eos, 106, https://doi.org/10.1029/2025EO250358. Published on 29 September 2025. Text © 2025. AGU. CC BY-NC-ND 3.0

New Evidence for a Wobbly Venus?

Mon, 09/29/2025 - 12:00
Editors’ Highlights are summaries of recent papers by AGU’s journal editors. Source: AGU Advances

Only big impactors can punch their way through Venus’s atmosphere. When they do, the impact lofts dust, which is then blown downwind as it drifts back to the surface. The resulting parabola-shaped dust deposits are unique to Venus and indicate the wind direction at the time of impact.

In a clever study, Austin et al. [2025] show that the parabolas that appear oldest and most degraded depart most strongly from the expected wind direction. This suggests that wind directions on Venus have changed over time. But why? Because of Venus’s slow spin, its rotation axis is unstable. The authors suggest that the parabolas record winds from a period when Venus’s rotation axis pointed in a different direction. Future Earth-based or spacecraft observations might be able to test this theory.

Citation: Austin, T. J., O’Rourke, J. G., Izenberg, N., & Silber, E. A. (2025). Survey and modeling of windblown ejecta deposits on Venus. AGU Advances, 6, e2025AV001906. https://doi.org/10.1029/2025AV001906

—Francis Nimmo, Editor, AGU Advances

Text © 2025. The authors. CC BY-NC-ND 3.0
