EOS

Eos
Science News by AGU

Zircon Crystals Could Reveal Earth’s Path Among the Stars

Fri, 10/10/2025 - 12:53

Tiny crystals in Earth’s crust may have recorded meteorite and comet impacts as our planet traveled through the spiral arms of the Milky Way over more than 4 billion years, according to new research.

The study is one of the first to suggest that galactic-scale processes can affect Earth’s geology, and researchers think similar evidence might be found on other bodies in the solar system, including the Moon and Mars.

“This is something that could connect the Earth, the Moon, and Mars into the wider galactic surroundings.”

“This is so interesting and exciting—we are potentially seeing something that is not just unique to Earth,” explained geologist Chris Kirkland of Australia’s Curtin University, the first author of the new study published in Physical Review Research. “This is something that could connect the Earth, the Moon, and Mars into the wider galactic surroundings.”

Kirkland and his coauthor, University of Lincoln astrophysicist Phil Sutton, studied changes in oxygen isotopes in a database of tens of thousands of dated crystals of zircon—a silicate mineral with the chemical formula ZrSiO4 that is common in Earth’s crust. They compared their findings to maps of the Milky Way galaxy that show its neutral hydrogen, or H I.

Neutral hydrogen, with one proton and one electron, is the most abundant element in the universe, and its density is particularly high in the arms of the Milky Way.

Because they are almost exactly the same size, uranium atoms sometimes replace the zirconium atoms in zircon. Uranium radioactively decays into lead over time, so geologists can study the levels of uranium and lead isotopes in zircon crystals to determine when the crystals formed, sometimes in the first phases of the evolution of Earth’s crust about 4.4 billion years ago.
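The arithmetic behind that inbuilt clock is the standard uranium-lead age equation. Here is a minimal Python sketch (illustrative only, not code from the study) of how a measured ratio of radiogenic lead 206 to uranium 238 converts to a crystallization age:

import math

# Decay constant of 238U in 1/years (standard value used in U-Pb geochronology).
LAMBDA_238U = 1.55125e-10

def u_pb_age_years(pb206_u238):
    # Age implied by a radiogenic 206Pb/238U ratio, assuming all the lead
    # comes from in situ decay of 238U since the crystal formed.
    return math.log(1.0 + pb206_u238) / LAMBDA_238U

# A ratio near 0.98 corresponds to a Hadean zircon roughly 4.4 billion years old.
print(round(u_pb_age_years(0.98) / 1e9, 2))  # ~4.4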

“Zircon crystals are a geologist’s best friend…we can get a lot of information from a single zircon grain.”

“Zircon crystals are a geologist’s best friend,” Kirkland said. “They have an inbuilt clock, and they carry a chemical signature that tells us how they formed—so we can get a lot of information from a single zircon grain.”

Queen’s University geochemist Christopher Spencer, who was not involved in the study, said that the work was fascinating and provocative. “I think the study is a reminder that Earth does not evolve in isolation and that interdisciplinary thinking, however speculative at first, can open up new ways of framing questions about our planet’s history.”

Oxygen Isotope Ratios

The key to the latest research was in the ratios of isotopes—forms of the same chemical element that have different numbers of neutrons—in the oxygen atoms of zircon’s silicate group.

The relative levels of oxygen isotopes in samples of zircon crystals can tell geologists whether the crystals formed high in the crust, perhaps while interacting with water and sediments, or deeper within Earth’s mantle.

Kirkland said the latest study examined the distribution of the ratios of oxygen isotopes found in a dataset of zircon crystals sampled from around the world. The scientists evaluated the data’s “kurtosis,” or the measure of how flat or peaked a distribution is. A dataset with high kurtosis has a narrow distribution, with most values occurring in the middle and causing a sharp peak in the distribution curve. In contrast, a dataset with low kurtosis has a wide distribution with more high and low values, causing a wider distribution curve with a less pronounced peak.
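For readers who want to see the statistic in action, here is a short Python sketch (not the study's code) contrasting the excess kurtosis of a sharply peaked distribution with that of a flat, wide one:

import numpy as np
from scipy.stats import kurtosis

rng = np.random.default_rng(0)

# Synthetic stand-ins for two isotope-ratio datasets with the same center.
peaked = rng.laplace(loc=6.0, scale=0.5, size=10_000)   # sharp central peak
flat = rng.uniform(low=4.0, high=8.0, size=10_000)      # values spread evenly

# scipy reports "excess" kurtosis: about 0 for a normal distribution,
# positive for sharply peaked data, negative for flat, wide data.
print(kurtosis(peaked))  # roughly +3
print(kurtosis(flat))    # roughly -1.2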

The researchers determined that periods of high oxygen isotope kurtosis corresponded to times when our solar system was crossing the dense spiral arms of the Milky Way galaxy. Such crossings occurred roughly every 187 million years on average during our solar system’s 748-million-year orbit around the galactic center at a speed of about 240 kilometers per second.

In addition to H I, the spiral arms are filled with many more stars than the interstellar space between them. The gravity of those stars seems to have disturbed the Oort Cloud—the haze of billions of icy rock fragments that surrounds our solar system. That, in turn, caused more comets and meteorites to strike Earth as it passed through the galactic arms, melting the crust in many places, Kirkland said. “By looking at the variability of the [zircon] signal over time, we were able to get an indication of how different the magma production on the planet was at that time.”

Professor Chris Kirkland uses an ion microprobe to date zircon mineral grains. Credit: C. L. Kirkland

He warned that correlation does not mean causation but said that in this case there seemed to be no other plausible cause for the periodic kurtosis of the oxygen isotope ratios in zircons. “It is very important that we are able to see the frequency of [meteor and comet] impacts” on Earth, Kirkland said. “Rather than an internal process, we seem to be looking at an external process.”

Other experts say the new study is notable for outlining the concept that galactic processes could leave geological traces, but they caution that it does not yet offer conclusive proof.

Earth scientist Craig Storey of the University of Portsmouth in the United Kingdom, who was not involved in the new study, said crustal melting does not necessarily prove an increase in meteorite or comet impacts. Instead, processes internal to Earth, such as volcanic or tectonic activity, could have melted the crust at several stages of our planet’s geological history.

He is also concerned that some of the proposed correlations in the study may not be correct. “It is an interesting idea, and there are potentially ways to test it, but I don’t think this is the way to test it,” Storey said.

—Tom Metcalfe (@HHAspasia), Science Writer

Citation: Metcalfe, T. (2025), Zircon crystals could reveal Earth’s path among the stars, Eos, 106, https://doi.org/10.1029/2025EO250379. Published on 10 October 2025. Text © 2025. The authors. CC BY-NC-ND 3.0
Except where otherwise noted, images are subject to copyright. Any reuse without express permission from the copyright owner is prohibited.

New 3D Model Reveals Geophysical Structures Beneath Britain

Fri, 10/10/2025 - 12:53
Source: Journal of Geophysical Research: Solid Earth

Magnetotelluric (MT) data, which contain measurements of electric and magnetic field variations at Earth’s surface, provide insights into the electrical resistivity of Earth’s crust and upper mantle. Changes in resistivity, a measure of how strongly a material opposes electrical current, can indicate the presence of geologic features such as igneous intrusions or sedimentary basins, meaning MT surveys can complement other kinds of geophysical surveys to help reveal Earth’s subsurface. In addition, such surveys can play an important role in improving understanding of the risks space weather poses to human infrastructure.

Montiel-Álvarez et al. present the first 3D electrical resistivity model of Britain, based on long-period MT data (using measurements gathered every second for 4–6 weeks at a time) from across the island. Their model, called BERM-2024, points to previously recognized tectonic and geological structures as well as likely new ones. The authors also model the effects of a recent solar storm on Earth’s geoelectric field, validating the usefulness of MT-based approaches for space weather impact forecasting.

The BERM-2024 electrical resistivity model is based on MT data from 69 sites in Britain, including both new and legacy datasets. Creating the final model involved processing the raw time series data and accounting for the “coastal effect” caused by the conductivity of ocean water when inverting the data—or calculating causes based on observations.

Sensitivity tests of the new model indicate it resolves features to depths of 200 kilometers (125 miles), including many known from other geophysical surveys and geological observations. It also reveals new anomalies, including highly conductive areas under Scotland’s Southern Uplands Terrane and a resistive anomaly under the island of Anglesey. More intriguingly, a large, previously unknown conductive anomaly appears in the model between 85 and 140 kilometers (52–87 miles) beneath the West Midlands region.

The authors tested the utility of their resistivity model for estimating the electric field at Earth’s surface, which is key in forecasting the effects of geomagnetically induced currents caused by space weather. To do so, they obtained a time series of the horizontal electric field across Britain during a solar storm that occurred on 10–11 October 2024, which led to bright displays of aurora borealis across the Northern Hemisphere. They found good agreement between their modeled time series and those measured at observatories, indicating that electrical resistivity models are a tool that can provide accurate information for space weather impact planning. (Journal of Geophysical Research: Solid Earth, https://doi.org/10.1029/2025JB031813, 2025)
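This application rests on the standard magnetotelluric relation: in the frequency domain, the horizontal electric field is the product of an impedance tensor, derived from the resistivity structure, and the horizontal magnetic field. The Python sketch below illustrates that step generically; it is not the authors' code, and the impedance values are assumed inputs (for example, ones derived from a model such as BERM-2024).

import numpy as np

def estimate_geoelectric_field(bx, by, Z):
    # bx, by: horizontal magnetic field time series in nT, evenly sampled.
    # Z: frequency-dependent 2x2 impedance tensor, shape (len(bx)//2 + 1, 2, 2),
    #    in (mV/km)/nT, e.g. derived from a 3D resistivity model.
    # Returns ex, ey in mV/km via E(f) = Z(f) * B(f).
    Bx, By = np.fft.rfft(bx), np.fft.rfft(by)
    Ex = Z[:, 0, 0] * Bx + Z[:, 0, 1] * By
    Ey = Z[:, 1, 0] * Bx + Z[:, 1, 1] * By
    n = len(bx)
    return np.fft.irfft(Ex, n=n), np.fft.irfft(Ey, n=n)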

—Nathaniel Scharping (@nathanielscharp), Science Writer

Citation: Scharping, N. (2025), New 3D model reveals geophysical structures beneath Britain, Eos, 106, https://doi.org/10.1029/2025EO250381. Published on 10 October 2025. Text © 2025. AGU. CC BY-NC-ND 3.0
Except where otherwise noted, images are subject to copyright. Any reuse without express permission from the copyright owner is prohibited.

Pinpointing Sewage Seeps in Hawaii

Thu, 10/09/2025 - 13:09

In Hawaii, most of the population relies on private septic tanks or cesspools to dispose of sewage and other wastewater. There are more than 88,000 cesspools in the state, with about 55,000 on the Big Island alone. These systems, as opposed to more strictly regulated municipal wastewater treatment units, have a higher risk of sewage leaking into the porous substrate.

A recent study published in Frontiers in Marine Science identifies sewage-contaminated submarine groundwater discharge (SGD) sites, pinpointing specific locations that stakeholders may want to prioritize for mitigation efforts.

Modeling and Mapping

Previous studies estimated that groundwater flows deliver 3 to 4 times more discharge to oceans than rivers do, making them significant pathways for transporting pollutants.

In response to pollution concerns from the local community, a team from Arizona State University, with the support of the Hawaiʻi Marine Education and Research Center, used airborne mapping to identify locations where SGD reached the ocean along the western coastline of the Big Island.

Sewage-contaminated water (colored blue in this photograph) enters the ocean from submarine groundwater discharge sites on the Kona coast of the Big Island. Credit: ASU Global Airborne Observatory

To precisely identify these freshwater-seawater interfaces, researchers built on previous studies that used thermal sensors to capture the temperature difference between the two bodies of water. Figuring out which of these discrete interface points were problematic “was very challenging,” said Kelly Hondula, a researcher at the Center for Global Discovery and Conservation Science and first author of the study.

The team identified more than 1,000 discharge points and collected samples from 47 locations. “We chose points where we could localize freshwater emerging from the land or points of high community interest,” explained Hondula.

In addition to aerial surveys, researchers analyzed the discharge points by monitoring their salinity gradients and measuring levels of Enterococcus, a group of bacteria that frequently serve as key fecal indicators in public health testing. They integrated these data into a statistical model that used upstream land cover and known sewage sites to predict the likelihood of sewage and bacterial contamination for each SGD site along the western Hawaiʻi coastline.
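The published model is not reproduced here, but the general shape of such an approach, predicting the probability of contamination at each discharge point from upstream land cover variables, can be sketched with a simple logistic regression on hypothetical data (all variable names and values below are invented for illustration):

import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical predictors, one row per sampled discharge point:
# fraction of upstream area developed, count of known cesspools upstream,
# and distance (m) from the nearest development to the coast.
X = np.array([
    [0.60, 12, 150],
    [0.10,  1, 900],
    [0.45,  8, 300],
    [0.05,  0, 1200],
])
# 1 = Enterococcus above a contamination threshold, 0 = below.
y = np.array([1, 0, 1, 0])

model = LogisticRegression().fit(X, y)

# Estimated contamination probability for an unsampled discharge point.
print(model.predict_proba([[0.30, 5, 400]])[0, 1])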

The techniques allowed scientists to identify regions of the built environment that are associated with contamination. Besides areas with septic systems and cesspools, they found a high correlation between sewage discharge and development within the first 500 meters of the coast.

“Sewage going into the ground comes out in the ocean, with often a worrying level of waste contamination.”

The geology of a discharge point also contributes to its risk of contamination. Discharge points around the island’s South Kona region, for instance, feature “some of the youngest and most porous volcanic substrate in the archipelago, with little soil development and a high degree of hydrologic connectivity between point sources of pollution and coastal waters,” the authors wrote. Although South Kona has relatively sparse development, increased land use will likely have a disproportionate effect on groundwater quality, they concluded.

“We were surprised to find such clear results: Sewage going into the ground comes out in the ocean, with often a worrying level of waste contamination,” said Hondula.

Mapping Mitigation

As communities continue to invest in coastal development, understanding the effect of sewage discharge and how to avoid it is becoming an increasingly pressing concern worldwide.

As such, the new study “contributes to the growing body of evidence correlating sewage-tainted groundwater discharge with coastal water quality, showing a strong linkage between wastewater and development in the nearshore area. That’s something that land managers and conservation scientists should really take into account,” said Henrietta Dulai, a geochemist at the University of Hawaiʻi at Mānoa who was not involved in the study.

The state of Hawaii has recognized the particular risk posed by largely unregulated cesspools leaking sewage-contaminated groundwater to the ocean. In fact, there is a state mandate to eliminate cesspools by 2050, but the associated cost is slowing the process.

Many scientists say the costs of phasing out cesspools are far outweighed by the health benefits. “We need to consider the financial sides of replacing cesspools versus the benefit of preserving the water quality for the environment and the people,” said Tristan McKenzie, a researcher at the University of Gothenburg, Sweden, who was not involved in the study. “Studies like this highlight why we need to act now.”

—Anna Napolitano (@anna83nap; @anna83nap.bsky.social), Science Writer

Citation: Napolitano, A. (2025), Pinpointing sewage seeps in Hawaii, Eos, 106, https://doi.org/10.1029/2025EO250376. Published on 9 October 2025. Text © 2025. The authors. CC BY-NC-ND 3.0
Except where otherwise noted, images are subject to copyright. Any reuse without express permission from the copyright owner is prohibited.

A Step Toward AI Modeling of the Whole Earth System

Thu, 10/09/2025 - 13:08
Source: Journal of Geophysical Research: Machine Learning and Computation

Modelers have demonstrated that artificial intelligence (AI) models can produce climate simulations with more efficiency than physics-based models. However, many AI models are trained on past climate data, making it difficult for them to predict how climate might respond to future changes, such as further increases in the concentration of greenhouse gases.

Clark et al. have taken another step toward using AI to model complex Earth systems by coupling an AI model of the atmosphere (called the Ai2 Climate Emulator, or ACE) with a physical model of the ocean (called a slab ocean model, or SOM) to produce a model they call ACE2-SOM. They trained ACE2-SOM on output from a 100-kilometer-resolution physics-based model run across a range of climates.

In response to increased atmospheric carbon dioxide, consistent with its target model, ACE2-SOM predicted well-known responses, such as surface temperature increasing more strongly over land than over ocean, and wet areas becoming wetter and dry areas becoming drier. When the researchers compared their results with those of a 400-kilometer-resolution version of the physics-based model they were emulating, they found that ACE2-SOM produced more accurate and cost-effective predictions: ACE2-SOM used 25 times less power while providing a resolution that was 4 times finer.

But ACE2-SOM struggled when the researchers asked it to predict what would happen if atmospheric carbon dioxide levels rose rapidly (e.g., suddenly quadrupling). While the ocean surface temperature took the appropriate time to adjust, the atmosphere almost immediately shifted to the equilibrium climate under the new carbon dioxide concentration, even though physical laws would dictate a slower response.
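The lag expected from the ocean follows from the slab ocean's simple energy balance: the slab is a well-mixed layer whose temperature can change only as fast as the net surface heat flux warms it. Below is a generic sketch of that balance in Python, not the ACE2-SOM implementation, with an assumed mixed-layer depth:

RHO_SEAWATER = 1025.0    # kg/m^3
SPECIFIC_HEAT = 3990.0   # J/(kg K)
SLAB_DEPTH = 50.0        # m, assumed mixed-layer depth

def step_slab_ocean(temp_k, q_net_w_m2, dt_seconds):
    # Energy balance: rho * c_p * h * dT/dt = Q_net, so a sudden forcing
    # warms the slab only gradually, unlike the near-instant atmospheric
    # response the emulator produced.
    column_heat_capacity = RHO_SEAWATER * SPECIFIC_HEAT * SLAB_DEPTH  # J/(m^2 K)
    return temp_k + q_net_w_m2 * dt_seconds / column_heat_capacity

# A sustained 4 W/m^2 forcing warms a 50-meter slab by only ~0.6 K in a year.
print(step_slab_ocean(288.0, 4.0, 3.15e7) - 288.0)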

To become fully competitive with physics-based models, AI climate models will need to become better able to model unusual situations, the authors write. The slab ocean model used in this study is also highly simplified. So to maintain their efficiency advantage while improving realism, AI models will also need to incorporate additional parts of the Earth system, such as ocean circulation and sea ice coverage, the researchers add. (Journal of Geophysical Research: Machine Learning and Computation, https://doi.org/10.1029/2024JH000575, 2025)

—Saima May Sidik (@saimamay.bsky.social), Science Writer

Citation: Sidik, S. M. (2025), A step toward AI modeling of the whole Earth system, Eos, 106, https://doi.org/10.1029/2025EO250362. Published on 9 October 2025. Text © 2025. AGU. CC BY-NC-ND 3.0
Except where otherwise noted, images are subject to copyright. Any reuse without express permission from the copyright owner is prohibited.

Southern Ocean Salinity Could Be Triggering Sea Ice Loss

Thu, 10/09/2025 - 13:08

This is an authorized translation of an Eos article. Esta es una traducción al español autorizada de un artículo de Eos.

The Southern Ocean exists in a state of precarious balance. The sea is stratified, with cold water at the surface and relatively warm water below. It is an inherently unstable situation: all else being equal, the warm water should rise to the surface. But it is saltier, and therefore denser, so it stays below. The cold upper layer, meanwhile, is kept fresher by snowfall and by sea ice, which forms near the coast and then drifts north into the open ocean before melting.

Over the past 10 years, sea ice cover has shrunk as ocean temperatures have warmed. The rapid melting has delivered even more freshwater to the surface, which should reinforce the insulating capacity of the cold layer and allow the sea ice to expand again.

That feedback loop, however, appears to have broken down. New satellite data have revealed that the ocean around Antarctica, against all expectations, is getting saltier.

The study was published in Proceedings of the National Academy of Sciences of the United States of America (PNAS).

Measuring Where It Is Hard to Measure

Sea ice, rough seas, and permanent darkness make it practically impossible to monitor the salinity of the Southern Ocean from a ship during winter. Only in recent years has it become possible to measure Southern Ocean salinity from space. Satellites can observe the brightness temperature of the sea surface, a measure of the radiation emitted at the ocean’s surface. The fresher the water, the higher the brightness temperature.

The technique works well in warmer waters, but in cold water the brightness temperature does not vary as much as the salinity changes. Because these changes are generally quite subtle to begin with, satellites had not been able to detect them accurately in the polar regions. In these areas, sea ice also tends to cloud the signal.

Recent advances in satellite technology, however, have markedly improved the sensitivity of brightness readings, and new algorithms allow researchers to remove the noise generated by sea ice.

Oceanographer Alessandro Silvano of the University of Southampton and his colleagues analyzed the past 12 years of salinity records from the European Space Agency’s Soil Moisture and Ocean Salinity (SMOS) satellite. For Alex Haumann, a climate scientist at Ludwig Maximilian University of Munich, Germany, and a member of the team, having such wide-ranging data, covering the entire Southern Ocean at 25-kilometer resolution, is a game changer. “Because of the large coverage and the time series you can get, it is super valuable. It is really a new tool for monitoring this system,” he said.

“With warming, we expect more freshwater to flow into the ocean. So it is quite surprising that this saltier water is appearing at the surface.”

When the team saw that salinity had increased over that period, however, they could not help questioning the technology. To verify what they were observing, they turned to Argo floats, automated buoys that sample water down to a depth of 2,000 meters. A network of the floats drifts through the world’s seas, including the Southern Ocean.

To Silvano’s surprise and dismay, the floats corroborated the satellite data. “They show the same signal,” he said. “We thought, OK, this is real. It is not an error.”

When they compared the salinity data with sea ice trends, the team saw a troubling pattern. “There is a very high correlation between surface salinity and sea ice cover,” Haumann explained. “When salinity is high, there is little sea ice. When salinity is low, there is more sea ice.”

“With warming, we expect more freshwater to flow into the ocean. So it is quite surprising that this saltier water is appearing at the surface,” said Inga Smith, a sea ice physicist at the University of Otago in New Zealand who was not involved in the research.

A Changing Regime

The most plausible explanation for the increase in salinity, according to Silvano, is that the delicate layering of Antarctic waters has been disrupted and the warmer, saltier water below is now rising to the surface, making it too warm for sea ice to form.

Although he stressed that it is too early to determine the cause of the upwelling, Silvano suggested it could be driven by the strengthening of the westerly winds around Antarctica as a consequence of climate change. He said he fears that Antarctica’s natural damage-control mechanism, in which melting ice releases freshwater that in turn traps warm water at depth and ultimately allows more sea ice to form, has broken down irreversibly.

The weakening of the ocean’s stratification instead threatens to create a dangerous new feedback in which powerful convection currents draw even more warm, salty water up from the depths, leading to runaway ice loss.

“We think this could be a regime shift, a change in the ocean and ice system in which there is permanently less ice,” Silvano said.

“We have to find ways to monitor the system, because it is changing very rapidly.”

Wolfgang Rack, a glaciologist at the University of Canterbury in New Zealand who was not involved in the research, said the satellite record is not yet long enough to prove whether the rise in salinity is an anomaly or a new normal. Nevertheless, he added, “It is quite unlikely that it is a simple anomaly, because the signal is very significant.”

Zhaomin Wang, an oceanographer at Hohai University in Nanjing, China, who was not involved in the research, said the study was a “very solid result” but cautioned that it is still too early to conclusively attribute the sea ice retreat to upwelling. “It is quite difficult to disentangle cause and effect between Antarctic sea ice change and surface salinity change,” he said, “because it is a coupled system, which makes it hard to determine which process initiates the changes.”

For Haumann, the findings show how crucial the new technology is for tracking changes in the Southern Ocean. “We have to find ways to monitor the system, because it is changing very rapidly,” he said. “This is one of the most remote regions on Earth, but one of the most critical for society. Most of the excess heat in the climate system ends up in this region, and that has helped keep the planet warming at a relatively moderate rate.”

“Now we don’t really know what is going to happen with that,” he said.

—Bill Morris, Science Writer

This translation by Saúl A. Villafañe-Barajas (@villafanne) was made possible by a partnership with Planeteando and Geolatinas. Esta traducción fue posible gracias a una asociación con Planeteando y Geolatinas.

Text © 2025. The authors. CC BY-NC-ND 3.0
Except where otherwise noted, images are subject to copyright. Any reuse without express permission from the copyright owner is prohibited.

The report of the Board of Inquiry into the 14 January 2025 McCrae Landslide

Thu, 10/09/2025 - 06:22

The tribunal has concluded that a major leak in a water main, which released 40 million liters of water, triggered the failure

On 14 January 2025, the McCrae landslide occurred on the Mornington Peninsula in Australia. The site is located at [-38.34631, 144.93500]. I posted about this event at the time, noting that local residents had observed large volumes of water bubbling out of the ground in the period leading up to the failure. The landslide caused property damage and it resulted in serious injuries to one person.

In the aftermath of the landslide, the Victorian Government established a formal Independent Board of Inquiry into the events – a rare response to a landslide of this type. That Board has now published its conclusions in a report that is available online. It contains 30 recommendations, some of which are specific to this site, whilst others cover landslide management and response more generally. These have widespread application, and the report is worth a read.

The Report includes this image of the aftermath of the McCrae landslide:-

The 14 January 2025 McCrae landslide. Image from the Board of Inquiry report.

The report is admirably definitive about the causes of the landslide. It notes that there were previous periods of movement on the slope, but that the events of 14 January 2025 started with movement that was observed on 5 January 2025. It states that:

“Water was the trigger of the 5 January 2025 landslide and the McCrae Landslide. The source of that water was the burst water main at Bayview Road.”

The Board of Inquiry has calculated that the burst water main released about 40 million litres of water. The leak started at least 150 days before the landslide occurred, and there were numerous reports made to the water authority that there were problems at the site. However, the leak was not detected and repaired.

As I noted above, some of the recommendations pertain to landslide management more generally. One (Recommendation 7) highlights the need for proper protocols to respond to landslide incidents (this is a widespread problem). Others (Recommendations 18 and 21) highlight the need for better training and education with regard to landslides, whilst there is also a focus on better understanding and identification of landslide risk (Recommendations 20 and 23) and on clarity about responsibility for landslide management (Recommendations 29 and 30).

News reports in Australia indicate that the Victorian Government has accepted all the findings of the McCrae landslide inquiry. Plans are now in place to ensure that the issues at the site are addressed and that the householders who have suffered such heavy losses are treated appropriately.

Return to The Landslide Blog homepage Text © 2023. The authors. CC BY-NC-ND 3.0
Except where otherwise noted, images are subject to copyright. Any reuse without express permission from the copyright owner is prohibited.

Sharpiegate Scientist Takes the Helm at NOAA

Wed, 10/08/2025 - 18:23
Research & Developments is a blog for brief updates that provide context for the flurry of news regarding law and policy changes that impact science and scientists today.

Meteorologist and atmospheric scientist Neil Jacobs was confirmed as the new leader of NOAA on Tuesday evening.

Jacobs has a PhD in atmospheric science and worked in weather monitoring before joining NOAA in 2018.

But Jacobs is perhaps most well-known for his role in “Sharpiegate.” In 2019, during his first term, President Trump claimed that Alabama was in the path of Hurricane Dorian. After the claim met pushback, the president held a press conference and showed members of the media a map of the hurricane’s path that had been altered with a Sharpie, and NOAA issued a statement backing Trump’s claim.

President Trump displayed a map that altered the projected path of Hurricane Dorian with Sharpie. (The inked-in addition extends the white “Potential track area” and includes the Florida Panhandle, southern Georgia, and southeastern Alabama.) Credit: The White House

At the time, Jacobs was the acting NOAA administrator and had approved the unsigned statement. A National Academy of Public Administration report later found that his involvement with the statement violated NOAA’s scientific integrity policy.

At Jacobs’ confirmation hearing in July, he said that, if a similar situation arose in the future, he would handle it differently. He also said he supported proposed cuts to NOAA’s budget, and that his top priorities included staffing the National Weather Service office, reducing the seafood trade deficit, and “return[ing] the United States to the world’s leader in global weather forecast modeling capability.”

 

Jacobs made no mention of climate change in his opening statement. When asked whether he agreed that human activities are the dominant cause of observed warming over the last century, he noted “that natural signals are mixed in there” but that “human influence is certainly there” too.

The Senate voted 51-46 to confirm Jacobs, in a session during which they also confirmed a cluster of attorneys and ambassadors (including former NFL star Herschel Walker as ambassador to the Bahamas).

Carlos Martinez, a senior climate scientist at the Union of Concerned Scientists, expressed concern in a statement published before Jacobs’ confirmation hearing.

“Despite his relevant expertise and career experience, Dr. Jacobs has already demonstrated he’s willing to undermine science and his employees for political purposes as he did during the infamous ‘Sharpiegate’ scandal,” Martinez wrote.

Bluesky users reacted to the news. Credit: Michael Battalio @battalio.com via Bluesky

Others were more cautiously optimistic, noting his experience as a scientist. “It could be worse,” noted one Redditor. “He’s an actual atmospheric scientist and a known quantity.”

“I’m hopeful that he’s learned how to fight within the political system — because he is going to have to fight,” former NOAA administrator Rick Spinrad told Bloomberg in August.

—Emily Gardner (@emfurd.bsky.social), Associate Editor

These updates are made possible through information from the scientific community. Do you have a story about how changes in law or policy are affecting scientists or research? Send us a tip at eos@agu.org. Text © 2025. AGU. CC BY-NC-ND 3.0
Except where otherwise noted, images are subject to copyright. Any reuse without express permission from the copyright owner is prohibited.

How Might Leftover Corn Stalks Halt Fugitive Carbon?

Wed, 10/08/2025 - 13:12

Across North America, abandoned oil and gas wells are leaking carbon dioxide and other greenhouse gases into the atmosphere. As of 2022, there were more than 123,000 documented orphaned wells in the United States, but researchers suspect the real number may be anywhere from 310,000 to 800,000.

Abandoned wells can be plugged by filling the drill holes with water or oil, but that process requires a substantial amount of liquid, as well as liquid assets. It would take 26 billion gallons—an amount that would fill almost 40,000 Olympic-size swimming pools—to plug 120,000 wells, with each well costing up to $1 million. (That’s $120 billion in total.)
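Those figures check out with simple arithmetic, sketched below in Python using the article's numbers; the Olympic pool volume is an assumed round value of about 660,000 gallons.

WELLS = 120_000
TOTAL_GALLONS = 26e9
OLYMPIC_POOL_GALLONS = 660_000   # assumed approximate volume of one pool
COST_PER_WELL = 1_000_000        # upper-end plugging cost, USD

print(TOTAL_GALLONS / WELLS)                  # ~217,000 gallons per well
print(TOTAL_GALLONS / OLYMPIC_POOL_GALLONS)   # ~39,000 pools, "almost 40,000"
print(WELLS * COST_PER_WELL / 1e9)            # ~$120 billion in total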

“On the one hand, you have these underutilized waste products. On the other hand, you have abandoned oil wells that need to be plugged. It’s an abundant resource meeting an urgent demand.”

In a new study published in Energy Conversion and Management, researchers weighed the possibility of plugging wells and sequestering carbon with bio-oil made from vegetative waste. Their goal was to see whether the production of bio-oil could be a source of revenue for farmers while the oil itself could prevent greenhouse gases from escaping from abandoned wells.

“On the one hand, you have these underutilized waste products,” explained Mark Mba-Wright in a statement. Mba-Wright is a coauthor of the new paper, engineering professor at Iowa State University, and systems engineer at its Bioeconomy Institute. “On the other hand, you have abandoned oil wells that need to be plugged. It’s an abundant resource meeting an urgent demand.”

Biomass Bounty

The production of bio-oil starts with pyrolysis, the process in which vegetative waste decomposes under intense heat (≥1,000°F, or ~538°C) in an oxygen-free environment. Pyrolysis produces three products: a liquid (bio-oil), a solid (biochar), and a gas. The gas is used to fuel future pyrolysis efforts, biochar can be sold as a soil amendment, and storing bio-oil underground has long been touted as an effective way to sequester carbon.

The fields and forests of the United States are ripe with plants and thus vegetative waste that could be used to produce bio-oil. For example, “for every kilogram of corn that the farmer produces, an additional kilogram of corn stover or biomass is produced,” said Mba-Wright.

Corn stover—the stalks, husks, and cobs left over after harvest—is a leading source of biomass for Midwestern farmers. In the western United States, woody forest debris is more widely available. To address this diversity of resources, Mba-Wright and his colleagues investigated the bio-oil potential of corn stover, switchgrass, pine, tulip poplar, hybrid poplar, and oriented strand board (an engineered product made with wood flakes and adhesives).

In partnership with Charm Industrial, a private carbon capture company, Mba-Wright and his colleagues sought to understand whether corn stover and other feedstocks would be suitable for bio-oil production, whether the process would be economically helpful to farmers, and whether the processing-to-plugging pathway would be effective at sequestering carbon.

Small-Scale Pyrolysis Feasibility

Charm has been using pyrolysis at a commercial scale for years, said Mba-Wright, but building large plants requires significant capital investment and risk.

Instead of a large, stationary plant, the team modeled the environmental and economic feasibility of an array of mobile pyrolysis units that could be located on farms. “You can imagine a farmer might be using his tractor or his combine on his field, and on the back of the unit have one of Charm’s pyrolysis units. And instead of letting the waste go to the field, it would be processed on site,” Mba-Wright explained.

In the modeled mobile pyrolysis scenario, the researchers found that the process could generate 5.3 tons of bio-oil and 2.5 tons of biochar for every 10 tons of corn stover. This estimate is slightly lower than the yield of bio-oil produced by other pyrolysis methods but is still reasonable.

The process of taking each feedstock from harvest to well plugging was carbon negative, the scientists found. Switchgrass had the highest carbon footprint at −0.62 kilogram of carbon dioxide (CO2) per kilogram of oil, and oriented strand board had the lowest at −1.55 kilograms of CO2 per kilogram of oil. Corn stover was in the middle, weighing in at −1.18 kilograms of CO2 per kilogram of oil.
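Putting the reported yield and footprint figures together gives a rough sense of scale. The back-of-envelope Python below is illustrative only and assumes the per-kilogram footprint applies uniformly to a whole batch.

# Figures reported in the study (kg CO2 per kg of bio-oil; negative = net removal).
FOOTPRINT = {
    "switchgrass": -0.62,
    "corn_stover": -1.18,
    "oriented_strand_board": -1.55,
}

OIL_TONS_PER_10_TONS_STOVER = 5.3  # modeled mobile-pyrolysis yield for corn stover

# Net CO2 removed by the bio-oil from 10 tons of corn stover, in tons.
print(FOOTPRINT["corn_stover"] * OIL_TONS_PER_10_TONS_STOVER)  # about -6.3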

An Array of Economics

Modeling indicated that the new pyrolysis process would be economically feasible as well, costing between $83.60 and $152 per ton of CO2. (The difference accounts for the costs of including biochar sequestration.) These costs fall within the range of carbon credit commodity prices.

“The most important message is that there’s an economic case for carbon removal,” Mba-Wright said.

The scientists admit that to many individual farmers, however, this economic case might not seem like a bargain: The base capital cost of each pyrolysis unit would be $1.28 million.

“My impression was they were looking at this from the firm perspective, not exactly the farmer perspective,” said Sarah Sellars, an assistant professor of agricultural economics at South Dakota State University. “A base capital cost of $1.28 million? No farmer would invest in that. If they were going to spend $1.28 million, they’d probably buy more land.”

Mba-Wright said that although the costs are, indeed, significant, there are different options to consider. “Farmers could lease the equipment,” he suggested, adding that businesses could offer a lease-to-own option. “There are also intermediate solutions,” he added, “where you may have a unit that’s shared among farms.”

He acknowledged other challenges as well. Farmers “have a tight schedule during harvesting and planting. They may not want to have to operate another piece of equipment, so that’s something that suppliers of the unit will have to develop: a system that is easy for the farmer to use.”

Life Is Messy

On paper, sequestering carbon while halting fugitive emissions from orphan wells looks like a slam dunk.

But carbon and climate are complicated. “We can look at things from theory and economics and carbon mitigation, but then when it comes to these other variables, like the policy and the infrastructure to implement them, I think we should be cautious,” said Sellars. “Unfortunately, a lot of scientists don’t like to hear that, though. I mean, that’s why economics is called a dismal science.”

Lauren Gifford, director of the Soil Carbon Solutions Center at Colorado State University, agreed, adding that “a lot of what we’re reading in articles and things are promises or goals, but the industry just hasn’t taken off enough for us to see how these things play out at scale. A lot of what we see now is either hope or plans, and we know that real life is messy.”

—Sarah Derouin (@sarahderouin.com), Science Writer

Citation: Derouin, S. (2025), How might leftover corn stalks halt fugitive carbon?, Eos, 106, https://doi.org/10.1029/2025EO250378. Published on 8 October 2025. Text © 2025. The authors. CC BY-NC-ND 3.0
Except where otherwise noted, images are subject to copyright. Any reuse without express permission from the copyright owner is prohibited.

Magnetic “Switchback” Detected near Earth for First Time

Wed, 10/08/2025 - 13:12
Source: Journal of Geophysical Research: Space Physics

In recent years, NASA’s Parker Solar Probe has given us a close-up look at the Sun. Among the probe’s revelations was the presence of numerous kinks, or “switchbacks,” in magnetic field lines in the Sun’s outer atmosphere. These switchbacks are thought to form when solar magnetic field lines that point in opposite directions break and then snap together, or “reconnect,” in a new arrangement, leaving telltale zigzag kinks in the reconfigured lines.

McDougall and Argall now report observations of a switchback-shaped structure in Earth’s magnetic field, suggesting that switchbacks can also form near planets.

The researchers discovered the switchback while analyzing data from NASA’s Magnetospheric Multiscale mission, which uses four Earth-orbiting satellites to study Earth’s magnetic field. They detected a twisting disturbance in the outer part of Earth’s magnetosphere—the bubble of space surrounding our planet where a cocktail of charged particles known as plasma is pushed and pulled along Earth’s magnetic field lines.

Closer analysis of the disturbance revealed that it consisted of plasma both from inside Earth’s magnetic field and from the Sun. The Sun constantly emits plasma, known as the solar wind, at supersonic speeds in all directions. Most of the solar wind headed toward Earth deflects around our magnetosphere, but a small amount penetrates and mixes with the plasma already within the magnetosphere.

This illustration captures the signature zigzag shape of a solar switchback. Credit: NASA Goddard Space Flight Center/Conceptual Image Lab/Adriana Manrique Gutierrez

The researchers observed that the mixed-plasma structure briefly rotated and then rebounded back to its initial orientation, leaving a zigzag shape that closely resembled the switchbacks seen near the Sun. They concluded that this switchback most likely formed when magnetic field lines carried by the solar wind underwent magnetic reconnection with part of Earth’s magnetic field.

The findings suggest that switchbacks can occur not only close to the Sun, but also where the solar wind collides with a planetary magnetic field. This could have key implications for space weather, as the mixing of solar wind plasma with plasma already present in Earth’s magnetosphere can trigger potentially harmful geomagnetic storms and aurorae.

The study also raises the possibility of getting to know switchbacks better by studying them close to home, without sending probes into the Sun’s corona. (Journal of Geophysical Research: Space Physics, https://doi.org/10.1029/2025JA034180, 2025)

—Sarah Stanley, Science Writer

Citation: Stanley, S. (2025), Magnetic “switchback” detected near Earth for first time, Eos, 106, https://doi.org/10.1029/2025EO250374. Published on 8 October 2025. Text © 2025. AGU. CC BY-NC-ND 3.0
Except where otherwise noted, images are subject to copyright. Any reuse without express permission from the copyright owner is prohibited.

The 17 December 2024 Takhini River landslide and river-ice tsunami, Whitehorse, Yukon, Canada

Wed, 10/08/2025 - 07:23

A major slope collapse in frozen sediments in Canada highlights the role of progressive failure.

Back in January of this year, I posted a fascinating piece by Derek Cronmiller of the Yukon Geological Survey about the 17 December 2024 Takhini River landslide and river-ice tsunami, which occurred in Whitehorse, Yukon, Canada. The location of this landslide is at [60.8611, -135.4180]. As a reminder, this is a figure from his post showing the landslide:-

Surface elevation change detection comparing 2013 lidar DTM to a 2025 DSM created from UAV photos for the Takhini River landslide.

Derek has now published a more detailed article in the journal Landslides (Cronmiller 2025) that provides the definitive description of this event. One element of the article caught my attention. The piece examines in some detail the initiation of the landslide. Cronmiller (2025) observes that:-

“In the case of the 17 December 2024 Takhini landslide, all common triggers are conspicuously absent, and the timing appears to be random.”

The article concludes (rightly in all probability) that the initiating mechanism was progressive failure – i.e. that the slope underwent brittle failure through a tertiary creep mechanism. Under these circumstances no external trigger is needed.

As such, Cronmiller (2025) is much more than a simple (although fascinating) case study. As Derek writes:

“While progressive failure mechanisms are commonly discussed in rockslide and gravitational slope deformation literature, their role in producing landslides in surficial sediments is discussed relatively infrequently as acute triggers commonly mask the effect of this phenomenon’s contribution to slope failure. This case study provides an important example to show that acute triggers are unnecessary to produce landslides in dry brittle surficial sediments.”

I wholeheartedly agree.

Reference

Cronmiller, D. 2025 The 17 December 2024 Takhini River landslide and river-ice tsunami, Whitehorse, Yukon, Canada. Landslides. https://doi.org/10.1007/s10346-025-02622-8

Return to The Landslide Blog homepage Text © 2023. The authors. CC BY-NC-ND 3.0
Except where otherwise noted, images are subject to copyright. Any reuse without express permission from the copyright owner is prohibited.

New Maps of Natural Radioactivity Reveal Critical Minerals and More

Tue, 10/07/2025 - 13:09

A helicopter flies low over the Appalachian Mountains, moving slowly above the mostly forested lands of Maryland, Pennsylvania, and West Virginia. The aircraft carries a blue-and-white box holding instrumentation to detect unseen gamma ray photons created by radioactive decay within the rocks below. When a gamma ray reaches a specially designed crystal inside the box, it produces a flash of light—a reaction called scintillation—that provides information about the gamma ray’s properties and origins.

Measurements of natural, low-level radioactivity have been used in geologic applications for nearly a century.

Scintillation is the foundation of radiometric methods that provide passive and rapid assessments of the geochemical compositions of rock samples, cores, and outcrops, as well as of swaths of Earth’s surface. These methods measure ambient gamma ray energy signatures to determine which isotopes most likely produced them. Such data are then used to create maps of Earth’s surface and near subsurface where radioactive elements are present, even in low amounts.

Measurements of natural, low-level radioactivity have been used in geologic applications for nearly a century. But a new phase of open-access, high-resolution, airborne data collection funded and executed through the U.S. Geological Survey’s (USGS) Earth Mapping Resources Initiative (Earth MRI) is providing novel insights for geologic mapping, critical minerals research, mine waste studies, and other applications.

From Geiger Tubes to Spectrometry

Radiometric methods developed rapidly following the discovery of radioactivity in 1896. Only a few decades later, petroleum exploration in the 1930s made use of Geiger tubes and ionization chambers to measure gamma rays emitted from boreholes. These early methods, which counted the total number of gamma rays detected, couldn’t discern individual radioelements, but they could reveal different sedimentary layers.

By the 1940s, scintillation crystals were light enough that instruments could be carried aboard airplanes for use in total-count radiometric surveys. And by the late 1960s, gamma ray sensors were accurate enough to distinguish specific source isotopes, providing capabilities for full gamma ray spectrometry [Duval, 1980; International Atomic Energy Agency, 2003].

Airborne gamma ray spectrometry provides rapidly and continuously collected geochemical information over large areas that is impossible to obtain from the ground.

During an airborne radiometric survey, an airplane, helicopter, or drone flies back and forth in a “mow-the-lawn” pattern to produce map view estimates of radioelement concentrations. The spatial resolution of the data depends on how closely the survey flight lines are spaced and on flying height: The farther the sensor is from the ground, the wider the area that is imaged—and the lower the resolution—at each point in time.

The results represent gamma rays emitted from roughly the upper 50 centimeters of the ground surface, whether bedrock or soil; shielding of these rays by vegetation is typically limited. Although many radioelements produce gamma rays, potassium, uranium, and thorium are the primary elements evaluated because they are relatively abundant on Earth and their decay sequences generate gamma ray signatures strong enough to be measured by airborne sensors [Minty, 1997; International Atomic Energy Agency, 2003].
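In practice, spectrometers assign counts to energy windows centered on the diagnostic peaks of those three decay chains. The simplified Python sketch below illustrates window counting; the window bounds are the commonly used standard values, and operational processing also corrects for background, Compton scattering between windows, and flying height.

# Diagnostic peaks: 40K at 1.46 MeV, 214Bi (uranium chain) at 1.76 MeV,
# and 208Tl (thorium chain) at 2.61 MeV.
ENERGY_WINDOWS_MEV = {
    "K": (1.37, 1.57),
    "U": (1.66, 1.86),
    "Th": (2.41, 2.81),
}

def window_counts(photon_energies_mev):
    # Tally how many detected photons fall in each element's energy window.
    counts = {element: 0 for element in ENERGY_WINDOWS_MEV}
    for energy in photon_energies_mev:
        for element, (low, high) in ENERGY_WINDOWS_MEV.items():
            if low <= energy <= high:
                counts[element] += 1
    return counts

print(window_counts([1.46, 1.76, 2.61, 0.9, 1.45]))  # {'K': 2, 'U': 1, 'Th': 1}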

Airborne gamma ray spectrometry provides rapidly and continuously collected geochemical information over large areas that is impossible to obtain from the ground. These surveys are often paired with simultaneous collection of magnetic data because the optimal flying speeds and heights are similar for both. These methods, when combined with ground truth geologic observations and sample analyses, offer a powerful tool for geologic mapping.

Resource Exploration Drives Data Collection

The earliest airborne radiometric datasets were total-count surveys collected primarily for uranium exploration by Australia, Canada, the Soviet Union, and the United States immediately after World War II. In the 1970s, continued interest in uranium led to initiation of the National Uranium Resource Evaluation (NURE), which supported airborne gamma ray spectrometry surveys measuring potassium, thorium, and uranium over the conterminous United States and parts of Alaska. These data, along with concurrent magnetic data, were released publicly. Around the same time, similar interest in Australia and Canada motivated regional-scale coverage in those countries.

To achieve national coverage, the NURE surveys were designed with very widely spaced flight lines 5–10 kilometers apart, and only a few areas were chosen for higher-resolution data collection. The data were useful primarily for reconnaissance rather than detailed exploration.

In the decades following the NURE surveys, sensor and processing technology improved remarkably, but only a limited number of public high-resolution radiometric surveys—covering about 1% of the country’s area—were flown in the United States (Figure 1). The lack of radiometric data was even more severe than that of magnetic data, which by 2018 covered almost 5% of the country [Drenth and Grauch, 2019]. Magnetic surveys were more common, perhaps because of their use for mapping buried faults, folds, and other geologic features in studies of mineral resources, natural hazards, and water resources (Figure 1).

Fig. 1. This map shows the areas covered by high-resolution airborne surveys across the conterminous United States before and since the launch of the Earth Mapping Resources Initiative (MRI). Radiometric surveys typically also include magnetic data collection, but the converse is not always the case. (“high resolution” is defined here as “Rank 1” or “Rank 2” using the nomenclature of Johnson et al. [2019] and Drenth and Grauch [2019] for radiometric and magnetic surveys, respectively. These rankings consider a variety of survey conditions, including the flight line spacing, flying height, whether GPS navigation was used, and whether data were recorded digitally.)

Since 2019, Earth MRI has been addressing this data scarcity, with the goal of improving knowledge of domestic critical mineral resources and the geologic regimes, or frameworks, within which they form and concentrate. Critical mineral resources such as lithium, graphite, rare earth elements (REEs), and many others are commodities that are essential for the U.S. economy and security but are at risk from supply chain disruptions. They are key components in numerous technologies, from cell phones and medical devices to advanced defense systems and renewable energy technologies.

These datasets and interpretations can also inform studies in other disciplines, such as of earthquake hazards.

Earth MRI takes a multidisciplinary approach that includes geologic mapping and collection of new data using lidar, airborne geophysical methods, and analyses of sample geochemistry, mineralogy, and geochronology. These datasets and interpretations, all freely and publicly available, provide broad information about critical minerals, their mineralizing systems, and their geologic frameworks. Such information can also inform studies in other disciplines, such as of earthquake hazards, and is especially useful for advising land use planners (e.g., in making decisions about setting areas for natural preservation, grazing, and recreation) and for informing and reducing the economic risk of costly mineral resource exploration.

Magnetic and radiometric data are the foundation of Earth MRI’s airborne geophysical coverage because they provide valuable information about geology, including areas under cover and vegetation, and their relatively low cost enables surveying of large areas. Additional funding from the 2021 Infrastructure Investment and Jobs Act has facilitated targeted studies using both hyperspectral and electromagnetic methods, which provide complementary imaging.

Bird’s-Eye Views of Geology

Fig. 2. Airborne radiometric data collected over the Appalachian Valley and Ridge Province in Maryland, Pennsylvania, and West Virginia are shown here using a ternary color scale (magenta = potassium (K), cyan = thorium (Th), yellow = uranium (U)). These data, which are available from the USGS, highlight different lithologies of shallow and outcropping sedimentary layers. The image is draped over a shaded relief image of lidar-derived elevation for context.

Heavily folded and faulted sedimentary rocks of the Appalachian Valley and Ridge Province provide a dramatic example of the value of Earth MRI’s data collection for geologic applications. Earth MRI supported new airborne magnetic and radiometric data collection in 2022–2023 in this region to better understand the geologic framework of critical minerals in metal-bearing shales and manganese-iron sedimentary layers (Figure 2).

The data illustrate a diverse array of lithologies in close proximity (sometimes <1 kilometer apart), reflecting the structure and stratigraphy of layered sedimentary rocks. They reveal outcrops of shale formations containing varying amounts of potassium, thorium, or both, highlighting compositional information. Weathered carbonates and carbonate regolith show only elevated levels of potassium, whereas quartz sandstone is mostly devoid of radioelements except for sparse patches of uranium enrichment.
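The ternary rendering in Figure 2 can be reproduced generically: normalize the potassium, thorium, and uranium grids and treat them as the magenta, cyan, and yellow channels of an image. The Python sketch below illustrates the display convention, not the USGS processing chain.

import numpy as np

def ternary_radiometric_image(k, th, u):
    # k, th, u: 2D grids of estimated radioelement concentrations.
    # Returns an RGB array in which potassium-rich pixels appear magenta,
    # thorium-rich pixels cyan, and uranium-rich pixels yellow.
    def normalize(grid):
        grid = np.asarray(grid, dtype=float)
        lo, hi = np.nanpercentile(grid, [1, 99])
        return np.clip((grid - lo) / (hi - lo), 0.0, 1.0)

    k_n, th_n, u_n = normalize(k), normalize(th), normalize(u)
    # Interpret (Th, K, U) as cyan, magenta, yellow and convert CMY to RGB.
    return np.stack([1.0 - th_n, 1.0 - k_n, 1.0 - u_n], axis=-1)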

Accurate interpretation of airborne radiometric datasets requires complementary geologic knowledge from other sources because the presence of potassium, thorium, and uranium can be linked to several different minerals. For example, in hard rock terranes, elevated potassium often indicates mica and potassium feldspar in granites, granodiorites, or felsic volcanic rocks. However, elevated potassium may also indicate a history of hydrothermal alteration that formed potassium-rich minerals associated with economically significant ores, such as gold-copper porphyry deposits [e.g., Shives et al., 2000].

Radiometric detections of potassium can illuminate broad transport pathways from sites of erosion to sites of deposition.

In sedimentary environments, elevated potassium measurements may represent minerals such as illite. Or they may indicate recently eroded sands (from which potassium has not been dissolved and mobilized), such as those found in river floodplains. In those scenarios, radiometric detections of potassium can therefore illuminate broad transport pathways from sites of erosion to sites of deposition [Shah et al., 2021].

Colocated magnetic field data can provide needed complementary constraints on geologic interpretations, especially within hard rock terranes. For example, both mafic rocks and quartz sandstone usually show similarly low potassium, thorium, and uranium signatures. However, mafic rocks often express prominent magnetic anomalies, unlike quartz sandstone, allowing scientists to easily distinguish the two.

Critical Mineral Frameworks

In addition to their use for fundamental geologic mapping, new Earth MRI datasets are providing key information on domestic critical minerals—and in some cases imaging them directly. This is especially the case for REEs because many minerals that host REEs also contain thorium. For example, at California’s Mountain Pass, presently the only site of active REE production in the United States, airborne radiometric data show elevated thorium, uranium, and potassium concentrations over mineralized areas [Ponce and Denton, 2019].

Airborne radiometric surveys have also led to discoveries of critical minerals. In one case, data from a remote part of northern Maine revealed a highly localized thorium and uranium anomaly. The finding motivated a subsequent effort in which a multidisciplinary and multi-institutional team quickly investigated the area on foot. By combining geophysical data, geologic mapping, and analyses of rock samples, they discovered an 800- × 400-meter area with high concentrations of REEs, niobium, and zirconium, all considered critical commodities [Wang et al., 2023]. The depth of the mineralization, and thus the potential economic value, is not yet known, but a deposit in Australia with similar rock type, composition, and areal extent has been valued in the billions of dollars.

In another study, researchers used Earth MRI radiometric data collected over Colorado’s Wet Mountains to map REE mineralization in carbonatite dikes, veins in alkaline intrusions, and syenite dikes [Magnin et al., 2023]. Additional analyses of thorium levels and magnetic anomalies provided insights into the geologic environment in which these REE-bearing features formed, namely, that the mineralization likely occurred as tectonic forces stretched and rifted the crust in the area.

And over South Carolina’s Coastal Plain sediments, Shah et al. [2021] imaged heavy mineral sands containing critical commodities: REEs, titanium, and zirconium (Figure 3). These researchers are developing new constraints on critical mineral resource potential within individual geologic formations by evaluating the statistical properties of thorium anomalies.

Fig. 3. Critical minerals in ancient shoreline sands near Charleston, S.C., are highlighted in this map of airborne radiometric thorium data draped over lidar-derived shaded relief topography. Thorium is present in the mineral monazite, which also contains rare earth elements.

Detecting Impacts from—and on—Humans

A new frontier in critical mineral studies focuses on the potential to tap unconventional resources, especially those present in mining waste and tailings.

A new frontier in critical mineral studies focuses on the potential to tap unconventional resources, especially those present in mining waste and tailings. Mining and mine waste features are scattered across the United States, sometimes presenting environmental or public health hazards. If critical minerals could be reclaimed economically from waste, proceeds could help to fund cleanup actions.

Early work with airborne data on this frontier focused, for example, on examining anomalous thorium concentrations in tailing piles from abandoned iron mines in the eastern Adirondack Highlands of upstate New York. Researchers found that the piles that contained REEs in the mineral apatite expressed thorium anomalies, whereas other piles were devoid of these critical commodities.

More recently, scientists identified uranium anomalies in datasets collected over stacks of phosphate mining waste, known as phosphogypsum stacks or “gypstacks,” in Florida (Figure 4). And data collected over coal mining waste sites in the Appalachian Mountains show elevated potassium, thorium, and uranium. Mine waste in both these areas is now being studied more closely as possible REE resources.

Fig. 4. Uranium anomalies (yellow and red) highlight mining areas, waste stacks, and, in some areas, dirt roads in this image of airborne radiometric data collected over the phosphate mining district in central Florida. Credit: background imagery: Google, Airbus; data: USGS

Radiometric surveys can also shed light on natural geologic hazards that affect human health. Radon gas, a well-known risk factor for lung cancer, is produced from the breakdown of radioelements, especially uranium, in soil and rock. By imaging areas with elevated uranium, radiometric surveys can delineate areas with higher radon risk.

In the 1980s, the U.S. Department of Energy commissioned a total-count survey over a small section of the Reading Prong in Pennsylvania, a geologic unit with known instances of uranium that also extends into New Jersey and New York, to map radon hazards. New Earth MRI datasets collected west of that part of Pennsylvania and elsewhere cover much larger areas and distinguish uranium, thorium, and potassium, providing a means for extensive radon risk evaluation.

Much More to Explore

Earth MRI airborne magnetic and radiometric surveys funded as of September 2025 have provided a roughly 18-fold increase in publicly available high-resolution radiometric data compared with what was available in 2018, and additional surveys are planned for 2026. However, the new total still represents only about 19% of the area of the United States (including Alaska, Hawaii, and Puerto Rico), so there is still a long way to go to achieve full national coverage.

A drone collects radiometric data over mining waste piles in southwestern New Mexico. These and other mine waste piles are being studied to see whether they hold critical mineral resources. Credit: Anjana Shah/USGS, Public Domain

The new open-access data present a wide variety of opportunities for study, from qualitative revisions of geologic maps to quantitative analyses that address questions about critical mineral resources and other societally important topics. These data are also inspiring innovative approaches, such as drone-based surveys using new ultralightweight sensors that can provide unprecedented spatial resolution, with uses in detailed mine waste studies, radon evaluation, and other applications [e.g., Gustafson et al., 2024]. Another new approach combines airborne radiometric data with sample geochemical data to evaluate critical minerals in clays [Iza et al., 2018].

Other novel applications that encourage economic development, maintain national security, and enhance public safety are waiting to be developed and explored.

Acknowledgments

We thank Tom L. Pratt and Dylan C. Connell for helpful reviews. Any use of trade, firm, or product names is for descriptive purposes only and does not imply endorsement by the U.S. government.

References

Drenth, B. J., and V. J. S. Grauch (2019), Finding the gaps in America’s magnetic maps, Eos, 100, https://doi.org/10.1029/2019EO120449.

Duval, J. S. (1980), Radioactivity method, Geophysics, 45(11), 1,690–1,694, https://doi.org/10.1190/1.1441059.

Gustafson, C., et al. (2024), Mine waste identification and characterization using airborne and uncrewed aerial systems radiometric geophysical surveying, Geol. Soc. Am. Abstr. Programs, 56(5), 1-6, https://doi.org/10.1130/abs/2024AM-403640.

International Atomic Energy Agency (2003), Guidelines for Radioelement Mapping Using Gamma Ray Spectrometry Data, IAEA-TECDOC-1363, Vienna.

Iza, E. R. H. F., et al. (2018), Integration of geochemical and geophysical data to characterize and map lateritic regolith: An example in the Brazilian Amazon, Geochem. Geophys. Geosyst., 19(9), 3,254–3,271, https://doi.org/10.1029/2017GC007352.

Johnson, M. R., et al. (2019), Airborne geophysical survey inventory of the conterminous United States, Alaska, Hawaii, and Puerto Rico (ver. 4.0, April 2023), data release, U.S. Geol. Surv., Reston, Va., https://doi.org/10.5066/P9K8YTW1.

Magnin, B. P., Y. D. Kuiper, and E. D. Anderson (2023), Ediacaran-Ordovician magmatism and REE mineralization in the Wet Mountains, Colorado, USA: Implications for failed continental rifting, Tectonics, 42(4), e2022TC007674, https://doi.org/10.1029/2022TC007674.

Minty, B. (1997), Fundamentals of airborne gamma-ray spectrometry, AGSO J. Aust. Geol. Geophys., 17, 39–50.

Ponce, D. A., and K. M. Denton (2019), Airborne radiometric maps of Mountain Pass, California, U.S. Geol. Surv. Sci. Invest. Map, 3412-C, scale 1:62,500, https://doi.org/10.3133/sim3412C.

Shah, A. K., et al. (2021), Mapping critical minerals from the sky, GSA Today, 31(11), 4–10, https://doi.org/10.1130/GSATG512A.1.

Shives, R. B., B. W. Charbonneau, and K. L. Ford (2000), The detection of potassic alteration by gamma-ray spectrometry—Recognition of alteration related to mineralization, Geophysics, 65, 2,001–2,011, https://doi.org/10.1190/1.1444884.

Wang, C., et al. (2023), A recently discovered trachyte-hosted rare earth element-niobium-zirconium occurrence in northern Maine, USA, Econ. Geol., 118(1), 1–13, https://doi.org/10.5382/econgeo.4993.

Author Information

Anjana K. Shah (ashah@usgs.gov), U.S. Geological Survey, Lakewood, Colo.; Daniel H. Doctor, U.S. Geological Survey, Reston, Va.; Chloe Gustafson, U.S. Geological Survey, Lakewood, Colo.; and Alan D. Pitts, U.S. Geological Survey, Reston, Va.

Citation: Shah, A. K., D. H. Doctor, C. Gustafson, and A. D. Pitts (2025), New maps of natural radioactivity reveal critical minerals and more, Eos, 106, https://doi.org/10.1029/2025EO250370. Published on 7 October 2025. Text not subject to copyright in the United States.
Except where otherwise noted, images are subject to copyright. Any reuse without express permission from the copyright owner is prohibited.

Ice Diatoms Glide at Record-Low Temperatures

Tue, 10/07/2025 - 13:08

Hidden in Arctic sea ice are microscopic organisms that do more than eke out a meager existence on scraps of light filtered through their frozen habitat. New research has shown that ice diatoms have adapted to move efficiently through the ice, allowing them to navigate to better sources of light and nutrients. During in situ and laboratory experiments, ice diatoms glided through the ice roughly 10 times faster than diatoms from temperate climates and kept gliding even at −15°C, the lowest temperature at which such movement has been recorded for a single-celled organism.

“People often think that diatoms are at the mercy of their environment,” said Manu Prakash, a bioengineering researcher at Stanford University in California and lead researcher on this discovery. “What we show in these ice structures is that these organisms can actually move rapidly at these very cold temperatures to find just the right home. It just so happens that home is very cold.”

These findings, published in the Proceedings of the National Academy of Sciences of the United States of America, may help scientists understand how microorganisms and polar ecosystems respond to climate change.

Gliding Through Life

Researchers drilled several cores from sea ice in the Chukchi Sea to understand the movement patterns of diatoms. Credit: Natalie Cross

Diatoms are microscopic, single-celled algae that photosynthesize. Up to 2 million species of diatoms produce at least 20% of the oxygen we breathe and form the backbone of ecosystems throughout the world, from the humid tropics to the frigid poles. Scientists have known since the 1960s that diatoms live within and move through the ice matrix but have been unable to decipher how they do it.

“Ice is an incredible porous architecture of highways,” Prakash explained. “Light comes from the top in the ice column, and nutrients come from the bottom. There is an optimal location that [a diatom] might want to be, and that can only be possible with motility.” (Motility is the ability of an organism to expend energy to move independently.)

Prakash and a team of researchers sought to observe ice diatoms’ movements in situ and so set off for the Chukchi Sea aboard the R/V Sikuliaq. On a 45-day expedition in 2023, they collected several cores from young sea ice, extracted diatoms from the cores, and studied the diatoms’ movements on and within icy surfaces under a temperature-controlled microscope customized for subzero temperatures.

At temperatures down to −15°C, Arctic ice diatoms actively glided on ice surfaces and within ice channels. The researchers said that this is the lowest temperature at which gliding motility has been observed for a eukaryotic cell.

“Life is not under suspension in these ultracold temperatures. Life is going about its business.”

“Life is not under suspension in these ultracold temperatures,” Prakash said. “Life is going about its business.”

“This is a notable discovery,” said Julia Diaz, a marine biogeochemist at Scripps Institution of Oceanography in San Diego. “These diatoms push the lowest known temperature limit of motility to a new extreme, not just compared to temperate diatoms, but also compared to more distantly related organisms.” Diaz was not involved with this research.

“Since the 1960s, when J. S. Bunt first described sea ice communities and observed that microbes were concentrated in specific layers of the ice, it has been obvious that they must have a means to navigate through ice matrices,” said Brent Christner, an environmental microbiologist at the University of Florida in Gainesville who also was not involved with this research. “This study makes it clear that some microbes traverse gradients in the ice by gaining traction on one of the most slippery surfaces known!”

“While these diatoms are clearly ice specialists, they nevertheless appear to be equipped with the equivalent of all-season tires!”

The team compared the movements of ice diatoms to those of diatoms from temperate climates. On both icy and glass surfaces under the same conditions, ice diatoms moved roughly 10 times faster than temperate diatoms. In cold conditions on icy surfaces, temperate diatoms completely lost their ability to move and simply drifted along passively. These experiments show that ice diatoms have adapted specifically to their extreme environment, evolving a way to actively seek out better sources of light in order to thrive.

“I was surprised the ice diatoms were happily as motile on ice as glass, and much faster on glass than the temperate species examined,” Christner said. “While these diatoms are clearly ice specialists, they nevertheless appear to be equipped with the equivalent of all-season tires!”

On ice (left) and on glass (right) surfaces, ice diatoms (top) move faster than temperate diatoms (bottom). All experiments here were conducted at 0°C and are sped up 50 times to highlight the diatoms’ different gliding speeds. Credit: Zhang et al., 2025, https://doi.org/10.1073/pnas.2423725122, CC BY-NC-ND 4.0

Can Diatoms Adapt to Climate Change?

The Arctic is currently experiencing rapid environmental changes, warming several times faster than the rest of the world. Arctic climate change harms not only charismatic megafauna like polar bears, Prakash said, but microscopic organisms, too.

“These ecosystems operate in a manner that every one of these species is under threat.”

Diatoms are “the microbial backbone of the entire ecosystem,” Prakash said. “These ecosystems operate in a manner that every one of these species is under threat.”

Prakash added that he hopes future conservation efforts focus holistically on Arctic ecosystems from the micro- to macroscopic. Future work from his own group aims to understand how diatoms’ gliding ability changes under different chemical conditions like salinity, as well as how the diatoms shape their icy environment.

“Scientists used to think that sea ice was simply an inactive barrier on the ocean surface, but discoveries like these reveal that sea ice is a rich habitat full of biological diversity and innovation,” Diaz said. “Sea ice extent is expected to decline as climate changes, which would challenge these diatoms to change the way they move and navigate their polar environment. It is troubling to think of the biodiversity that would be lost with the disappearance of sea ice.”

—Kimberly M. S. Cartier (@astrokimcartier.bsky.social), Staff Writer

Citation: Cartier, K. M. S. (2025), Ice diatoms glide at record-low temperatures, Eos, 106, https://doi.org/10.1029/2025EO250371. Published on 7 October 2025. Text © 2025. AGU. CC BY-NC-ND 3.0
Except where otherwise noted, images are subject to copyright. Any reuse without express permission from the copyright owner is prohibited.

Fatal landslides in July 2025

Tue, 10/07/2025 - 06:19

In July 2025, I recorded 71 fatal landslides worldwide, with the loss of 214 lives.

Each year, July is one of the key months for the occurrence of fatal landslides globally as the Asian monsoon season cranks up to full strength. Thus, it is time to provide an update on fatal landslides that occurred in July 2025. The update draws on my dataset of landslides that cause loss of life, compiled following the methodology of Froude and Petley (2018). At this point, the monthly data are provisional. I will, when I have time, write a follow-up paper to the 2018 one describing the situation since then.

In July 2025, I recorded 71 fatal landslides worldwide, with the loss of 214 lives. The average for July in the period from 2004 to 2016 was 58.1 fatal landslides, so this is considerably higher than the long-term mean, although it is much lower than the total for July 2024, which saw 99 fatal landslides.

So, this is the monthly total graph for 2025 to the end of July:-

The number of fatal landslides to the end of July 2025 by month.

Plotting the data by pentad to the end of pentad 43 (29 July), the trend looks like this (with the exceptional year of 2024, plus the 2004-2016 mean, for comparison):-

The number of fatal landslides to 29 July 2025, displayed in pentads. For comparison, the long term mean (2004 to 2016) and the exceptional year of 2024 are also shown.

The data show that the acceleration in the rate of fatal landslides occurred much later in the annual cycle than was the case in 2024. It was only late in the month that the rate started to approach that of 2024. Indeed, for much of the month, the fatal landslide rate (the gradient of the line) was broadly similar to the long-term mean, albeit with a much higher starting point.
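For readers who want to build this kind of plot from their own event records, the sketch below bins events into pentads (5-day periods) and accumulates the counts. The record format is hypothetical, and the pentad convention shown (pentad 1 = days 1 to 5 of the year) may differ slightly from the binning used for the figures here.

```python
# Minimal sketch: aggregate fatal-landslide records into pentads (5-day bins)
# and build the cumulative event count plotted against earlier years.
from datetime import date

# Hypothetical records: (event date, number of fatalities)
records = [(date(2025, 1, 3), 2), (date(2025, 7, 8), 18), (date(2025, 7, 28), 5)]

def pentad(d: date) -> int:
    """Pentad number within the year: days 1-5 -> 1, days 6-10 -> 2, and so on."""
    return (d.timetuple().tm_yday - 1) // 5 + 1

counts = [0] * 73  # 73 pentads cover a 365-day year
for d, _fatalities in records:
    counts[pentad(d) - 1] += 1  # count landslide events, not fatalities

cumulative, total = [], 0
for c in counts[:43]:  # up to pentad 43, i.e., late July
    total += c
    cumulative.append(total)
print(cumulative[-1])  # events recorded by the end of pentad 43
```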

But note also the distinct acceleration late in the month, which makes what then happened in August 2025 particularly interesting. Watch this space.

Notable events included the 8 July 2025 catastrophic debris flow at Rasuwagadhi in Nepal, but no single landslide killed more than 18 people in July 2025.

I often draw a link between the rate of fatal landslides and the surface air temperature. The Copernicus data shows that July 2025 was “0.45°C warmer than the 1991-2020 average for July with an absolute surface air temperature of 16.68°C.” It was the “third-warmest July on record, 0.27°C cooler than the warmest July in 2023, and 0.23°C cooler than 2024, the second warmest.”

Reference

Froude M.J. and Petley D.N. 2018. Global fatal landslide occurrence from 2004 to 2016. Natural Hazards and Earth System Sciences 18, 2161-2181. https://doi.org/10.5194/nhess-18-2161-2018

Return to The Landslide Blog homepage Text © 2023. The authors. CC BY-NC-ND 3.0
Except where otherwise noted, images are subject to copyright. Any reuse without express permission from the copyright owner is prohibited.

Satellite Scans Can Estimate Urban Emissions

Mon, 10/06/2025 - 12:52
Source: AGU Advances

Because the hustle and bustle of cities is driven largely by fossil fuels, urban areas have a critical role to play in addressing global greenhouse gas emissions. Currently, cities contribute around 75% of global carbon dioxide (CO2) emissions, and urban populations are projected only to grow in the coming decades. Members of the C40 Cities Climate Leadership Group, a network of nearly 100 cities that together make up 20% of the global gross domestic product, have pledged to work together to reduce urban greenhouse gas emissions. Most of the cities have pledged to reach net zero emissions by 2050.

To meet these pledges, cities must accurately track their emissions levels. Policymakers in global cities have been relying on a “bottom-up” approach, estimating emissions levels on the basis of activity data (e.g., gasoline sales) and corresponding emissions factors (such as the number of kilograms of carbon emitted from burning a gallon of gasoline). However, previous studies found that such estimates can vary depending on which datasets are used, with especially large discrepancies in some regions.

Ahn et al. tried a “top-down” approach, using space-based observations to estimate emissions for 54 C40 cities.

They used NASA’s Orbiting Carbon Observatory 3 (OCO-3) instrument on board the International Space Station (ISS) to collect high-resolution data over global cities. OCO-3 uses a pair of mirrors called the Pointing Mirror Assembly to scan atmospheric CO2 levels as the ISS flies over a target city.

The researchers found that for the 54 cities, the satellite-based estimates match bottom-up estimates within 7%. On the basis of their measurements, the researchers also found that bottom-up techniques tended to overestimate emissions for cities in central East, South, and West Asia but to underestimate emissions for cities in Africa, East and Southeast Asia, Oceania, Europe, and North America.

The team also examined the link between emissions, economies, and populations. They found that wealthier cities tended to have less carbon-intensive economies. For example, North American cities emit 0.1 kilogram of CO2 within their boundaries per U.S. dollar (USD) of economic output, whereas African cities emit 0.5 kilogram of CO2 per USD. They also found that residents of bigger cities emit less CO2—cities with under 5 million people emit 7.7 tons of CO2 per person annually, whereas cities with more than 20 million people emit 1.8 tons per person, for instance.
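Both metrics quoted above are simple ratios of citywide totals. The sketch below computes them for a made-up example city; the city and its numbers are hypothetical and are not drawn from the study.

```python
# Back-of-the-envelope metrics discussed above: carbon intensity of the economy
# and per capita emissions. The example city and its totals are hypothetical.
annual_co2_tons = 50_000_000   # total CO2 emitted within city boundaries, tons per year
gdp_usd = 400_000_000_000      # economic output, USD per year
population = 8_000_000

carbon_intensity_kg_per_usd = annual_co2_tons * 1000 / gdp_usd
per_capita_tons = annual_co2_tons / population

print(f"{carbon_intensity_kg_per_usd:.2f} kg CO2 per USD")    # ~0.12, near the North American figure cited
print(f"{per_capita_tons:.1f} tons CO2 per person per year")  # ~6.2
```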

The authors note that their findings show that satellite data may help cities better track emissions, improve global monitoring transparency, and support global cities’ efforts to mitigate emissions. (AGU Advances, https://doi.org/10.1029/2025AV001747, 2025)

—Sarah Derouin (@sarahderouin.com), Science Writer

Citation: Derouin, S. (2025), Satellite scans can estimate urban emissions, Eos, 106, https://doi.org/10.1029/2025EO250373. Published on 6 October 2025. Text © 2025. AGU. CC BY-NC-ND 3.0
Except where otherwise noted, images are subject to copyright. Any reuse without express permission from the copyright owner is prohibited.

Planets Might Form When Dust “Wobbles” in Just the Right Way

Mon, 10/06/2025 - 12:52

To start forming a planet, you need a big disk of dust and gas…and a bit of oomph. We see this formation taking place in protoplanetary disks in young star systems, and the same process must have formed the planets in our own solar system, too.

How do you begin planet formation inside a disk? What is the oomph?

But how do you begin planet formation inside a disk? What is the oomph?

A new set of experiments led by Yin Wang at the Princeton Plasma Physics Laboratory (PPPL) in New Jersey suggests a process called magnetorotational instability (MRI) may be a contributing factor. MRI describes how magnetic fields interact with the rotating, electrically charged gas in a star’s disk.

MRI has long been thought to play a role in disks by pushing charged gas toward young stars, which consolidate it in a process called accretion. This new research shows MRI can also trigger “wobbles” in the protoplanetary disk that begin the planet formation process.

Taking Metals for a Spin

Traditional ways of accreting dust in a young disk include pressure bumps, said Thanawuth Thanathibodee, an astrophysicist at Chulalongkorn University in Thailand who was not involved in the new research. The bumps are caused by processes such as “the transition between the gas phase and solid phase of some molecules…. When you have a pressure bump, you can accumulate more solid mass, and from there start forming a planet.”

Wang’s paper shows another way the accretion process might begin.

In his team’s experiments at PPPL, a cylinder was placed inside another cylinder, separated by about 32 liters (8.4 gallons) of the liquid metal Galinstan, the brand name of an alloy of gallium, indium, and tin. By spinning the two cylinders at different speeds exceeding 2,000 rotations per minute, scientists churned the liquid metal in a washing machine–like fashion, causing it to swirl through the cavity and mimic how gas swirls in a young star’s disk.

The team measured changes in the magnetic field of the Galinstan as it moved around the cylinders. They found that adjacent regions of the liquid metal moving at different speeds met and formed what are known as free shear layers. Within these layers, some parts of the fluid slow down and others speed up, a hallmark of MRI.

In a protoplanetary disk, similar layers arise where different parts of the disk’s gas flow meet. These interfaces cause turbulence that pushes material (dust) toward or away from the star and create pockets where dust can accumulate and eventually form planets.

Wang said his work shows MRI-induced wobbling might be happening more often than expected, suggesting “there might be more planets across the universe.”

The work was published in Physical Review Letters earlier this year.

Building on a Successful Experiment

The contribution of MRI to protoplanetary disk formation was previously proposed but was not shown experimentally until now. As such, Thanathibodee said the new work is “very interesting.”

In future experiments, Wang hopes to try different rotation speeds to better understand the free shear layers and examine how MRI is produced. “We’ve found this mechanism is way easier [than thought], but the explored parameter space is still limited,” he said.

Still, MRI isn’t a slam dunk explanation for planet formation. To make the magnetic fields that MRI relies on, the central star must ionize the swirling gas in a protoplanetary disk into a plasma, a process that likely takes place near the star itself. But material close to the star quickly falls onto the star and thus is unavailable to make planets.

If the process instigated by MRI is encountered too close to the star, the researchers found, “the material will be absorbed,” explained Wang. “But if this mechanism happens away from the star, then it helps planet formation.”

MRI must act on a timescale shorter than that of accretion if it is to contribute to planet formation in the disk, but by how much?

“Nature is complicated, but what our results show is this instability is likely more common than we used to think.”

“My sense is that in order for some planets to form, this [MRI] process needs to be prolonged,” said Thanathibodee. “Otherwise, all the mass will get accreted in a short timescale.”

If MRI does occur in a “sweet region” not too close to or not too far from the young star, said Wang, it could play a role in planet formation. “It’s a plausible candidate for explaining a solar system like ours,” he said. “Nature is complicated, but what our results show is this instability is likely more common than we used to think.”

This same process might drive accretion around black holes too, said Wang, where magnetic fields are much stronger.

—Jonathan O’Callaghan (@astrojonny.bsky.social), Science Writer

Citation: O’Callaghan, J. (2025), Planets might form when dust “wobbles” in just the right way, Eos, 106, https://doi.org/10.1029/2025EO250372. Published on 6 October 2025. Text © 2025. The authors. CC BY-NC-ND 3.0
Except where otherwise noted, images are subject to copyright. Any reuse without express permission from the copyright owner is prohibited.

A late monsoon sting in the tale in the Himalayas

Mon, 10/06/2025 - 07:25

Very heavy rainfall across Nepal, NE. India and Bhutan has triggered landslides that have killed at least 60 people.

Over the last few days, parts of the Himalayas have been hit by very high levels of rainfall, causing large numbers of damaging landslides. The picture is not yet fully clear, but Nepal and Bhutan, and Darjeeling in India, have been particularly badly hit.

Over on the wonderful Save the Hills blog, Praful Rao has documented the rainfall in Darjeeling – for example, on 4 October 2025 Kurseong recorded 393 mm of rainfall, whilst in Kalimpong a peak intensity of about 150 mm per hour was recorded. The scale of this event is well captured by the Global Precipitation Measurement dataset from NASA – this is 24 hour precipitation to 14:30 UTC on 5 October 2025:-

24 hour precipitation to 14:30 UTC on 5 October 2025 for South Asia. Data from NASA.

News reports from Nepal indicate that 47 people have been killed and more are missing. Of these fatalities, 37 are reported to have been the result of landslides in Ilam. The Kathmandu Post has started to document the events:-

“According to the District Administration Office, five people died in Suryodaya Municipality, six in Ilam Municipality, six in Sandakpur Rural Municipality, three in Mangsebung, eight in Maijogmai, eight in Deumai Municipality, and one in Phakphokthum Rural Municipality. Among the deceased are 17 men and 20 women, including eight children, the office said in its official report.”

The picture in NE India is also dire. In Darjeeling, a series of landslides have killed 23 people. These include 11 fatalities in Mirik and five in Nagrakata. Praful Rao has indicated that he will provide more detail on the landslides in Darjeeling on the Save the Hills blog in due course.

The rains have also caused extensive damage in Bhutan. At least five fatalities have been reported, mostly in “flash floods”. In this landscape, the term flash flood is usually used to describe channelised debris flows.

Of great concern is the reported situation at the Tala Hydroelectric Power Station dam on the Wangchu river in the Chukha district of Bhutan. Reports indicate that water has overflowed the structure due to a failure of the dam gates. According to Wikipedia, this dam is 92 metres tall, so a collapse would be a significant event. This is Bhutan’s largest hydropower facility, and dams are not usually designed to withstand a major overtopping event.

The situation across this region will be unclear for a while, but loyal readers will remember the late monsoon event in Nepal in 2024, in which over 200 people were killed. These events reflect changes in patterns of rainfall associated with anthropogenic climate change and changes in the pattern of vulnerability associated with poor development and construction activities. Neither is likely to improve in the next decade and beyond.

Return to The Landslide Blog homepage Text © 2023. The authors. CC BY-NC-ND 3.0
Except where otherwise noted, images are subject to copyright. Any reuse without express permission from the copyright owner is prohibited.

The AI Revolution in Weather Forecasting Is Here

Fri, 10/03/2025 - 13:01

Weather forecasting has become essential in modern life, reducing weather-related losses and improving societal outcomes. Severe weather alerts provide vital early warnings that help to protect life and property. And forecasts of temperatures, precipitation, wind, humidity, and other conditions—both extreme and average—support public safety, health, and economic prosperity by giving everyone from farmers and fishers to energy and construction companies a heads-up on expected weather.

However, not all forecasts are created equal, in part because the atmosphere is chaotic, meaning small uncertainties in the initial conditions (data) input into weather models can lead to vastly different predicted outcomes. The accuracy of predictions is also affected by the complexity of models, the realism with which atmospheric conditions are represented, how far into the future weather is being forecast, and—at finely resolved scales—local geography.

The application of novel artificial intelligence (AI) is providing the latest revolutionary influence on weather forecasting.

The skill and reliability of weather forecasts have steadily improved over the past century. In recent decades, improvements have been facilitated by advances in numerical weather prediction (NWP), growth in computing power, and the availability of more and better datasets that capture Earth’s physical conditions more frequently. The application of novel artificial intelligence (AI) is providing the latest revolutionary influence on forecasting. This revolution is borne out by trends in the scientific literature and in the development of new AI-based tools with the potential to enhance predictions of conditions hours, days, or weeks in advance.

Making the Models

All weather forecasts involve inputting data in the form of observations—readings from weather balloons, buoys, satellites, and other instruments—into models that predict future states of the atmosphere. Model outputs are then transformed into useful products such as daily weather forecasts, storm warnings, and fire hazard assessments.

Current forecasting methods are based on NWP, a mathematical framework that models the future of the atmosphere by treating it as a fluid that interacts with water bodies, land, and the biosphere. Models using this approach include the European Centre for Medium-Range Weather Forecasts’ (ECMWF) Integrated Forecasting System (IFS) model (widely considered the gold standard in modern weather forecasting), the National Center for Atmospheric Research’s Weather Research and Forecasting model, and NOAA’s Global Forecasting System.

NWP models solve forms of the fluid dynamics equations known as the Navier-Stokes equations, which describe the complex motions of fluids, such as air in the atmosphere, and relate their velocities, temperatures, pressures, and densities. The result is a set of predictions of what, for example, temperatures will be at given places at some point in the future. These predictions, together with estimates of other simplified physical processes not captured by fluid dynamics equations, make up a weather forecast.
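The operational equations are far too involved for a short example, but the core idea of stepping a fluid quantity forward in time on a grid can be shown in a few lines. The toy one-dimensional advection scheme below uses arbitrary parameters and a simple upwind difference; it is a conceptual sketch only and bears no relation to any operational NWP code.

```python
# Toy 1D advection: step a temperature-like field forward in time on a grid.
# Conceptual illustration of "solving fluid equations numerically" only;
# all parameters are arbitrary.
import numpy as np

nx, dx, dt, u = 100, 1.0, 0.5, 1.0        # grid points, spacing, time step, wind speed
x = np.arange(nx) * dx
field = np.exp(-((x - 20.0) / 5.0) ** 2)  # initial "warm blob"

for _ in range(40):                        # 40 time steps
    # First-order upwind difference for d(field)/dx, periodic boundary.
    dfdx = (field - np.roll(field, 1)) / dx
    field = field - u * dt * dfdx          # advection: the blob drifts downwind

print(f"blob peak now near x = {x[np.argmax(field)]:.0f}")  # moved roughly u*dt*40 = 20 units
```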

This conceptually simple description obscures the massive scale of the work that goes into creating forecasts (Figure 1). Operating satellites, radar networks, and other necessary technology is expensive and requires substantial specialized expertise. Inputting observations from these disparate sources into models and getting them to work together harmoniously—no easy task—is a field of study unto itself.

Fig. 1. An enormous amount of work and expertise goes into producing weather forecasts. In brief, observations from multiple sources are combined and used to inform forecasting models, and the resulting model outputs are converted into forecasts that are communicated to the public. Artificial intelligence (AI) can be applied in many ways throughout this process.

Furthermore, forecast models are complicated and require some of the most powerful—not to mention expensive and energy-intensive—supercomputers in the world to function. Expert meteorologists are required to interpret model outputs, and communications teams are needed to translate those interpretations for the public.

The input-model-output structure for forecasting will be familiar to students of computer science. Indeed, the two fields have, in many ways, grown up together. The Navier-Stokes approach to weather forecasting first became truly useful when computing technology could produce results sufficiently quickly beginning in the 1950s and 1960s—after all, there is no point in having a forecast for 24 hours from now if it takes 36 hours to make!

The Rise of Machine Learning

As the power of computing hardware and software has increased, so too have the accuracy, resolution, and range of forecasting. The advent of early AI systems in the 1950s, which weather services adopted almost immediately, fed this advancement through the mid-20th century. These early AIs were hierarchical systems that mimicked human decisionmaking through decision trees comprising a series of “if this, then that” logic rules.

Interest in AI among forecasters picked up starting in the late 1990s and grew steadily into the 2000s and 2010s as computing resources became more powerful and useful data became more widely available.

The development of decision trees was followed by the emergence of machine learning (ML), a subdiscipline of AI involving training models to perform specific tasks without explicit programming. Instead of following coded instructions, these models learn from patterns in datasets to improve their performance over time. One method to achieve this improvement is to train a neural network, an algorithm said to be inspired by the human brain. Neural networks work by iteratively processing numerical representations of input data—image pixel brightnesses, temperatures, or wind speeds, for example—through multiple layers of mathematical operations to reorganize and refine the data until a meaningful output is obtained.
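As a deliberately tiny illustration of “multiple layers of mathematical operations,” the sketch below pushes three made-up weather inputs through a two-layer neural network with random weights. The inputs, layer sizes, and weights are all hypothetical; real forecasting networks are vastly larger and are trained on huge datasets.

```python
# Minimal two-layer neural network forward pass. Inputs, sizes, and weights are
# made up; real forecasting networks are far larger and are trained on data.
import numpy as np

rng = np.random.default_rng(0)
x = np.array([283.2, 0.4, 5.1])  # e.g., temperature (K), cloud fraction, wind speed (m/s)

W1, b1 = rng.normal(size=(8, 3)), np.zeros(8)  # layer 1: 3 inputs -> 8 hidden units
W2, b2 = rng.normal(size=(1, 8)), np.zeros(1)  # layer 2: 8 hidden units -> 1 output

hidden = np.maximum(0.0, W1 @ x + b1)  # linear map followed by a ReLU nonlinearity
output = W2 @ hidden + b2              # untrained "prediction" (meaningless until trained)
print(output)
```

Training adjusts the weight matrices so that outputs match observed outcomes, which is the "learning from patterns in datasets" described above.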

Even though experiments with ML have been ongoing within the wider scientific community since the 1970s, they initially failed to catch on as much more than a novelty in weather forecasting. AI systems at the time were limited by the computing power and relevant data available for use in ML. However, interest in AI among forecasters picked up starting in the late 1990s and grew steadily into the 2000s and 2010s as computing resources became more powerful and useful data became more widely available.

Model training methods also grew more efficient, and new ideas on how to adapt the original neural network concept created opportunities to tackle more complicated tasks. For example, 2010 saw the release of ImageNet, a huge database of labeled images that could be used to train AIs for 2D image recognition tasks.

Machine Learning Moves into Weather Forecasting

Weather forecasting is feeling the impact of this innovation. The growth of AI in research on nowcasting—forecasts of conditions a couple of hours in advance—and short-range weather forecasting up to a day or two out helps to reveal how.

We informally surveyed studies published between 2011 and 2022 using the Web of Science database and found that most of this research focused on applying AI to studies of classical weather forecast variables: precipitation, clouds, solar irradiation, wind speed and direction, and temperature (Figure 2).

Fig. 2. The number of newly published scientific studies concerning the use of AI in nowcasting or short-range weather forecasting grew substantially from 2011 to 2022. In this plot, the studies are divided according to their focus on five variables of interest. Credit: Authors; data included herein are derived from Clarivate’s Web of Science database (©Clarivate 2024. All rights reserved.)

The annual growth of new publications related to these five forecast variables indicates startling year-over-year growth averaging 375% over this period. This nearly fivefold annual increase is split about evenly across each variable: In 2010, the numbers of new publications addressing each of these variables were in the low single digits; by 2022, the numbers for each were in the hundreds.

Research in just a few nations drove most of this growth. Roughly half the papers published from 2011 to 2022 emerged from China (27.5%) and the United States (22.7%). India (~8%), Germany (~6.5%), and the United Kingdom and Australia (~5% each) also contributed significantly. Most, if not all, of this research output appears to be linked to interest in its relevance for or application to various economic sectors traditionally tied to weather forecasting, such as energy, transportation, and agriculture.

Fig. 3. The five most popular variables (left) are matched (by keyword association) to major economic sectors. Credit: Diagram created by authors using SankeyMATIC; data included herein are derived from Clarivate’s Web of Science database (©Clarivate 2024. All rights reserved.)

We determined links in the published studies by associating keywords from these sectors with the five forecast variables (Figure 3). This approach has limitations, including potential double counting of studies (e.g., because the same AI model may have multiple uses), not accounting for the relative sizes of the sectors (e.g., larger sectors like energy are naturally bigger motivators for research than smaller ones like fisheries), and not identifying proprietary research and models not released to the public. Nonetheless, the keyword associations reveal interesting trends.

For example, applications in the energy sector dominate AI forecasting research related to solar irradiance and wind. Comprehensive reviews have covered how AI technologies are being integrated into the energy industry at many stages in the supply chain. Choosing and planning sites (e.g., for solar or wind farms), management of solar and wind resources in day-to-day operations, predictive maintenance, energy demand matching in real time, and management of home and business consumers’ energy usage are all use cases in which AI is affecting the industry and driving research.

Applications in the agricultural sector are primarily driving research into temperature and precipitation forecasting.

Meanwhile, applications in the agricultural sector are primarily driving research into temperature and precipitation forecasting. This trend likely reflects the wider movement in the sector toward precision agriculture, a data-driven approach intended to boost crop yields and sustainability. Large companies, such as BASF, have promoted “digital farming,” which combines data sources, including forecasts and historical weather patterns, into ML models to predict future temperatures and precipitation. Farmers can then use these predictions to streamline operations and optimize resource usage through decisions about, for example, the best time to fertilize or water crops.

The construction industry, a significant driver of temperature forecasting research using AI, relies on temperature forecasts to plan operations. Weather can substantially influence project durations by affecting start dates and the time required for tasks such as pouring concrete. Accurate forecasts can also improve planning for worker breaks on hot days and for anticipating work stoppages during hard freezes.

In the transportation and aviation sectors, public safety concerns are likely driving AI-aided forecasting research. Intelligent transportation systems rely on weather forecast data to predict and mitigate road transportation problems through diversions or road and bridge closures. Similarly, accurate weather data can power aviation models to improve safety and comfort by, for example, predicting issues such as turbulence and icing.

Evolving Architectures

The methods and structures, or architectures, used in AI-based forecasting research have changed and grown more sophisticated as the field has advanced, particularly over the past decade (Figure 4). And this trajectory toward improvement appears to be accelerating.

Fig. 4. Significant growth and change in the AI/machine learning architectures used in the scientific literature on nowcasting or short-range weather forecasting occurred between 2011 and 2022. RNN = recurrent neural network; CNN = convolutional neural network; GAN = generative adversarial network; SVM = support vector machine; ELM = extreme learning machine. Credit: Authors; data included herein are derived from Clarivate’s Web of Science database (©Clarivate 2024. All rights reserved.)

In 2015, roughly 40% of AI models in the literature for nowcasting and short-range weather forecasting were support vector machines, but by 2022, this figure declined to just 8%. Over the same period, the use of more sophisticated convolutional neural networks ballooned from 11% to 43%. Newer architectures have also emerged for forecasting, with generative adversarial networks, U-Net, and transformer models gaining popularity.

Transformers, with their powerful attention mechanisms that detect long-range dependencies among different variables (e.g., between atmospheric conditions and the formation of storms), may be on a course to become the preferred architecture for weather forecasting. Transformers have been widely adopted in other domains and have become synonymous with AI in general because of their prominent use in generative AI tools like OpenAI’s ChatGPT.
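The attention mechanism itself is compact enough to sketch: each element of a sequence is compared with every other element, and the resulting scores weight how much information flows between them. The example below uses random vectors and NumPy purely to show the arithmetic of scaled dot-product attention; it is not drawn from any production forecasting model.

```python
# Scaled dot-product attention on a toy sequence of 5 "time steps", each
# represented by a random 4-dimensional vector. Purely illustrative.
import numpy as np

rng = np.random.default_rng(1)
seq_len, d = 5, 4
Q = rng.normal(size=(seq_len, d))  # queries
K = rng.normal(size=(seq_len, d))  # keys
V = rng.normal(size=(seq_len, d))  # values

scores = Q @ K.T / np.sqrt(d)                  # how strongly each step attends to every other step
weights = np.exp(scores)
weights /= weights.sum(axis=1, keepdims=True)  # softmax over the sequence dimension
attended = weights @ V                         # weighted mixture of values

print(weights.round(2))  # each row sums to 1; distant steps can still get large weights
```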

Some of today’s most advanced weather forecasting models make use of transformer models, rather than being based on numerical weather prediction.

Some of today’s most advanced weather forecasting models make use of transformer models, such as those from NVIDIA (FourCastNet), Huawei (Pangu-Weather), and Google (GraphCast), each of which is data driven, rather than being based on NWP. These models boast levels of accuracy and spatial resolution similar to ECMWF’s traditional IFS model across several important weather variables. However, their major innovation is in the computing resources required to generate a forecast: On the basis of (albeit imperfect) comparisons, NVIDIA estimates, for example, that FourCastNet may be up to 45,000 times faster than IFS, which equates to using 12,000 times less energy.

A View of the Future

Combining high-resolution data from multiple sources will be core to the weather forecasting revolution, meaning the observational approaches used to gather these data will play a central role.

Sophisticated AI architectures are already being used to combine observations from different sources to create new products that are difficult to create using traditional, physics-based methods. For example, advanced air quality forecasting tools rely on combining measurements from satellites and monitoring stations and ground-level traffic and topography data to produce realistic representations of pollutant concentrations. AIs are also being used for data assimilation, the process of mapping observations to regularly spaced, gridded representations of the atmosphere for use in weather forecast models (which themselves can be AI driven).
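Data assimilation is a rich subject in its own right, but the basic task it solves, turning scattered observations into a regularly spaced gridded field, can be illustrated with a simple inverse-distance-weighting sketch. Operational systems use far more sophisticated variational or ensemble methods and also blend in a model background state; the observation locations and values below are made up.

```python
# Toy "gridding" of scattered observations by inverse-distance weighting.
# Real data assimilation also blends in a model background state and error
# statistics; this sketch omits all of that.
import numpy as np

obs_xy = np.array([[0.1, 0.2], [0.8, 0.5], [0.4, 0.9]])  # observation locations (made up)
obs_val = np.array([15.0, 18.5, 16.2])                    # observed temperatures (made up)

grid = np.zeros((5, 5))
for i, gy in enumerate(np.linspace(0, 1, 5)):
    for j, gx in enumerate(np.linspace(0, 1, 5)):
        d = np.hypot(obs_xy[:, 0] - gx, obs_xy[:, 1] - gy) + 1e-6
        w = 1.0 / d**2
        grid[i, j] = np.sum(w * obs_val) / np.sum(w)      # weighted average of nearby observations

print(grid.round(1))  # regularly spaced field ready to feed a forecast model
```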

Another growing use case for AI is forecasting extreme weather. Extreme events can be challenging for AI models to predict because many models function by searching for patterns (i.e., averages) in data, meaning rarer events are inherently weighted less. Researchers have suggested that even state-of-the-art AI weather forecasts have significantly underperformed traditional NWP counterparts in predicting extreme weather events, especially rare events such as category 5 hurricanes. However, improvements are in the works. For example, compared with traditional methods, Microsoft’s Aurora model boasts improved accuracy for Pacific typhoon tracks and wind speeds during European storms.

Whether scientists are using fully data-driven AI or so-called hybrid systems, which combine AI and traditional atmospheric physics models, predictions of weather events and of likely outcomes of those events (e.g., fires, floods, evacuations) need to be combined reliably and transparently. One example of a hybrid system blending physics and AI elements is Google’s Flood Hub, which integrates traditional modeling with AI tools to deliver early extreme flood warnings freely in 80 countries. Such work is an important part of the United Nations’ Early Warnings for All initiative, which aims to ensure that all people have actionable access to warnings and information about natural hazards.

Observations will play a key role in facilitating the growing accuracy and efficiency of new forecasting products by providing the data needed to train AI models.

Observations will play a key role in facilitating the growing accuracy and efficiency of new forecasting products by providing the data needed to train AI models. Today, models are generally pre-trained before use with a structured dataset generated from data assimilation methods. These pre-trained systems could be tailored to new specialized tasks at high resolution, such as short-range forecasting for locations where conditions change rapidly, like in high mountain ranges.

Satellites, such as the recently launched Geostationary Operational Environmental Satellite 19 (GOES-19) and NOAA-21 missions, have been an increasingly critical source of data for training AI. These data will soon be supplemented with even higher-resolution observations from next-generation satellite instruments such as the European Organisation for the Exploitation of Meteorological Satellites’ (EUMETSAT) recently launched Meteosat Third Generation (MTG) and EUMETSAT Polar System – Second Generation (EPS-SG) programs. NOAA’s planned Geostationary Extended Observations (GeoXO) and Near Earth Orbit Network (NEON) programs will further boost both traditional and AI modeling.

Looking farther ahead, some experiments have attempted to fully replace traditional data assimilation systems, moving directly from observations to gridded forecast model inputs. A natural end point could be a fully automated, end-to-end weather forecast system, potentially with multiple models working together in sequence. Such a system would process observations into inputs for forecast models, then run those models and process forecast outputs into useful products.

The effects of the AI revolution are beginning to be felt across society, including in key sectors of the economy such as energy, agriculture, and transportation. For weather forecasting, AI technology has the potential to streamline observational data processing, use computational resources more efficiently, improve forecast accuracy and range, and even create entirely new products. Ultimately, current technologies and coming innovations may save money and help better protect lives by seamlessly delivering faster and more useful predictions of future conditions.

Author Information

Justin Shenolikar (justin.shenolikar@iup.uni-heidelberg.de), European Organisation for the Exploitation of Meteorological Satellites, Darmstadt, Germany; now at Universität Heidelberg, Germany; and Paolo Ruti and Chris Yoon Sang Chung, European Organisation for the Exploitation of Meteorological Satellites, Darmstadt, Germany

Citation: Shenolikar, J., P. Ruti, and C. Y. S. Chung (2025), The AI revolution in weather forecasting is here, Eos, 106, https://doi.org/10.1029/2025EO250363. Published on 3 October 2025. Text © 2025. The authors. CC BY-NC-ND 3.0
Except where otherwise noted, images are subject to copyright. Any reuse without express permission from the copyright owner is prohibited.

The evolution of the Matai’an landslide dam

Fri, 10/03/2025 - 07:10

Some excellent before and after imagery is now available showing the evolution of the Matai’an landslide dam.

The active GIS/spatial analysis community in Taiwan has produced some fascinating analysis of the Matai’an landslide. Much of this has been posted to Facebook (which is not my favourite platform, but sometimes you have to go where the information resides).

Tony Lee has produced an incredibly interesting comparison of the dam before and after the overtopping and breach event, based upon imagery captured before the event on 18 August and after the event on 25 September. Unfortunately, WordPress really doesn’t like Facebook embeds, so you’ll need to follow this link:

Tony Lee Facebook post

This is a still from the video:-

Before and after images of the Matai’an landslide dam. Video by Tony Lee, posted to Facebook.

The depth and scale of the incision are very clear – the flow rapidly cut into and eroded the debris. It has left very steep slopes on both sides in weak and poorly consolidated materials.

So, a very interesting question now pertains to the stability of these slopes. How will they perform in conditions of intense rainfall and/or earthquake shaking? Is there the potential for a substantial slope failure on either side, allowing a new (enlarged) lake to form?

This will need active monitoring (InSAR may well be ideal). The potential problems associated with the Matai’an landslide are most certainly not over yet.

Return to The Landslide Blog homepage Text © 2023. The authors. CC BY-NC-ND 3.0
Except where otherwise noted, images are subject to copyright. Any reuse without express permission from the copyright owner is prohibited.

Old Forests in the Tropics Are Getting Younger and Losing Carbon

Thu, 10/02/2025 - 13:10

The towering trees of old forests store massive amounts of carbon in their trunks, branches, and leaves. When these ancient giants are replaced by a younger cohort after logging, wildfire, or other disturbances, much of this carbon stock is lost.

“We wanted to actually quantify what it means if an old forest becomes young.”

“We’ve known for a long time that forest age is a key component of the carbon cycle,” said Simon Besnard, a remote sensing expert at the GFZ Helmholtz Centre for Geosciences in Potsdam, Germany. “We wanted to actually quantify what it means if an old forest becomes young.”

The resulting study, published in Nature Ecology and Evolution, measured the regional net aging of forests around the world across all age classes between 2010 and 2020, as well as the impact of these changes on aboveground carbon.

To do this, the team developed a new high-resolution global forest age dataset based on more than 40,000 forest inventory plots, biomass and height measurements, remote sensing observations, and climate data. They combined this information with biomass data from the European Space Agency and atmospheric carbon dioxide observations.

The results point to large regional differences. While forests in Europe, North America, and China have aged during this time, those in the Amazon, Southeast Asia, and the Congo Basin were younger in 2020 than 10 years prior.

A number of recent studies have shown that forests are getting younger, but the new analysis quantifies the impact of this shift on a global level, said Robin Chazdon, a tropical forest ecologist at the University of the Sunshine Coast in Queensland, Australia, who was not involved in the study. “That’s noteworthy and a very important concept to grasp because this has global implications, and it points out where in the world these trends are strongest.”

Carbon Impact

The study identifies the tropics, home to some of the world’s oldest forests, as a key region where younger forests are replacing older ones.

In this image from 2020, old-growth forests are most evident in tropical areas in South America, Africa, and Southeast Asia. Credit: Besnard et al., 2021, https://doi.org/10.5194/essd-13-4881-2021, CC BY 4.0

On average, forests that are at least 200 years old store 77.8 tons of carbon per hectare, compared to 23.8 tons per hectare in the case of forests younger than 20 years old.

The implications for carbon sequestration are more nuanced, however. Fast-growing young forests, for instance, can absorb carbon much more quickly than old ones, especially in the tropics, where the difference is 20-fold. But even this rate of sequestration is not enough to replace the old forests’ carbon stock.

Ultimately, said Besnard, “when it comes to a forest as a carbon sink, the stock is more important than the sink factor.”

“It’s usually more cost-, carbon-, and biodiversity-effective to keep the forest standing than it is to try to regrow it after the fact.”

In the study, only 1% of the total forest area transitioned from old to young, primarily in tropical regions. This tiny percentage, however, accounted for more than a third of the lost aboveground carbon documented in the research—approximately 140 million out of the total 380 million tons.

“It’s usually more cost-, carbon-, and biodiversity-effective to keep the forest standing than it is to try to regrow it after the fact. I think this paper shows that well,” said Susan Cook-Patton, a reforestation scientist at the Nature Conservancy in Arlington, Va., who was not involved in the study. “But we do need to draw additional carbon from the atmosphere, and putting trees back in the landscape represents one of the most cost-effective carbon removal solutions we have.”

The increased resolution and details provided by the study can help experts better understand how to manage forests effectively as climate solutions, she said. “But forest-based solutions are not a substitute for fossil fuel emissions reductions.”

Open Questions

How quickly carbon stored in trees is released into the atmosphere depends on what happens after the trees are removed from the forest. The carbon can be stored in wooden products for a long time or released gradually through decomposition. Burning, whether in a forest fire, through slash-and-burn farming, or as fuel, releases the carbon almost instantly.

“I think there is a research gap here: What is the fate of the biomass being removed?” asked Besnard, pointing out that these effects have not yet been quantified on a global scale.

Differentiating between natural, managed, and planted forests, which this study lumps together, would also offer more clarity, said Chazdon: “That all forests are being put in this basket makes it a little bit more challenging to understand the consequences not only for carbon but for biodiversity.”

She would also like to see future research on forest age transitions focus on issues beyond carbon: “Biodiversity issues are really paramount, and it’s not as easy to numerically display the consequences of that as it is for carbon.”

“We are only looking at one metric, which is carbon, but a forest is more than that. It’s biodiversity, it’s water, it’s community, it’s many things,” agreed Besnard.

—Kaja Šeruga, Science Writer

Citation: Šeruga, K. (2025), Old forests in the tropics are getting younger and losing carbon, Eos, 106, https://doi.org/10.1029/2025EO250369. Published on 2 October 2025. Text © 2025. The authors. CC BY-NC-ND 3.0
Except where otherwise noted, images are subject to copyright. Any reuse without express permission from the copyright owner is prohibited.

Machine Learning Simulates a Millennium of Climate

Thu, 10/02/2025 - 13:10
Source: AGU Advances

This is an authorized translation of an Eos article.

In recent years, scientists have found that machine learning-based weather models can make predictions faster than traditional models while using less energy. However, many of these models cannot accurately forecast weather more than 15 days ahead, and by around day 60 they begin to simulate unrealistic conditions.

The Deep Learning Earth System Model (DLESyM) is built on two neural networks that run in parallel: one simulates the ocean, and the other simulates the atmosphere. During a model run, predictions of ocean conditions are updated every 4 model days. Because atmospheric conditions evolve more quickly, predictions of the atmosphere are updated every 12 model hours.
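A rough sketch of that update schedule is shown below, assuming a run split into 12-hour atmospheric steps with the ocean refreshed every 4 model days. The function names are placeholders for illustration, not the DLESyM interface.

```python
# Rough sketch of the coupling cadence described above: atmosphere updated every
# 12 model hours, ocean every 4 model days. Function names are placeholders.
ATMOS_STEP_HOURS = 12
OCEAN_STEP_HOURS = 4 * 24

def step_atmosphere(state):  # placeholder for the atmospheric neural network
    return state

def step_ocean(state):       # placeholder for the ocean neural network
    return state

state = {"atmosphere": None, "ocean": None}
hours_per_year = 24 * 365
for hour in range(0, 10 * hours_per_year, ATMOS_STEP_HOURS):  # 10 model years shown
    state["atmosphere"] = step_atmosphere(state)
    if hour % OCEAN_STEP_HOURS == 0:                          # every eighth atmospheric step
        state["ocean"] = step_ocean(state)
```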

The model’s creators, Cresswell-Clay et al., found that DLESyM agrees closely with past observed climate and makes accurate short-term predictions. Taking Earth’s current climate as a baseline, it can also accurately simulate climate and interannual variability over a 1,000-year period in less than 12 hours of computing time. Its performance is generally comparable to, or better than, that of models based on the Coupled Model Intercomparison Project Phase 6 (CMIP6), which are widely used in computational climate research.

The DLESyM model outperforms CMIP6 models in simulating tropical cyclones and the Indian summer monsoon. It captures the frequency and spatial distribution of Northern Hemisphere atmospheric “blocking” events, which can lead to extreme weather, at least as accurately as CMIP6 models do. The storms it produces are also highly realistic. For example, a nor’easter generated at the end of the 1,000-year simulation (in the year 3016) had a structure very similar to that of a nor’easter observed in 2018.

However, neither the new model nor the CMIP6 models represent the climatology of Atlantic hurricanes well. In addition, for medium-range forecasts (those about 15 days out), DLESyM is less accurate than other machine learning models. Most notably, DLESyM simulates only the present-day climate, meaning it does not account for human-caused climate change.

The authors argue that DLESyM’s main advantage is that it requires far less computation than running CMIP6 models, making it easier to use than traditional models. (AGU Advances, https://doi.org/10.1029/2025AV001706, 2025)

—Madeline Reinsel, Science Writer

This translation was made by Wiley.

Read this article on WeChat.

Text © 2025. AGU. CC BY-NC-ND 3.0
Except where otherwise noted, images are subject to copyright. Any reuse without express permission from the copyright owner is prohibited.
