Eos
Science News by AGU

Oozing Gas Could Be Making Stripes in Mercury’s Craters

Thu, 02/12/2026 - 14:30

Bright streaks of material trickle down the slopes of many of Mercury’s craters, but scientists have struggled to understand how these geologically young features, called slope lineae, appeared on a seemingly dead world. Now, researchers have used machine learning to analyze more than 400 slope lineae in the hope of understanding the streaks’ origin.

The analysis of images from NASA’s MESSENGER (Mercury Surface, Space Environment, Geochemistry, and Ranging) mission, which ended a decade ago, showed that lineae seem to stream from bright hollows on the sunward side of crater slopes and mainly appear on craters that punched through a thin volcanic crust to a volatile-rich layer beneath. The lineae, the team theorized, could have formed when that exposed layer heated up and released volatiles like sulfur that then dripped downslope.

“We have these modern data science approaches now—machine learning, deep learning—that help us look into all those old data sets and find completely new science discoveries in them,” said Valentin Bickel, a planetary geomorphologist at Universität Bern in Switzerland and lead researcher on the study.

Streaks and Stripes

MESSENGER orbited Mercury from 2011 to 2015, and observations from those 4 years remain some of the best data we have on our solar system’s smallest planet.

The images revealed that although there is not a lot of geologic activity happening today, the planet remains chock-full of oddities.

One of those strange phenomena is the existence of slope lineae streaking down from the rims of many of Mercury’s craters. The higher-resolution MESSENGER images show that Mercury’s lineae are made of bright material and are geologically young, with crisply defined edges and no small craters superimposed on top. But planetary scientists had not conducted any systematic analysis of lineae before now, focusing instead on understanding the planet’s similarly bright, but more numerous, hollows.

Bickel and his team sought to fill that knowledge gap. Their machine learning tool looked at more than 112,000 MESSENGER images with spatial resolutions finer than 150 meters (492 feet), identified 402 individual lineae, and cataloged their properties in a uniform way.

“The first things we as geologists like to do is put things on a map,” Bickel said.

Most of MESSENGER’s high-resolution images cover the northern hemisphere, Bickel explained, so most (93%) of the lineae the team cataloged were in the north. Ninety percent of lineae are located within craters. They are hundreds or thousands of meters long, are less than 20 meters (65 feet) tall, and are located on steeper-than-average crater slopes. Most lineae extend from young, bright hollows or hollow-like features.

But the most telling commonality among lineae is that they prefer the side of craters facing the equator, which is the side that receives the most sunlight.
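As a hypothetical sketch of the kind of uniform cataloging described above: once each detection is stored as a structured record, the fractions the study reports (northern latitudes, within-crater, equator-facing) are simple aggregations. The field names and sample values below are invented for illustration, not drawn from the MESSENGER archive.

```python
from dataclasses import dataclass

@dataclass
class Linea:
    latitude_deg: float   # positive values = northern hemisphere
    in_crater: bool       # detection lies within a crater
    equator_facing: bool  # on the crater slope facing the equator

def summarize(catalog):
    """Fraction of the catalog matching each property."""
    n = len(catalog)
    return {
        "northern": sum(l.latitude_deg > 0 for l in catalog) / n,
        "in_crater": sum(l.in_crater for l in catalog) / n,
        "equator_facing": sum(l.equator_facing for l in catalog) / n,
    }

# Tiny invented catalog standing in for the study's 402 detections.
catalog = [
    Linea(62.1, True, True),
    Linea(48.7, True, True),
    Linea(-35.2, True, False),
    Linea(55.0, False, True),
]
print(summarize(catalog))  # each fraction is 0.75 for this toy sample
```

In the actual study, each record would also carry the properties the team measured, such as length, relief, and slope angle.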

The MESSENGER mission imaged slope lineae in Mercury’s craters on 1 August 2012 (left) and 19 October 2013 (right). Credit: NASA/JHUAPL/Carnegie Institution of Washington

That trend led the researchers to their theory of how lineae form. An impact exposes Mercury’s shallow but volatile-rich bedrock layer. Insolation, or heat from the Sun, draws out volatile gases in those rocks, and those volatiles then slowly drip down the crater wall, leaving bright deposits behind.

“The fact that lineae are on slopes that are facing the Sun implies that insolation might play a role in activating the process,” Bickel said. “And whenever insolation is so prominent, that implies that volatile material is involved. And in Mercury’s case [it] has to come from the subsurface.”

The team published these results in Communications Earth & Environment.

Making a More Complete Map

Susan Conway, a planetary geomorphologist at the French National Centre for Scientific Research (CNRS) in Nantes, France, said planetary scientists have long accepted that Mercury’s hollows are produced by the loss of subsurface volatiles.

“Given that the slope lineae often originate at what appear to be hollows on the crater wall and have the same colour as them, the inference that slope lineae are also linked to volatile loss makes sense,” Conway wrote in an email.

Across the solar system, “slope lineae are pretty common,” added Conway, who was not involved with this research. “Several different kinds have been documented on Mars—slope streaks believed to be dust avalanches, recurring slope lineae whose formation is still debated and could be related to volatiles.” Granular flows on the Moon as well as lineae on Ceres and some icy moons in the outer solar system also resemble those on Mercury.

But a good 10% of Mercury’s known lineae don’t appear within craters, and conversely, there are plenty of craters with hollows that don’t have lineae. Other mechanisms are likely at work there, Bickel said.

Thankfully, planetary scientists won’t have to wait long to test this theory. The BepiColombo spacecraft will arrive at Mercury in November 2026 and will begin science operations in early 2027. The joint mission from the European Space Agency and the Japan Aerospace Exploration Agency will image more of the planet’s surface than MESSENGER did and at a consistently higher spatial resolution.

Bickel and other Mercury scientists expect that BepiColombo will image more slope lineae across the planet, including smaller lineae, dimmer lineae, and lineae at southern latitudes. It will likely reimage some lineae-dense locations and reveal whether the streaks have changed in the 16 years since MESSENGER’s last images. And it may even capture repeat snapshots of a few locations, allowing scientists to see whether lineae change on short timescales.

“BepiColombo will image the whole surface at a resolution that would enable us to see most slope lineae,” Conway said. “We’ll get a complete picture of their spatial distribution, which will enable us to better test the volatile-driven hypothesis.”

—Kimberly M. S. Cartier (@astrokimcartier.bsky.social), Staff Writer

Citation: Cartier, K. M. S. (2026), Oozing gas could be making stripes in Mercury’s craters, Eos, 107, https://doi.org/10.1029/2026EO260052. Published on 12 February 2026. Text © 2026. AGU. CC BY-NC-ND 3.0
Except where otherwise noted, images are subject to copyright. Any reuse without express permission from the copyright owner is prohibited.

Sediments Offer an Extended History of Fast Ice

Thu, 02/12/2026 - 14:29

Fast ice, also called landfast sea ice, is a relatively short-lived type of ice that forms from frozen seawater and attaches like a “seatbelt” to larger ice sheets. It can create 50- to 200-kilometer-wide bands that last anywhere from a few weeks to a few decades and act as a site for valuable geochemical processes, a breeding ground for emperor penguins, and a protective buffer between harsh Antarctic winds and waters and inland bodies of ice.

In new research published in Nature Communications, scientists found that buried sediments can track the long-term growth of Antarctic fast ice—and that the ice’s freezing and thawing may be linked to cycles of solar activity. Given that this ice plays a significant role in protecting Antarctica’s larger ice sheets, the research could have major implications for understanding the ongoing impacts of climate change in Antarctica.

“Fast ice, especially in the summertime, is suffering the same fate as overall pack ice,” said Alex Fraser, a glaciologist at the University of Tasmania, who was not involved in the study. We’ve seen a “dramatic decrease” over the past decade, he said. “We’re down to around half of the ‘normal’ [amount].”

Over the past several decades, the only way for scientists to track fast ice has been through satellite data, which can reveal the ice’s history over only the past 40 or so years. This narrow range has prevented researchers from understanding the ice’s behavior prior to human-induced climate change.

“To understand how humans are changing the planet, we first need to know how the planet changes on its own,” said Mike Weber, a geoscientist at Universität Bonn in Germany and a coauthor of the study. The new work aimed to establish a “blueprint” for how fast ice behaves in the long term, allowing researchers to better understand how the ice contracts or expands when exposed to greenhouse gas emissions.

Sediment Secrets

To better understand fast ice history, the team turned to sediment cores from Victoria Land in eastern Antarctica. By scrutinizing laminated layers within the cores, the researchers were able to pinpoint key markers that correspond to ebbs and flows in fast ice going back 3,700 years.

The team found that lighter sediment layers formed during summer months marked by prolonged ice loss, whereas darker layers formed during regular seasonal thawing. They also found evidence that different species of small organisms called diatoms grew during summer months versus thawing periods, further enabling the science team to distinguish the cycles. By combining these and other data unearthed from the sediments, the researchers identified recurring periods of open-water and low-ice conditions pinned to solar cycles—called the Gleissberg and Suess-de Vries solar cycles—that occur approximately every 90 and 240 years, respectively.
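As a rough illustration (not the authors’ method) of how a centennial cycle such as the roughly 90-year Gleissberg cycle can be recovered from an annually resolved record, a simple periodogram on a synthetic 3,700-year series looks like this; the series below is invented:

```python
import numpy as np

rng = np.random.default_rng(1)
years = np.arange(3700)  # annually resolved record, matching the core's span

# Synthetic proxy: a ~90-year (Gleissberg-like) cycle buried in noise.
series = np.sin(2 * np.pi * years / 90.0) + 0.5 * rng.normal(size=years.size)

# Simple FFT periodogram of the mean-removed series.
spectrum = np.abs(np.fft.rfft(series - series.mean())) ** 2
freqs = np.fft.rfftfreq(years.size, d=1.0)  # cycles per year

# Strongest nonzero-frequency peak -> dominant period in years.
dominant_period = 1.0 / freqs[1:][np.argmax(spectrum[1:])]
print(f"dominant period: {dominant_period:.0f} years")
```

Real proxy records have uneven age control and red noise, so published analyses use more careful spectral methods, but the underlying idea is the same.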

The link to solar cycling was surprising at first, but the researchers suggested the explanation is straightforward: Solar activity can influence winds over the Southern Ocean, transporting warm air over the Victoria Land coast and leading to ice melt.

“Laminated sediments are always intriguing because you know they’re hiding a message,” said Tesi Tommaso, a biogeochemist at the National Research Council of Italy’s Institute of Polar Sciences and lead author of the study. “When we realized that over long timescales, this laminated pattern was linked to solar activity, it actually made perfect sense—it was super exciting.”

In future work, the team plans to dig up deeper sediment cores to push fast ice records back even further. The data would be “incredibly informative,” said Tommaso.

“We have finally developed a high-resolution ‘time machine’ for a critical but poorly understood part of Antarctica,” Weber said. “It’s a testament to how interconnected our atmosphere, ocean, and ice really are.”

—Taylor Mitchell Brown (@tmitchellbrown.bsky.social), Science Writer

Citation: Brown, T. M. (2026), Sediments offer an extended history of fast ice, Eos, 107, https://doi.org/10.1029/2026EO260054. Published on 12 February 2026. Text © 2026. The authors. CC BY-NC-ND 3.0
Except where otherwise noted, images are subject to copyright. Any reuse without express permission from the copyright owner is prohibited.

Linking Space Weather and Atmospheric Changes With Cosmic Rays

Thu, 02/12/2026 - 14:00
Editors’ Highlights are summaries of recent papers by AGU’s journal editors. Source: Earth and Space Science

Atmospheric conditions over Antarctica affect global climate cycles and are thus critical for climate assessment. However, studying atmospheric changes in Antarctica is quite challenging, as they are driven by a variety of processes at the local scale that are not easily captured by global models. Monitoring seasonal atmospheric pressure changes is one way to keep track of the evolving Antarctic atmosphere.

Because changes in stratospheric conditions influence the flux of cosmic rays reaching Earth’s surface, Santos et al. [2025] use measurements from a water-Cherenkov cosmic-ray detector to monitor variations in the 100-hPa geopotential height (about 15 kilometers) over the Antarctic Peninsula. After conducting a thorough statistical analysis of the data, the authors develop a simple model linking surface pressure and cosmic ray count data, validating it against observed ERA5 100-hPa geopotential height reanalysis data. The model is especially accurate in (Southern Hemisphere) spring, but it also performs well at other times of the year.
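The authors’ actual model is more sophisticated, but the proxy idea can be sketched as an ordinary least-squares fit on synthetic data: map cosmic-ray count rate and surface pressure to 100-hPa geopotential height, then check how well the fit tracks the “observed” values. All coefficients and numbers below are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500

# Synthetic predictors: detector count rate (arbitrary units) and surface
# pressure (hPa). Real inputs would be detector and barometer time series.
counts = rng.normal(1000.0, 20.0, n)
pressure = rng.normal(985.0, 8.0, n)

# Synthetic "truth": 100-hPa geopotential height (m) responding linearly
# to both predictors, plus noise. Coefficients are invented.
height = (15000.0
          - 3.0 * (counts - 1000.0)
          + 12.0 * (pressure - 985.0)
          + rng.normal(0.0, 10.0, n))

# Least-squares fit: height ~ b0 + b1*counts + b2*pressure
X = np.column_stack([np.ones(n), counts, pressure])
beta, *_ = np.linalg.lstsq(X, height, rcond=None)

# How well does the proxy track the "observed" heights?
r = np.corrcoef(X @ beta, height)[0, 1]
print("coefficients:", beta)
print(f"correlation: {r:.3f}")
```

In the real application, the validation target would be the ERA5 100-hPa geopotential height reanalysis rather than a synthetic series.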

With their model, the authors demonstrate that water-Cherenkov cosmic-ray detectors can be reliably used as proxies for atmospheric pressure changes, thus adding a new, simple, and effective tool to monitor and study lower stratospheric dynamics over Antarctica.

Citation: Santos, N. A., Gómez, N., Dasso, S., Gulisano, A. M., Rubinstein, L., Pereira, M., et al. (2025). Cosmic ray counting variability from water-Cherenkov detectors as a proxy of stratospheric conditions in Antarctica. Earth and Space Science, 12, e2025EA004298. https://doi.org/10.1029/2025EA004298

—Graziella Caprarelli, Editor-in-Chief, Earth and Space Science

Text © 2026. The authors. CC BY-NC-ND 3.0
Except where otherwise noted, images are subject to copyright. Any reuse without express permission from the copyright owner is prohibited.

Earth’s Climate May Go from Greenhouse to Hothouse

Wed, 02/11/2026 - 16:00

Earth systems may be on the brink of long-term, irreversible destabilization, sending our planet on a “hothouse Earth” trajectory, a scenario in which long-term temperatures remain about 5°C (9°F) higher than preindustrial temperatures, according to a new paper.

In the paper, published in One Earth, scientists argue that uncertainties in climate projections mean Earth system components could be at a higher risk than we think of reaching crucial tipping points such as the melting of the Greenland Ice Sheet and the thawing of the world’s permafrost—points of destabilization that, once breached, are irreversible.

“As we move to higher temperatures, we go into higher risk zones,” said Nico Wunderling, a coauthor of the new paper and a climate scientist at the Potsdam Institute for Climate Impact Research and Goethe University Frankfurt, both in Germany. Scientists know higher temperatures will activate interactions between tipping elements, he said.

The new paper “strongly builds” on a 2018 perspective paper linking the possibility of hothouse Earth to tipping points, said Swinda Falkena, a climate scientist at Utrecht University in the Netherlands who was not involved in either publication.

Uncertain Earth Systems

Scientists use climate models—simulations of Earth systems—to project how rising emissions may impact global temperatures, weather patterns, ice sheets, ocean circulation, and more.

But those models are never perfect representations of our planet. Climate models contain uncertainties regarding the sensitivity of Earth systems to increased levels of carbon dioxide and the role of climate feedbacks, including land and ocean carbon sinks. Simulations have particular trouble modeling potential tipping points, such as weakening ocean circulation and the dieback of the Amazon rainforest, and the interactions between them, Wunderling said.

These uncertainties mean that it’s virtually impossible to reliably estimate the timing of some tipping points and that some Earth system components could be closer to tipping points than scientists thought.

In recent years, scientists have noticed that the rate of climate change has outpaced some projections. In 2024, for instance, global temperatures briefly reached 1.5°C (2.7°F) above preindustrial levels, surpassing the Paris Agreement target and indicating that Earth is virtually certain to consistently break this limit in the long term. In another example of real climate change outpacing models, exceptionally high temperatures in 2023, 2024, and 2025 led experts at Berkeley Earth, a nonprofit climate research organization, to suggest scientists may need to rethink their analyses of Earth’s warming rate.

“Warming now seems to have accelerated, which is not something we expected,” Falkena said. “That gets us to think, ‘Okay, is there something we’re missing?’”

The paper identifies 16 Earth system components (such as ice sheets, permafrost, and rainforests) that could reach tipping points, 10 of which could accelerate global heating if triggered. These 10 tipping points include the collapse of major ice sheets, the collapse of Arctic sea ice, the loss of mountain glaciers, the abrupt thaw of boreal permafrost, and the dieback of the Amazon rainforest.

The authors point out that these tipping elements are linked and even interact with each other to create feedback loops. For example, melting ice sheets would reduce Earth’s ability to reflect sunlight, amplifying warming. Melting ice sheets could also weaken the Atlantic Meridional Overturning Circulation, or AMOC (an ocean current key to regulating Earth’s temperature), which could cause the conversion of Amazon rainforest (a critical carbon sink) into dry savanna.

A Hothouse Trajectory

If enough of these tipping points are reached, Earth’s climate could be steered toward a hothouse Earth scenario, the authors write. And although there is “no precise answer” to the question of whether humanity is at risk of triggering hothouse Earth, Wunderling said the 1.5°C (2.7°F) limit set by the Paris Agreement was made with tipping point thresholds in mind.

If Earth’s temperature exceeds preindustrial levels by 2°C (3.6°F), then “we certainly run into a high-risk zone for tipping elements,” Wunderling said. The higher Earth’s temperature rises, “the more likely it is to trigger self-amplifying feedbacks.”

One 2024 modeling study showed that Earth had a high risk of breaching at least one of four climate tipping elements—the Greenland Ice Sheet collapse, the West Antarctic Ice Sheet collapse, the AMOC collapse, and a dieback of the Amazon rainforest—if temperatures do not return to below the 1.5°C (2.7°F) mark. (Scientists say the prospect of lowering Earth’s temperatures with new policies or technology after exceeding this mark is slim.)

Falkena said the likelihood of a hothouse Earth trajectory is low, but the fact that such a severe scenario is plausible at all means it’s something worth the world’s concern. As models improve, scientists will be able to better quantify the risk of a hothouse Earth trajectory.

“While averting the hothouse trajectory won’t be easy, it’s much more achievable than trying to backtrack once we’re on it,” said Christopher Wolf, a research scientist at Terrestrial Ecosystems Research Associates, a former postdoctoral scholar at Oregon State University, and a coauthor of the new study, in a press release.

The world hasn’t sufficiently cut down on emissions, though: Earth is on track to warm by about 2.8°C (5.04°F) by 2100. In 2025, global carbon emissions rose by 1.1% compared to 2024 levels, and in the United States, total emissions rose by 2.4%. The level of carbon dioxide in the atmosphere is likely higher than it has been in at least 2 million years, and average global temperatures are likely warmer than at any point in the past 125,000 years, according to the authors.

The uncertainty about when tipping points may be breached, combined with ever-higher global temperatures, should be taken as a reason for urgent action to combat or mitigate climate change, the authors write.

“In order to avoid high-end climate risks, it is necessary to go down to net zero, to mitigate as quickly as we can,” Wunderling said.

—Grace van Deelen (@gvd.bsky.social), Staff Writer

Citation: van Deelen, G. (2026), Earth’s climate may go from greenhouse to hothouse, Eos, 107, https://doi.org/10.1029/2026EO260057. Published on 11 February 2026. Text © 2026. AGU. CC BY-NC-ND 3.0
Except where otherwise noted, images are subject to copyright. Any reuse without express permission from the copyright owner is prohibited.

The Endangerment Finding is Lost

Wed, 02/11/2026 - 15:18
Research & Developments is a blog for brief updates that provide context for the flurry of news regarding law and policy changes that impact science and scientists today.

Update, 12 February: At a press conference today, President Donald Trump announced that the EPA has revoked the 2009 Endangerment Finding.

Trump said regulations related to the finding were “crippling,” and designed to “facilitate the green new Scam.”

“Effective immediately, we are repealing the ridiculous Endangerment Finding,” he said.

AGU immediately denounced the repeal.

Revoking the finding repeals the EPA’s authority to regulate greenhouse gas emissions and removes all greenhouse gas emissions regulations for vehicles, according to the EPA.

11 February: The Endangerment Finding is a scientific determination made by the EPA that greenhouse gases threaten public health. It is the legal underpinning for major U.S. climate rules under the Clean Air Act. Revoking the finding repeals the EPA’s authority to regulate greenhouse gas emissions and removes all greenhouse gas emissions regulations for vehicles, according to the EPA. 

“I think it’s a historic low, frankly, for EPA to be taking this stance now,” Benjamin DeAngelo, a former EPA official involved in writing the 2009 finding, told POLITICO.

White House press secretary Karoline Leavitt called the planned finalization the “largest deregulatory action in American history.” She said the repeal of the finding would increase energy affordability and, especially, lower vehicle costs, allegedly saving Americans “$1.3 trillion in crushing regulations.” Businesses and groups prioritizing free markets support the administration’s claim, with the editorial board of the Washington Post writing that rescinding the Endangerment Finding will “end the federal government’s power over cars.”

President Donald Trump and EPA Administrator Lee Zeldin will make the announcement to finalize the repeal on 12 February.

The EPA based its July proposal to revoke the finding on an Energy Department report written by five climate contrarians that downplayed accepted climate science. The National Academies of Sciences, Engineering, and Medicine, an independent organization meant to advise the federal government on scientific matters, conducted its own review of the report and found that the 2009 Endangerment Finding was “beyond scientific dispute.”

The science supporting the Endangerment Finding “has only gotten stronger” since 2009, DeAngelo told POLITICO. 

 

In public hearings in August, hundreds of people, including children, scientists, doctors, parents, advocates, and members of Congress, spoke out against the proposal to revoke the Endangerment Finding. Many cited immediate health concerns, worry about the health and safety of future generations, and a fear that the proposal would accelerate environmental degradation.

The move by the EPA will likely be challenged in the courts—which may be one reason the Trump administration has pushed its finalization through so rapidly, according to The New York Times. Legal scholars say the current, conservative-majority Supreme Court is more likely to uphold decisions supporting deregulation while Trump is still in office. 

The administration wants “to not just do what other Republican administrations have done, which is weaken regulations. They want to take the federal government out of the business of regulation, period,” Jody Freeman, director of Harvard Law School’s Environmental and Energy Law Program, told The New York Times.

—Grace van Deelen (@gvd.bsky.social), Staff Writer

These updates are made possible through information from the scientific community. Do you have a story about how changes in law or policy are affecting scientists or research? Send us a tip at eos@agu.org. Text © 2026. AGU. CC BY-NC-ND 3.0
Except where otherwise noted, images are subject to copyright. Any reuse without express permission from the copyright owner is prohibited.

Making a Map to Make a Difference

Wed, 02/11/2026 - 14:26
Source: Community Science

Geographic information system (GIS) maps help researchers, policymakers, and community members see how environmental risks are spread throughout a given region. These types of interactive, layered maps can be used for storytelling, education, and environmental activism. When community members are involved in their use and creation, GIS maps can also be a tool for equity.

Lively et al. outlined a project focusing on mapping the features and flooding risks at and around the Tar Creek Superfund site in Ottawa County, Okla. Ottawa County is home to 10 federally recognized Tribal Nations. Residents have experienced decades of health and environmental harm from the region’s legacy of zinc and lead mining, most of which occurred within the Quapaw Reservation. Although mining ceased in 1970, giant piles of mining waste, mine water discharges, and unstable ground have poisoned residents and made entire towns unlivable. For almost a century, floods have spread these contaminants across downstream communities.

Technical experts and community members with local knowledge worked together to build a GIS map that can be used by community members and leaders. It depicts how floodwaters run through former mining sites, which then ferry toxic waste throughout the region’s creeks and soils.

The map is viewable in various layers that show the locations of different kinds of mining waste, tribal land boundaries, and flood zones designated by the Federal Emergency Management Agency (FEMA). Users can also view layers showing soil types and the locations of aquifers, fault lines, and wells.

Between 2021 and 2023, members of the Local Environmental Action Demanded Agency (LEAD), a community-led organization, connected with GIS professionals through AGU’s Thriving Earth Exchange. This program partners local organizations with volunteer scientists and experts to address environmental or geoscience-related issues in their communities. Many members of the project team contributing to the Tar Creek project were local to the Miami, Okla., region.

Though much of the actual map building was completed by the team’s GIS experts, decisions on what to include in each layer of the map were made by LEAD representatives and nonscientist community members. This coproduction defined equity not only by who built or contributed to the map but also by how it is used by the community as a key storytelling tool—helping to educate officials and residents about the ongoing environmental and health risks when flooding occurs in the region. For the team, it was important not just to make the map but also to use it: Production without activism, the researchers said, would make for an unfinished project. (Community Science, https://doi.org/10.1029/2024CSJ000077, 2026)

—Rebecca Owen (@beccapox.bsky.social), Science Writer

Citation: Owen, R. (2026), Making a map to make a difference, Eos, 107, https://doi.org/10.1029/2026EO260035. Published on 11 February 2026. Text © 2026. AGU. CC BY-NC-ND 3.0
Except where otherwise noted, images are subject to copyright. Any reuse without express permission from the copyright owner is prohibited.

Monitoring Ocean Color From Deep Space: A TEMPO Study

Wed, 02/11/2026 - 14:00
Editors’ Highlights are summaries of recent papers by AGU’s journal editors. Source: Earth and Space Science

The color of the oceans is an important diagnostic parameter: it reflects the health of oceans and can be used to monitor CO2 variability and track ecosystem changes due to environmental stressors. Remote observations of the ocean color (OC) are routinely performed, but rapid changes in this parameter are difficult to capture. Geostationary platforms are uniquely suited for this purpose because they monitor the same area and can therefore detect changes in real time. However, measurements of OC from geostationary satellites are not routinely performed.

The Tropospheric Emissions: Monitoring of Pollution (TEMPO) geostationary instrument monitors air quality and pollution over North America. Using a new approach, Fasnacht et al. [2025] apply a combination of statistical and machine learning techniques to TEMPO hyperspectral hourly measurements and obtain OC values across U.S. coastal regions and the Great Lakes.

Thus, the authors demonstrate the feasibility of capturing hourly variability of environmental parameters from deep space. This reinforces the scientific value of future dedicated geostationary ocean color missions, such as the Geosynchronous Littoral Imaging and Monitoring Radiometer (GLIMR) and the Geostationary Extended Observations (GeoXO) Ocean Color Instrument (OCX).

Citation: Fasnacht, Z., Joiner, J., Bandel, M., Ibrahim, A., Heidinger, A., Himes, M. D., et al. (2025). Exploiting machine learning to develop ocean color retrievals from the tropospheric emissions: Monitoring of pollution instrument. Earth and Space Science, 12, e2025EA004341. https://doi.org/10.1029/2025EA004341

—Graziella Caprarelli, Editor-in-Chief, Earth and Space Science

Text © 2026. The authors. CC BY-NC-ND 3.0
Except where otherwise noted, images are subject to copyright. Any reuse without express permission from the copyright owner is prohibited.

A Double-Edged Sword: The Global Oxychlorine Cycle on Mars

Tue, 02/10/2026 - 14:20
Editors’ Vox is a blog from AGU’s Publications Department.

The surface of Mars preserves a comprehensive geochemical record of interactions among the lithosphere, atmosphere, and hydrosphere over more than four billion years. By investigating the chemical composition and variability of surface materials, we can reconstruct the planet’s evolutionary history and investigate how different geological processes shaped the surface environment of Mars over geologic time. Due to their unique properties and global distribution, reactive salts of chlorine, called oxychlorine species, constitute an important component of the Martian surface.

A new article in Reviews of Geophysics investigates the state of the knowledge and discusses potential areas of future exploration for oxychlorine species on Mars. Here, we asked the author to give an overview of oxychlorine species on Mars, how scientists study them, and what questions remain.

Why is it important to understand the composition of the surface environment of Mars?

Certain surface materials—such as salts and hydrated minerals—can serve as diagnostic indicators of early and contemporary aqueous activity on the Martian surface. Accurately understanding the formation, evolution, and preservation of these minerals that formed in aqueous systems can provide crucial constraints on the chemistry and availability of water that are needed to evaluate habitability conditions on Mars. Furthermore, characterizing the modern surface composition is the essential first step in deconvoluting geochemical cycles as well as assessing regolith toxicity, important for future robotic, sample return, and human missions to Mars.

In simple terms, what are oxychlorine species and where have they been found on Mars?

Oxychlorine species are chemical compounds composed of chlorine and oxygen, ranging from stable salts like perchlorate and chlorate to reactive gases and transient intermediates. This diversity arises from the multiple oxidation states of chlorine, which vary from -1 in chloride (Cl-) to +7 in perchlorate (ClO4-). While perchlorate and chlorate (ClO3-) have been identified on Mars, highly reactive intermediates are also likely to exist, at least transiently, during oxychlorine formation and destruction processes.

These compounds are widely distributed across the Martian surface. The Phoenix lander first detected them in the northern plains, while the Curiosity and Perseverance rovers have confirmed their presence in soil, sediment, and rock samples within the Gale and Jezero craters, respectively. Furthermore, oxychlorine salts have been identified as inclusions within pristine Martian meteorites. These widespread detections suggest that oxychlorines are a global component of the Martian regolith, influencing the planet’s geochemical and environmental evolution.

The locations of oxychlorine detections on the surface of Mars. Credit: Mitra [2025], Figure 2

How do scientists detect and sample oxychlorine species?

Scientists have successfully employed various analytical techniques to identify oxychlorine species on the surface of Mars. The Phoenix lander used ion selective electrodes in the Wet Chemistry Laboratory (WCL) to detect perchlorate anions in the Martian regolith. Additional measurements from the Thermal and Evolved Gas Analyzer (TEGA) and the Surface Stereo Imager (SSI) also confirmed the presence of perchlorate anions. At Gale Crater, the Curiosity rover’s Sample Analysis at Mars (SAM) instrument identified these species by heating samples and measuring the evolution of oxygen and chlorine-bearing gases, such as HCl.

More recently, the Perseverance rover used its Raman and X-ray fluorescence spectroscopy instruments (SHERLOC, SuperCam, and PIXL) to detect oxychlorine species within altered rock assemblages at Jezero Crater. Beyond in situ analysis, orbital instruments like CRISM can be used to detect hydrated oxychlorine salts using visible and near-infrared spectroscopy. Finally, multiple analytical methods in terrestrial laboratories can detect oxychlorine species using spectroscopy, chromatography, and diffraction techniques.

What are recent advances in our understanding of oxychlorine formation and destruction on Mars?

Early research focused on atmospheric production, but the low abundance of oxygen-bearing gases in the Martian atmosphere failed to explain the high concentrations of perchlorate on Mars. Recent studies have identified three additional formation mechanisms: plasma redox chemistry during electrostatic discharges, heterogeneous reactions between chlorine-bearing salts and energetic radiation, and aqueous processes. Among these, the irradiation of chloride minerals and ices by ultraviolet light or galactic cosmic rays is particularly effective on contemporary Mars because the thin atmosphere allows radiation to interact directly with the surface.

Regarding destruction, perchlorate salts can degrade into chlorate when exposed to galactic cosmic radiation. Furthermore, chlorate can be effectively consumed by dissolved ferrous iron or ferrous minerals at temperatures as low as 273 K. While perchlorate remains kinetically stable in the presence of most redox-sensitive materials, reactive intermediates like hypochlorite (ClO⁻) and ClO₂ gas readily react with organic compounds, leading to their mutual destruction.

Oxychlorine cycle on Mars. Credit: Mitra [2025], Figure 5

What does the presence of oxychlorine tell us about Mars’ history?

Oxychlorine species record the unique environmental history of Mars. Chlorine isotope data and detections in meteorites, such as Tissint and EETA79001, suggest an active oxychlorine cycle spanning 4 billion years, indicating that oxidizing fluids have been widespread throughout Martian history. Unlike Earth, where the nitrate-to-perchlorate ratio is high (~10⁴), the ratio on Mars is less than one, except for an inclusion in EETA79001. This discrepancy highlights fundamentally different geochemical fixation processes and nitrogen-chlorine cycles between the two planets.

Furthermore, chlorates are effective iron oxidants under Mars-relevant conditions and likely contribute to the formation of the planet’s ubiquitous ferric minerals. Additionally, as potent freezing point depressants, these salts may stabilize transient liquid brines even in modern equatorial regions. As a halogen-rich planet, Mars hosts a reactive surface chemistry where oxychlorine species play a substantially more dominant role than they do on Earth.
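The freezing point depression mentioned above can be illustrated with the ideal dilute-solution formula ΔTf = i·Kf·m. This is a minimal sketch, not a model from the article: the van 't Hoff factor and molality below are assumed example values, and the ideal formula badly underestimates the effect at brine concentrations, where real Martian perchlorate brines reach eutectic temperatures near 200 K.

```python
# Ideal (dilute-solution) freezing point depression: dTf = i * Kf * m.
# Illustrative only: concentrated perchlorate brines depart strongly from
# ideal behavior, reaching eutectics far below this formula's prediction.

KF_WATER = 1.86  # K kg/mol, cryoscopic constant of water


def freezing_point_c(molality, vant_hoff_i):
    """Freezing point (deg C) of an ideal aqueous solution."""
    return 0.0 - vant_hoff_i * KF_WATER * molality


# Assumed example: Mg(ClO4)2 dissociating into ~3 ions (i = 3) at 2 mol/kg
print(freezing_point_c(2.0, 3))  # about -11 deg C in the ideal limit
```

Even this ideal-limit estimate shows why salty water can stay liquid well below 0°C, which is the qualitative point behind transient brine stability.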

Is the presence of oxychlorine species helpful or harmful to human exploration and possible use of Mars?

Oxychlorine species can act as a potential hazard as well as a critical in situ resource for future human exploration. Perchlorate and chlorate salts can thermally decompose to release molecular oxygen (O₂), which could potentially be harvested for human consumption. Approximately 60 kg of Martian regolith, containing ~0.5 to 1 wt.% oxychlorine salt, could theoretically provide a single person's daily oxygen supply. On the other hand, perchlorate is a well-known contaminant in drinking water because it interferes with thyroid function and can cause goiter. Perchlorate in the Martian regolith could therefore contaminate drinking water or agricultural systems. Owing to their high chemical reactivity and oxidation potential, oxychlorine salts in the Martian regolith are likely to pose persistent cleaning challenges for habitats, suits, and equipment during extravehicular activity (EVA) on Mars. Additionally, agriculture in oxychlorine-laden regolith might contaminate plants and vegetables and could eventually lead to biomagnification in humans.
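The oxygen-supply estimate above can be sketched as back-of-envelope stoichiometry. Everything in this sketch is an assumption for illustration: the salt identity (magnesium perchlorate), its complete decomposition to MgCl₂ + 4 O₂, and the per-person oxygen requirement; published estimates vary depending on exactly these choices.

```python
# Back-of-envelope O2 yield from thermal decomposition of perchlorate in
# regolith. Assumed salt: Mg(ClO4)2 -> MgCl2 + 4 O2 on full decomposition.

M_MG, M_CL, M_O = 24.305, 35.453, 15.999      # atomic masses, g/mol
m_salt = M_MG + 2 * M_CL + 8 * M_O            # Mg(ClO4)2, ~223.2 g/mol
m_o2_per_mol = 4 * 2 * M_O                    # 4 mol O2 released per mol salt

o2_mass_fraction = m_o2_per_mol / m_salt      # ~0.57 of salt mass becomes O2


def o2_yield_kg(regolith_kg, salt_wt_percent):
    """O2 (kg) from fully decomposing the salt in a mass of regolith."""
    return regolith_kg * salt_wt_percent / 100.0 * o2_mass_fraction


DAILY_O2_NEED_KG = 0.84  # assumed per-person figure, order of magnitude only

for wt in (0.5, 1.0):
    regolith_needed = DAILY_O2_NEED_KG / (wt / 100.0 * o2_mass_fraction)
    print(f"{wt} wt.% salt: {o2_yield_kg(60, wt):.2f} kg O2 from 60 kg regolith; "
          f"~{regolith_needed:.0f} kg regolith per day of oxygen")
```

The result is sensitive to the assumed salt, its abundance, and the consumption figure, which is why such estimates should be read as order of magnitude rather than as engineering numbers.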

What are some of the remaining questions where additional research is needed?

While oxychlorine research has flourished over the last two decades, critical gaps remain regarding the spatial distribution and formation rates of distinct species. Recent detections of atmospheric HCl and electrostatic discharges necessitate a rigorous re-evaluation of Martian atmospheric chemistry. By leveraging emerging terrestrial models of chlorate formation, new pathways for Martian oxychlorine production can be proposed. Determining the relative contributions of atmospheric, plasma redox, and heterogeneous pathways is vital to understanding the evolution of the chlorine cycle and estimating equilibrium concentrations and residence times.

Furthermore, the chemical reactivity of transient intermediates, specifically ClO₂ gas and chlorite, remains poorly understood, particularly its implications for organic preservation at low temperatures. We also require precise thermodynamic data on complex salt mixtures to accurately predict brine stability. Ultimately, experimental validation of these salts as a viable in situ resource for oxygen and fuel is imperative for future human exploration and the interpretation of returned Martian samples.

—Kaushik Mitra (kaushik.mitra@utsa.edu; 0000-0001-9673-1032), The University of Texas at San Antonio, United States

Citation: Mitra, K. (2026), A double-edged sword: the global oxychlorine cycle on Mars, Eos, 107, https://doi.org/10.1029/2026EO265004. Published on 10 February 2026. This article does not represent the opinion of AGU, Eos, or any of its affiliates. It is solely the opinion of the author(s). Text © 2026. The authors. CC BY-NC-ND 3.0
Except where otherwise noted, images are subject to copyright. Any reuse without express permission from the copyright owner is prohibited.

The AMOC of the Ice Age Was Warmer Than Once Thought

Tue, 02/10/2026 - 14:07

A major part of the Atlantic Meridional Overturning Circulation (AMOC), a large-scale ocean circulation pattern, was warmer during the peak of Earth’s last ice age than previously thought, according to a new study published in Nature.

The study’s results contrast with those from previous studies hinting that the North Atlantic was relatively cold and that AMOC was weaker when faced with major climate stress during the Last Glacial Maximum (LGM), about 19,000–23,000 years ago. 

The findings add confidence to models that scientists use to project how AMOC may change in the future as the climate warms, said Jack Wharton, a paleoceanographer at University College London and lead author of the new study.

Deepwater Data

The circulation of AMOC, now and in Earth’s past, requires the formation of dense, salty North Atlantic Deep Water (NADW), which brings oxygen to the deep ocean as it sinks and helps to regulate Earth’s climate. Scientists frequently use the climatic conditions of AMOC during the LGM as a test to determine how well climate models—like those used in major global climate assessments—simulate Earth systems. 

However, prior to the new study, few data points existed to validate models of the state of NADW during the LGM. In 2002, scientists analyzed pore fluid in ocean bottom sediment cores from four sites in the North Atlantic, South Pacific, and Southern Oceans, with results suggesting that deep waters in all three basins were homogeneously cold.

Researchers sampled 16 sediment cores from across the North Atlantic to deduce how waters may have circulated during the peak of the last ice age. Credit: Jack Wharton, UCL

“The deep-ocean temperature constraints during the [Last Glacial Maximum] were pretty few and far between,” Wharton said. And to him, the 2002 results were counterintuitive. It seemed more likely, he said, that the North Atlantic during the peak of the last ice age would have remained mobile and that winds and cold air would have cooled and evaporated surface waters, making them saltier, denser, and more prone to create NADW and spur circulation.

Wharton and his colleagues evaluated 16 sediment cores collected across the North Atlantic. First, they measured the ratio of trace magnesium and calcium in microscopic shells of microorganisms called benthic foraminifera. This ratio relates to the temperature at which the microorganisms lived. The results showed much warmer North Atlantic Deep Water than the 2002 study indicated. 
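The Mg/Ca paleothermometer described above typically takes the form of an exponential calibration, Mg/Ca = B·exp(A·T), which is inverted to recover temperature from a measured ratio. The constants below are illustrative placeholders in the range of published benthic foraminiferal calibrations, not the values used in the study.

```python
import math

# Illustrative Mg/Ca-temperature calibration: Mg/Ca = B * exp(A * T).
# A and B are assumed placeholder constants, not the study's calibration.
A = 0.11  # per degree C (assumed)
B = 0.87  # mmol/mol at 0 degrees C (assumed)


def mg_ca_from_temperature(t_celsius):
    """Forward calibration: Mg/Ca ratio (mmol/mol) at a given temperature."""
    return B * math.exp(A * t_celsius)


def temperature_from_mg_ca(mg_ca_mmol_mol):
    """Invert the calibration: T = ln(Mg/Ca / B) / A."""
    return math.log(mg_ca_mmol_mol / B) / A


# Round trip: a 2 degree C bottom water temperature maps to a ratio and back
print(temperature_from_mg_ca(mg_ca_from_temperature(2.0)))
```

The inversion is also why the caveat in the next paragraph matters: if ocean carbonate chemistry shifts B or A, a measured ratio maps to the wrong temperature, motivating the team's independent clumped isotope check.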

Wharton felt cautious, especially because magnesium to calcium ratios are sometimes affected by ocean chemistry as well as by temperature: “This is quite new,” he remembered thinking. “What kind of good science could help show that this is believable?”

The team, this time led by Emilia Kozikowska, a doctoral candidate at University College London, verified the initial results using a method called clumped isotope analysis, which measures how often heavy carbon and oxygen isotopes bond to each other within the shells’ carbonate, a proxy for the temperature at which the mineral formed. The team basically “did the whole study again, but using a different method,” Wharton said. The results aligned.

Ratios of magnesium to calcium contained in benthic foraminifera, tiny single-celled organisms living in marine sediment, offer insights into the temperature of North Atlantic waters thousands of years ago. Credit: Jack Wharton and Mark Stanley

Analyzing multiple temperature proxies in multiple cores from a broad array of locations made the research “a really thorough and well-done study,” said Jean Lynch-Stieglitz, a paleoceanographer at the Georgia Institute of Technology who was not part of the research team but has worked closely with one of its authors. 

The results, in conjunction with previous salinity data from the same cores, allowed the team to deduce how the North Atlantic likely moved during the LGM. “We were able to infer that the circulation was still active,” Wharton said. 

Modeling AMOC

The findings give scientists an additional benchmark with which to test the accuracy of climate models, Lynch-Stieglitz said. “LGM circulation is a good target, and the more that we can refine the benchmarks…that’s a really good thing,” she said. “This is another really nice dataset that can be used to better assess what the Last Glacial Maximum circulation was really doing.”

In many widely used climate models, North Atlantic circulation during the LGM looks consistent with the view provided by Wharton’s team’s results, indicating that NADW was forming somehow during the LGM, Lynch-Stieglitz said. However, no model can completely explain all of the proxy data related to the LGM’s climatic conditions.

“Our data [are] helping show that maybe AMOC was sustained,” which helps reconcile climate models with proxy data, Wharton said. Lynch-Stieglitz added that a perhaps equally important contribution of the new study is that it removes the sometimes difficult-to-simulate benchmark of very cold NADW during the LGM that was suggested in research in the early 2000s. “We don’t have to make the whole ocean super cold [in models],” she said.

Some climate models suggest that modern-day climate change may slow AMOC, which could trigger a severe cooling of Europe, change global precipitation patterns, and lead to additional Earth system chaos. However, ocean circulation is highly complex, and models differ in their ability to project future changes. Still, “if they could do a great job with LGM AMOC, then we would have a lot more confidence in their ability to project a future AMOC,” Lynch-Stieglitz said.

Wharton said the results also suggest that another question scientists have been investigating about the last ice age—how and why it ended—may be worth revisiting. Many hypotheses rely on North Atlantic waters being very close to freezing during the LGM, he said. “By us suggesting that maybe they weren’t so close to freezing…that sort of necessitates that people might need to rethink the hypotheses.”

—Grace van Deelen (@gvd.bsky.social), Staff Writer

Citation: van Deelen, G. (2026), The AMOC of the ice age was warmer than once thought, Eos, 107, https://doi.org/10.1029/2026EO260053. Published on 10 February 2026. Text © 2026. AGU. CC BY-NC-ND 3.0
Except where otherwise noted, images are subject to copyright. Any reuse without express permission from the copyright owner is prohibited.

Why Are Thunderstorms More Intense Over Land Than Ocean?

Mon, 02/09/2026 - 19:08
Editors’ Highlights are summaries of recent papers by AGU’s journal editors. Source: Geophysical Research Letters 

Thunderstorms, produced when air rises through the depth of the troposphere, are notoriously difficult to represent in global climate models. Whether air parcels have the energy to rise does not depend solely on their own characteristics, notably their “Convective Available Potential Energy” (CAPE); it also depends on the state of the environment around them. Specifically, the intensity that storms reach, which translates into the potential to produce hail, lightning, or damaging winds, depends on how much surrounding air is “entrained” from the sides as the air rises.

Peters et al. [2026] propose a new formulation of CAPE, which they call ECAPE (Entraining CAPE), that incorporates the effect of entrainment from first principles. To verify their theory, they first show that it predicts the geographical distribution of thunderstorm hotspots, such as the U.S. Great Plains, the Pampas of South America, and the African Sahel. They then use it to explain why thunderstorms are more intense over land than over oceans: because of a higher lifting condensation level (LCL) over land, that is, a higher bar that rising air has to clear before it can rise all the way to the top. In addition to solving this longstanding issue, the very fine resolution of the analysis (100 m, 1 hour) provides an invaluable benchmark for the current generation of kilometer-scale global models being developed.
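Undiluted CAPE, the quantity ECAPE generalizes, is the vertical integral of parcel buoyancy: CAPE = ∫ g (Tv,parcel − Tv,env)/Tv,env dz over positively buoyant layers. The sketch below computes this from synthetic profiles; it omits the entrainment dilution that defines ECAPE in the paper and uses made-up temperature profiles purely for illustration.

```python
import numpy as np

# Minimal undiluted-CAPE sketch: integrate g * (Tv_parcel - Tv_env) / Tv_env
# over height wherever the parcel is warmer (buoyant) than its environment.
# ECAPE would additionally dilute Tv_parcel by entrainment; omitted here.

g = 9.81  # gravitational acceleration, m/s^2


def cape(z, tv_parcel, tv_env):
    """CAPE (J/kg) by trapezoidal integration of positive buoyancy."""
    buoyancy = g * (tv_parcel - tv_env) / tv_env          # m/s^2
    positive = np.clip(buoyancy, 0.0, None)               # buoyant layers only
    return float(np.sum(0.5 * (positive[1:] + positive[:-1]) * np.diff(z)))


# Synthetic profiles (assumed, not from the paper): a 6.5 K/km environment
# and a parcel 2 K warmer between roughly 1 and 10 km altitude.
z = np.linspace(0.0, 12000.0, 121)                        # height, m
tv_env = 300.0 - 6.5e-3 * z                               # K
tv_parcel = tv_env + 2.0 * ((z > 1000.0) & (z < 10000.0))

cape_val = cape(z, tv_parcel, tv_env)
print(f"CAPE ~ {cape_val:.0f} J/kg")
```

A few hundred J/kg, as here, corresponds to modest instability; severe-storm environments often exceed 2,000 J/kg. Entrainment reduces the effective parcel buoyancy, which is why ECAPE is systematically smaller than CAPE.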

Citation: Peters, J. M., Chavas, D. R., Su, C.-Y., Murillo, E. M., & Mullendore, G. L. (2026). A unified theory for the global thunderstorm distribution and land–sea contrast. Geophysical Research Letters, 53, e2025GL120252. https://doi.org/10.1029/2025GL120252   

—Alessandra Giannini, Editor, Geophysical Research Letters

Text © 2026. The authors. CC BY-NC-ND 3.0
Except where otherwise noted, images are subject to copyright. Any reuse without express permission from the copyright owner is prohibited.

Coastal Wetlands Restoration, Carbon, and the Hidden Role of Groundwater

Mon, 02/09/2026 - 18:30
Editors’ Vox is a blog from AGU’s Publications Department.

Coastal (tidal) wetlands are low-lying ecosystems found where land meets the sea, including mangroves, saltmarshes, and seagrass meadows. They are shaped by tides and support a mix of marine and terrestrial processes. However, agricultural and urban development over the past century has drained, modified, or degraded many of these coastal wetland ecosystems, which now require restoration efforts.

A new article in Reviews of Geophysics explores how subsurface hydrology and biogeochemical processes influence carbon dynamics in coastal wetlands, with a particular focus on restoration. Here, we asked the lead author to give an overview of why coastal wetlands matter, how restoration techniques are being implemented, and where key opportunities lie for future research.

Why are coastal wetlands important?

Coastal wetlands provide many benefits to both nature and people. They protect shorelines from storms and erosion, support fisheries and biodiversity, improve water quality by filtering nutrients and pollutants, and store large amounts of carbon in their soils. Despite covering a relatively small area globally, they punch well above their weight in terms of ecosystem services, making them critical environments for climate regulation, coastal protection, and food security.

What role do coastal wetlands play in the global carbon cycle?

Coastal wetlands are among the most effective natural systems for capturing and storing carbon. This stored carbon is often referred to as “blue carbon.” Vegetation in these ecosystems, such as mangroves, saltmarshes, and seagrasses, takes up carbon dioxide from the atmosphere through photosynthesis and transfers it to sediments through roots. These plants can store carbon up to 40 times faster than terrestrial forests. Because coastal wetland sediments are often waterlogged and low in oxygen, this carbon can be stored for centuries to millennia. In addition to surface processes, groundwater plays an important but less visible role by transporting dissolved carbon into and out of wetlands. Understanding these hidden subsurface pathways is essential for accurately estimating how much carbon wetlands store and how they respond to environmental change.

How has land use impacted coastal wetlands over the past century?

Over the past century, coastal wetlands have been extensively altered or lost due to human activities. Large areas have been drained, filled, or isolated from tides to support agriculture, urban development, ports, and flood protection infrastructure. These changes disrupt natural water flow, reduce plant productivity, and expose carbon-rich soils to oxygen, which can release stored carbon back into the atmosphere as greenhouse gases. In many regions, groundwater flow paths have also been modified by drainage systems and groundwater extraction, further altering wetland function. As a result, many coastal wetlands have shifted from long-term carbon sinks to sources of emissions.

How could restoring wetlands help to combat climate change?

Restoring coastal wetlands can help combat climate change by re-establishing natural processes that promote long-term carbon storage. When tidal flow and natural hydrology are restored, wetland plants can recover, sediment accumulation increases, and carbon burial resumes. Importantly, restoration can also reconnect groundwater and surface water systems, helping stabilize (redox) conditions that favor carbon preservation in sediments. While wetlands alone cannot solve climate change, they offer a powerful nature-based solution that delivers climate mitigation alongside co-benefits such as coastal protection, biodiversity recovery, and improved water quality. Getting restoration right is key to ensuring these systems act as carbon sinks rather than sources.

What are the main strategies being deployed to restore coastal wetlands?

Common restoration strategies include removing or modifying levees and tidal barriers, reconnecting wetlands to natural tidal regimes, re-establishing natural vegetation through improving the hydrology of the site, and managing sediment supply. Increasingly, restoration projects are recognizing the importance of subsurface processes, such as groundwater flow and salinity dynamics, which strongly influence vegetation health and carbon cycling. Successful restoration requires site-specific designs that consider hydrology, geomorphology, and long-term sea-level rise.

What are some remaining questions where additional research efforts are needed?

Despite growing interest in wetland restoration, major knowledge gaps remain. One key challenge is quantifying how groundwater processes influence carbon storage and greenhouse gas emissions across different wetland types and climates. We also need better long-term measurements to assess whether restored wetlands truly deliver sustained carbon benefits under rising sea levels and increasing climate variability. Finally, integrating hydrology, biogeochemistry, and ecology into predictive models remains difficult but essential. Addressing these gaps will improve carbon accounting, guide smarter restoration investments, and strengthen the role of coastal wetlands in climate mitigation strategies.

—Mahmood Sadat-Noori (mahmood.sadatnoori@jcu.edu.au; 0000-0002-6253-5874), James Cook University: Townsville, Australia

Editor’s Note: It is the policy of AGU Publications to invite the authors of articles published in Reviews of Geophysics to write a summary for Eos Editors’ Vox.

Citation: Sadat-Noori, M. (2026), Coastal wetlands restoration, carbon, and the hidden role of groundwater, Eos, 107, https://doi.org/10.1029/2026EO265003. Published on 9 February 2026. This article does not represent the opinion of AGU, Eos, or any of its affiliates. It is solely the opinion of the author(s). Text © 2026. The authors. CC BY-NC-ND 3.0
Except where otherwise noted, images are subject to copyright. Any reuse without express permission from the copyright owner is prohibited.

A Road Map to Truly Sustainable Water Systems in Space

Mon, 02/09/2026 - 14:21
Source: Water Resources Research

If humans want to live in space, whether on spacecraft or the surface of Mars, one of the first problems to solve is that of water for drinking, hygiene, and life-sustaining plants. Even bringing water to the International Space Station (ISS) in low Earth orbit costs on the order of tens of thousands of dollars. Thus, finding efficient, durable, and trustworthy ways to source and reuse water in space is a clear necessity for long-term habitation there.

Current systems, like the Environmental Control and Life Support System (ECLSS) on the ISS, offer a blueprint for closed-loop water reclamation, but they need improvements for future applications. Meanwhile, recent technological and scientific advances are pointing to new ways of finding, purifying, and managing water resources in demanding environments. In a new review, Olawade et al. provide an overview of the current state of extraterrestrial water management, as well as of the field’s prospects and challenges.

Water systems in space need to be closed loop, highly efficient, and durable, all while having low energy requirements, the authors say. Currently, the ECLSS is prohibitively energy intensive, and it may not be efficient enough for use on longer missions. Future suggested approaches for filtration and recycling include photocatalysis to purify water via light, bioreactors to filter urine and wastewater, ion-exchange systems to remove dissolved salts and heavy metals from extracted water, and ultraviolet or ozone disinfection to kill pathogens. Each comes with its own pros and cons: Microbial fuel cells in bioreactors could produce electricity, for example, but photocatalytic purification has low energy demands.

Sourcing water on places like the Moon or Mars would require either extracting water bound up in regolith or drilling into ice bodies. Sufficiently powering water reclamation systems is another concern, making energy-efficient systems a priority. Water system durability is also important, both to protect inhabitants and to reduce the need for onerous maintenance work.

Emerging technologies could meet many of these challenges. The authors point to advances in nanotechnology, which could be used to create highly tailored membranes for filtration that are more effective and resistant to fouling, and to the use of artificial intelligence (AI) to autonomously manage water systems, as two areas of promise. (Water Resources Research, https://doi.org/10.1029/2025WR041273, 2026)

—Nathaniel Scharping (@nathanielscharp), Science Writer

Citation: Scharping, N. (2026), A road map to truly sustainable water systems in space, Eos, 107, https://doi.org/10.1029/2026EO260023. Published on 9 February 2026. Text © 2026. AGU. CC BY-NC-ND 3.0
Except where otherwise noted, images are subject to copyright. Any reuse without express permission from the copyright owner is prohibited.

Graduate Students’ NSF Fellowship Applications Are Being “Returned Without Review”

Fri, 02/06/2026 - 20:45
Research & Developments is a blog for brief updates that provide context for the flurry of news regarding law and policy changes that impact science and scientists today.

Students who have applied for the Graduate Research Fellowship Program (GRFP) from the National Science Foundation (NSF) have had their applications returned without review—even though their proposed research appears to fall squarely within the fields of study outlined in the program solicitation.

In response, a group of scientists created a template letter for students to share concerns with their representatives.

GRFP provides 3 years of financial support over a 5-year fellowship program for outstanding graduate students pursuing full-time degrees in science, technology, engineering, or math (STEM), including STEM education. The program solicitation, posted in September 2025, lists the following fields as eligible.

  1. Chemistry
  2. Computer and Information Sciences and Engineering
  3. Engineering
  4. Geosciences
  5. Life Sciences
  6. Materials Research
  7. Mathematical Sciences
  8. Physics & Astronomy
  9. Psychology
  10. Social, Behavioral, and Economic Sciences
  11. STEM Education and Learning Research

However, at least dozens of applicants in those fields have received emails, obtained by Eos, that stated that their proposals were ineligible.


“The proposed research does not meet NSF GRFP eligibility requirements. Applicants must select research in eligible STEM or STEM education fields,” the email read.

Neuroscience, physiology, ecology/biogeochemistry, and chemistry of life sciences are among the proposal research topics that have been returned without review (RWR), according to posts on Reddit and Bluesky.

One Redditor described the RWR as “soul-crushing.” “The dropdown menu part is what gets me,” they wrote, referring to how they selected a category from a list within the application. “What do you mean I am ineligible in a category that YOU provided?!”

Karolina Heyduk, an ecologist and evolutionary biologist at the University of Connecticut, shared on Bluesky that one of her student’s applications was rejected. Heyduk told Eos over email that she has no idea why, as the research—on photosynthesis in bromeliads—was “clearly within stated fields that are eligible, and had no agriculture, health, or policy angles.”

“The GRFP is an opportunity for new scientists to propose their best ideas and get their first shot at external funding. While not everyone will be funded, there is some expectation of a fair and transparent review process, and that doesn’t seem to be happening this year. For new grad students, or those applying this year, the outright rejection without a clear reason is incredibly discouraging,” she told Eos.

Rejected Appeals

Some applicants have appealed the decision, after having advisers look over their applications, and have received responses, also obtained by Eos, affirming that the decision is final.

“As your application was thoroughly screened based on these eligibility criteria, the RWR determination will stand and there will be no further consideration of your application,” the email text read.

Last March, the New York Times compiled, via government memos, agency guidance, and other documents, a list of words that the Trump administration indicated should be avoided or limited. The list included “climate science,” “diversity,” “political,” and “women.”

On Reddit threads, applicants who received RWR are speculating over whether their applications may have been automatically rejected for the use of so-called banned words. One student used the word “underrepresented” in a personal statement, to reference a program to which they had previously been accepted. Others, applying for neuroscience fellowships that involved studies with rats, wondered whether the word “ethanol” had been flagged. Another said they had tried to avoid using banned words, but that it was “unavoidable.”

“My project is about bears and ‘black’ is a trigger word,” they wrote. “Insane.”

Reaching out to Representatives

The group behind the template letter for students includes Noam Ross, who is among the creators of Grant Witness, a project to track the termination of scientific grants under the Trump administration. The letter notes that after NSF awarded significantly fewer GRFP awards than usual in the spring, it released this year’s application guidance more than a month later than normal, leaving students far less time to complete their applications and leaving others ineligible to apply.

“I request that you contact the NSF administrator to ask why eligible GRFP applications are being rejected without review and to ask them to remedy the situation quickly, as review panels are convening imminently,” the letter reads. “We cannot allow the continued degradation of our scientific workforce, and [the cutting] off the opportunities for so many future scientists.”

—Emily Gardner (@emfurd.bsky.social), Associate Editor

These updates are made possible through information from the scientific community. Do you have a story about how changes in law or policy are affecting scientists or research? Send us a tip at eos@agu.org. Text © 2026. AGU. CC BY-NC-ND 3.0
Except where otherwise noted, images are subject to copyright. Any reuse without express permission from the copyright owner is prohibited.

From Measurements to Solar Wind Model Initial Conditions

Fri, 02/06/2026 - 19:39
Editors’ Highlights are summaries of recent papers by AGU’s journal editors. Source: Space Weather 

The solar wind is a continuous stream of charged particles released from the Sun into the solar system. It plays a major role in space weather, which can impact satellites, astronauts, and power systems on Earth. Forecasting the solar wind often depends on detailed maps of the Sun’s magnetic field and complex models of the solar corona, which introduce uncertainty and are not always available.

Owens et al. [2026] present a new approach that uses solar wind measurements near Earth to reconstruct solar wind conditions closer to the Sun. By tracing the solar wind back towards its source, the method provides realistic starting conditions for solar wind models without relying on magnetic maps. The authors show that this approach can produce realistic solar wind conditions while reducing assumptions and sources of error. This simpler set-up allows the method to be applied consistently across different modelling frameworks.

This work represents an important step towards more robust and accessible solar wind modeling. In the long term, it can help improve space weather forecasts and our ability to protect technology and infrastructure in space and on Earth.

Citation: Owens, M. J., Barnard, L. A., Turner, H., Gyeltshen, D., Edward-Inatimi, N., O’Donoghue, J., et al. (2026). Driving dynamical inner-heliosphere models with in situ solar wind observations. Space Weather, 24, e2025SW004675. https://doi.org/10.1029/2025SW004675

—Tanja Amerstorfer, Associate Editor, Space Weather

Text © 2026. The authors. CC BY-NC-ND 3.0
Except where otherwise noted, images are subject to copyright. Any reuse without express permission from the copyright owner is prohibited.

How the Spring Thaw Influences Arsenic Levels in Lakes

Fri, 02/06/2026 - 14:08
Source: Journal of Geophysical Research: Biogeosciences

From 1948 to 1953, a gold mine called Giant Mine released about 5 tons of arsenic trioxide per day into the environment around Yellowknife, Northwest Territories, Canada. Emissions declined from the 1950s until the mine closed in 2004, but the surrounding landscape remains highly contaminated with arsenic.

Little et al. recently studied how the spring thaw influences arsenic levels in four Yellowknife area lakes and how phytoplankton populations alter arsenic biogeochemistry during this transition period. The researchers sampled each lake twice per year in 2022 and 2023: once in late April, before the beginning of the spring thaw period, and once 7–10 days later, when the thaw had begun but the ice was still thick enough to safely walk on, making sample collection feasible.

Sammy’s, Handle, Frame, and Jackfish lakes spanned a gradient of arsenic contamination levels when measured before the thaw in 2022—from 5.5 micrograms per liter in Sammy’s Lake to 350 micrograms per liter in Frame Lake. In Handle, Frame, and Jackfish lakes, arsenic levels went down as the spring thaw began, but Sammy’s Lake followed the opposite trend. The difference likely lies in how much arsenic the lakes contained to begin with. With Sammy’s Lake starting at such a low level, arsenic from meltwater exacerbated the contamination. In the other three lakes, the concentration of arsenic in meltwater was lower than or similar to the starting concentration in the lake, so meltwater diluted the contamination.
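The dilution-versus-exacerbation logic above is essentially a two-end-member mixing calculation. The sketch below is illustrative only; the meltwater arsenic concentration and volume fraction are hypothetical values, not measurements from the study.

```python
def mixed_concentration(c_lake, v_lake, c_melt, v_melt):
    """Lake concentration after meltwater mixes in (volume-weighted average)."""
    return (c_lake * v_lake + c_melt * v_melt) / (v_lake + v_melt)

# Hypothetical meltwater carrying 20 micrograms per liter of arsenic,
# amounting to 20% of lake volume:
frame = mixed_concentration(350.0, 1.0, 20.0, 0.2)   # dilutes the contaminated lake
sammys = mixed_concentration(5.5, 1.0, 20.0, 0.2)    # raises the cleaner lake
```

With the same meltwater, the heavily contaminated lake is diluted while the initially cleaner lake becomes more contaminated, matching the pattern the researchers observed.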

Arsenic exists mostly in two oxidation states: arsenite and the less toxic, less mobile arsenate. Because arsenate is more stable under oxic conditions, the influx of highly oxygenated snow and ice meltwater during the spring thaw period shifted the predominant form of arsenic in the lakes toward arsenate, as expected.

The winter of 2022 was significantly colder than that of 2023, with the difference reflected in the thickness of the ice: 76–130 centimeters in 2022 compared with 65–72 centimeters in 2023. The warmer winter did not alter the final arsenic concentration or speciation in the water at the end of the thaw. However, plankton communities in the warmer year contained more organisms in mature life stages and more taxa that are competitive in warmer conditions. This result is important, the authors say, because late winter and spring thaw plankton community dynamics set the stage for the following open-water season. (Journal of Geophysical Research: Biogeosciences, https://doi.org/10.1029/2025JG009231, 2026)

—Saima May Sidik (@saimamay.bsky.social), Science Writer

Citation: Sidik, S. M. (2026), How the spring thaw influences arsenic levels in lakes, Eos, 107, https://doi.org/10.1029/2026EO260051. Published on 6 February 2026. Text © 2026. AGU. CC BY-NC-ND 3.0
Except where otherwise noted, images are subject to copyright. Any reuse without express permission from the copyright owner is prohibited.

Primordial Impact May Explain Why the Moon Is Asymmetrical

Fri, 02/06/2026 - 14:07

Until 1959, nobody on Earth had ever seen our Moon’s farside. Thanks to gravitational tidal forces, the lunar nearside always faces us, so it was surprising for everyone to learn that the other half of the Moon looks strikingly different. Not only that, but subsequent observations showed the lunar farside has a thicker crust than the nearside, and its rocks have different compositions.

And nobody knows exactly why.

However, some scientists think the solution to the mystery involves a site known as the South Pole–Aitken (SPA) basin, which was created by an asteroid impact early in the solar system’s history. A new study published in the Proceedings of the National Academy of Sciences of the United States of America draws from surface samples returned by the Chang’e-6 (pronounced CHAHNG-ua) robotic probe. The samples contain minuscule differences in chemical and isotopic composition that indicate the ancient impact may have vaporized part of the Moon’s interior, enough to account for the differences between the near- and farsides.

“Chang’e-6 currently provides the only samples returned from the lunar far side,” said planetary geochemist Heng-Ci Tian (田恒次) of the Institute of Geology and Geophysics at the Chinese Academy of Sciences, Beijing, in an email to Eos. Comparing these samples to those collected by previous Chang’e probes and the Apollo missions, Tian and his colleagues determined the impact did more than just make a big crater: It reshuffled the geological components of the lunar mantle. In particular, they looked at isotopes of moderately volatile substances such as potassium that vaporize at relatively low temperatures, rather than more abundant, low-mass elements like hydrogen and oxygen.

If this hypothesis holds up under further scrutiny, it would not only tell us about our Moon’s history and origins but help us understand planetary evolution in general.

As with Earth, the Moon’s mantle—the relatively plastic layer of minerals between the crust and core—is the source of the magma that once powered volcanoes. The nearside is marked by ancient volcanic flows, known as maria (pronounced MAR-ee-uh; singular mare) or “seas,” which are largely absent on the farside.

“Our study reveals that the SPA basin impact caused [evaporation] of moderately volatile elements in the lunar mantle,” Tian said. “The loss of these volatile elements likely suppressed magma generation and volcanic eruptions on the far side.”

If this hypothesis holds up under further scrutiny, it would not only tell us about our Moon’s history and origins but help us understand planetary evolution in general. After all, Earth’s surface is constantly renewed by plate tectonics and hydrologic processes, but other worlds such as Mars and Venus are less dynamic, and much of what is going on inside is still mysterious.

“There’s so many uncertainties as to really what happened [when SPA formed] and how it would’ve affected the interior,” said Kelsey Prissel, a planetary scientist at Purdue University in Indiana who was not involved in the study. She pointed out that different geophysical processes like crystallization and evaporation lead to different populations of isotopes. The new study, which shows a larger fraction of certain isotopes in the SPA region than on the lunar nearside, therefore provides strong evidence that the farside mantle was partially vaporized long ago.

“Previous studies have shown that impacts alter the composition and structure of the lunar surface and crust, but our study provides the first evidence that large impacts play an important role in planetary mantle evolution,” Tian said.

It Came from the Farside!

The SPA basin is one of the biggest impact craters in the solar system, so huge it doesn’t even look like a crater: It stretches all the way from the lunar South Pole to the Aitken crater (hence the name) at approximately 16°S latitude. Researchers determined it formed about 4.3 billion years ago—not long, in cosmic terms, after the Moon was born. Interestingly, it is also almost directly antipodal to a cluster of volcanoes on the lunar nearside, which suggested to some scientists the features might be related.

However, the farside is harder to study. Humanity’s first view came only in 1959 with the uncrewed Soviet Luna 3 probe, and none of the Apollo missions landed there. Robotic spacecraft have since mapped the entire Moon in detail, but no craft touched down on the farside until Chang’e-4 achieved humanity’s first farside landing in 2019. Chang’e-6 landed in the SPA basin on 1 June 2024 and returned the first (and so far only) samples from the lunar farside to Earth on 25 June.

“[If] you looked at this data 20 years ago, [the samples] would all look the same.”

The next phase in the study was comparing the chemical makeup and isotopes in these rocks to their nearside counterparts collected by the Apollo astronauts and the Chang’e-5 mission. In particular, Tian and his colleagues looked at potassium (K), rare-earth elements, and phosphorus, collectively known as KREEP, which are possibly related to mantle composition. As the researchers noted in their paper, if the SPA impact vaporized materials in the Moon’s mantle, it might also have redistributed KREEP-rich minerals from the farside to the nearside. Testing this hypothesis required doing very sensitive laboratory measurements that weren’t possible in the Apollo era.

“Having the far side samples is brand-new no matter what,” Prissel said. “But looking at these really fine differences between isotopes is something we haven’t been able to do forever. [If] you looked at this data 20 years ago, [the samples] would all look the same.”

Prissel’s point highlights the interdependency of different branches of planetary science: Understanding the Moon’s interior requires studying samples, performing laboratory experiments on them (or on analog materials), and running theoretical models. These new results will inform the next set of experiments and modeling, as well as guide future lunar sample return missions.

“We plan to analyze additional volatile isotopes to verify our conclusions,” Tian said. “We will combine these with numerical modeling to further evaluate the global effects of the SPA impact.”

—Matthew R. Francis (@BowlerHatScience.org), Science Writer

Citation: Francis, M. R. (2026), Primordial impact may explain why the Moon is asymmetrical, Eos, 107, https://doi.org/10.1029/2026EO260050. Published on 6 February 2026. Text © 2026. The authors. CC BY-NC-ND 3.0
Except where otherwise noted, images are subject to copyright. Any reuse without express permission from the copyright owner is prohibited.

Satellite imagery of the 24 January 2026 landslide on Gunung Burangrang in Indonesia

Fri, 02/06/2026 - 09:47

Imagery is now available that shows the aftermath of the 3.1 km long landslide that killed about 90 people in West Bandung.

On 24 January 2026, a major landslide occurred on the flanks of Gunung Burangrang (Mount Burangrang) in West Bandung, Indonesia. The search has been long and painstaking, but it is thought that 92 people were killed. There were 23 reported survivors.

AFA Channel has posted some very good drone footage of the landslide to YouTube (excuse the dramatic music and the incorrect headline):-

Planet Labs have also captured a good satellite image of the site. I have overlain this onto the Google Earth DEM:-

Satellite image of the 24 January 2026 landslide on Gunung Burangrang in Indonesia. Image copyright Planet Labs, used with permission.

By way of comparison, this is the site prior to the landslide (image from February 2025):-

Google Earth image of the site of the 24 January 2026 landslide on Gunung Burangrang in Indonesia.

And here is a slider to allow a comparison between the images:-

This appears to have been a deep-seated, probably structurally-controlled failure on high, very steep slopes of Gunung Burangrang, which has then transitioned into a channelised flow. There is considerable entrainment along the track. The landslide is about 3.1 km long and up to 150 m wide.

There has been considerable discussion in Indonesia about the role of logging and mining in the causation of these large landslide events, but in this case neither are apparent in the source area. Institut Teknologi Bandung has a nice article about causation of this landslide, which notes that the underlying geology is volcanic. Loyal readers of this blog will recognise the frequency with which intense rainfall triggers devastating landslides in volcanic materials.

Acknowledgement

Many thanks to the wonderful people at Planet Labs for providing access to the satellite imagery.

Return to The Landslide Blog homepage Text © 2026. The authors. CC BY-NC-ND 3.0
Except where otherwise noted, images are subject to copyright. Any reuse without express permission from the copyright owner is prohibited.

Meet the Mysterious Electrides

Thu, 02/05/2026 - 14:22

This story was originally published by Knowable Magazine.

For close to a century, geoscientists have pondered a mystery: Where did Earth’s lighter elements go? Compared to amounts in the Sun and in some meteorites, Earth has less hydrogen, carbon, nitrogen and sulfur, as well as noble gases like helium—in some cases, more than 99 percent less.

Some of the disparity is explained by losses to the solar system as our planet formed. But researchers have long suspected that something else was going on too.

Recently, a team of scientists reported a possible explanation—that the elements are hiding deep in the solid inner core of Earth. At its super-high pressure—360 gigapascals, 3.6 million times atmospheric pressure—the iron there behaves strangely, becoming an electride: a little-known form of the metal that can suck up lighter elements.

Electrides, in more ways than one, are having their moment.

Study coauthor Duck Young Kim, a solid-state physicist at the Center for High Pressure Science & Technology Advanced Research in Shanghai, says the absorption of these light elements may have happened gradually over a couple of billion years—and may still be going on today. It would explain why the movement of seismic waves traveling through Earth suggests an inner core density that is 5 percent to 8 percent lower than expected were it metal alone.

Electrides, in more ways than one, are having their moment. Not only might they help solve a planetary mystery, they can now be made at room temperature and pressure from an array of elements. And since all electrides contain a source of reactive electrons that are easily donated to other molecules, they make ideal catalysts and other sorts of agents that help to propel challenging reactions.

One electride is already in use to catalyze the production of ammonia, a key component of fertilizer; its Japanese developers claim the process uses 20 percent less energy than traditional ammonia manufacture. Chemists, meanwhile, are discovering new electrides that could lead to cheaper and greener methods of producing pharmaceuticals.

Today’s challenge is to find more of these intriguing materials and to understand the chemical rules that govern when they form.

Electrides at High Pressure

Most solids are made from ordered lattices of atoms, but electrides are different. Their lattices have little pockets where electrons sit on their own.

Normal metals have electrons that are not stuck to one atom. These are the outer, or valence, electrons that are free to move between atoms, forming what is often referred to as a delocalized “sea of electrons.” It explains why metals conduct electricity.

The outer electrons of electrides no longer orbit a particular atom either, but they can’t move freely. Instead, they become trapped at sites between atoms that are called non-nuclear attractors. This gives the materials unique properties. In the case of the iron in Earth’s core, the negative electron charges at non-nuclear attractors—formed at super-high pressures 3,000 times that at the bottom of the deepest ocean—stabilize lighter elements. The elements would diffuse into the metal, explaining where they disappeared to.

In an experiment, scientists simulated the movement of hydrogen atoms (pink) into the lattice structure of iron at a temperature of 3,000 kelvins (2,727°C), at pressures of 100 gigapascals (GPa) and 300 GPa. At the higher pressure (right), an electride forms, as indicated by the altered distribution of the hydrogen observed within the iron lattice—these would represent the negatively charged non-nuclear attractor sites to which hydrogen atoms bond, forming hydride ions. Duck Young Kim and his coauthors think that the altered hydrogen distribution at higher pressure in these simulations is good evidence that an electride with non-nuclear attractor sites forms within the iron of Earth’s core. Credit: Knowable Magazine, adapted from I. Park et al./Advanced Science 2024

The first metal found to form an electride at high pressure was sodium, reported in 2009. At a pressure of 200 gigapascals (2 million times greater than atmospheric pressure) it transforms from a shiny, reflective, conducting metal into a transparent glassy, insulating material. This finding was “very weird,” says Stefano Racioppi, a computational and theoretical chemist at the University of Cambridge in the United Kingdom, who worked on sodium electrides while in the lab of Eva Zurek at the University at Buffalo in New York state. Early theories, he says, had predicted that at high pressure, sodium’s outer electrons would move even more freely between atoms.

The first sign that things were different came from predictions in the late 1990s, when scientists were using computational simulations to model solids, based on the rules of quantum theory. These rules define the energy levels that electrons can have, and hence the probable range of positions in which they are found in atoms (their atomic orbitals).

Simulating solid sodium showed that at high pressures, as the sodium atoms get squeezed closer together, so do the electrons orbiting each atom. That causes them to experience increasing repulsive forces with one another. This changes the relative energies of every electron orbiting the nucleus of each atom, Racioppi explains—leading to a reorganization of electron positions.

This graphic shows alternative models for metal structures. At left is the structure at ambient conditions, with each blue circle representing a single atom in the metallic lattice consisting of a positively charged nucleus surrounded by its electrons. The electrons can move freely throughout the lattice in what is known as a “sea of electrons.” Earlier theories of metals at high pressures assumed a similar structure, with even greater metallic characteristics (top, right), but more recent modeling shows that in some metals like sodium, at high pressure the structure changes (bottom, right) to a system in which the electrons are localized (dark blue boxes) between the ionic cores (small light blue circles)—an electride. This gives the structure very different properties. Credit: Knowable Magazine, adapted from S. Racioppi and E. Zurek/Annual Review of Materials Research 2025

The result? Rather than occupying orbitals that allow them to be delocalized and move between atoms, the orbitals take on a new shape that forces electrons into the non-nuclear attractor sites. Since the electrons are stuck at these sites, the solid loses its metallic properties.

Adding to this theoretical work, Racioppi and Zurek collaborated with researchers at the University of Edinburgh to find experimental evidence for a sodium electride at extreme pressures. Squeezing crystals of sodium between two diamonds, they used X-ray diffraction to map electron density in the metal structure. This, they reported in September 2025, confirmed that electrons really were located in the predicted non-nuclear attractor sites between sodium atoms.

Just the Thing for Catalysts

Electrides are ideal candidates for catalysts—substances that can speed up and lower the energy needed for chemical reactions. That’s because the isolated electrons at the non-nuclear attractor sites can be donated to make and break bonds. But to be useful, they would need to function at ambient conditions.

Several such stable electrides have been discovered over the last 10 years, made from inorganic compounds or organic molecules containing metal atoms. One of the most significant, mayenite, was found by surprise in 2003 when materials scientist Hideo Hosono at the Institute of Science Tokyo was investigating a type of cement.

Mayenite is a calcium aluminate oxide that forms crystals with very small pores—a few nanometers across—called cages that contain oxygen ions. If a metal vapor of calcium or titanium is passed over it at high temperature, it removes the oxygen, leaving behind just electrons trapped at these sites—an electride.

Unlike the high-pressure metal electrides that switch from conductors to insulators, mayenite starts as an insulator. But once converted, its trapped electrons can jump between cage sites (via a process called quantum tunneling)—making it a conductor, albeit 100 to 1,000 times less conductive than a metal like aluminum or silver. It also becomes an excellent catalyst, able to surrender electrons to help make and break bonds in reactions.

By 2011, Hosono had begun to develop mayenite as a greener and more efficient catalyst for synthesizing ammonia. Over 170 million metric tons of ammonia, mostly for fertilizers, is produced annually via the Haber-Bosch process, in which metal oxides facilitate hydrogen and nitrogen gases reacting together at high pressure and temperature. It is an energy-intensive, expensive process—Haber-Bosch plants account for some 2 percent of the world’s energy use.

The company estimates that this will avoid 11,000 tons of CO2 emissions annually—about equal to the annual emissions of 2,400 cars.

In Haber-Bosch, the catalysts bind the two gases to their surfaces and donate electrons to help break the strong triple bond that holds the two nitrogen atoms together in nitrogen gas, as well as the bonds in hydrogen gas. Because mayenite has a strong electron-donating nature, Hosono thought mayenite would be able to do it better.

In Hosono’s reaction, mayenite itself does not bind the gases but acts as a support bed for nanoparticles of a metal called ruthenium. First, the nanoparticles absorb the nitrogen and hydrogen gases. Then the mayenite donates electrons to the ruthenium. These electrons flow into the nitrogen and hydrogen molecules, making it easier to break their bonds. Ammonia thus forms at a lower temperature—300°C to 400°C—and lower pressure—50 to 80 atmospheres—than with Haber-Bosch, which takes place at 400°C to 500°C and 100 to 400 atmospheres.

In 2017, the company Tsubame BHB was formed to commercialize Hosono’s catalyst, with the first pilot plant opening in 2019, producing 20 metric tons of ammonia per year. The company has since opened a larger facility in Japan and is setting up a 20,000-ton-per-year green ammonia plant in Brazil to replace some of the nation’s fossil-fuel-based fertilizer production. The company estimates that this will avoid 11,000 tons of CO2 emissions annually—about equal to the annual emissions of 2,400 cars.

There are other applications for a mayenite catalyst, says Hosono, including a lower-energy conversion of CO2 into useful chemicals like methane, methanol or longer-chain hydrocarbons. Other scientists have suggested that mayenite’s cage structure also makes it suitable for immobilizing radioactive isotope waste in nuclear power stations: The electrons could capture negative ions like iodine and bromide and trap them in the cages.

Mayenite has even been studied as a low-temperature propulsion system for satellites in space. When it is heated to 600°C in a vacuum, its trapped electrons stream out of the cages, producing thrust.

Organic Electrides

The list of materials known to form electrides keeps growing. In 2024, a team led by chemist Fabrizio Ortu at the University of Leicester in the UK accidentally discovered another room-temperature-stable electride made from calcium ions surrounded by large organic molecules, together known as a coordination complex.

“You put something in a milling jar, you shake it really hard, and that provides the energy for the reaction.”

He was using a method known as mechanical chemistry—“You put something in a milling jar, you shake it really hard, and that provides the energy for the reaction,” he says. But to his surprise, electrons from the potassium he had added to his calcium complex were not donated to the calcium ion. Instead, what formed “had these electrons that were floating in the system,” he says, trapped in sites between the two metals.

Unlike mayenite, this electride is not a conductor—its trapped electrons do not jump. But they allow it to facilitate reactions that are otherwise hard to get started, by activating unreactive bonds, doing a job much like a catalyst. These are reactions that currently rely on expensive palladium catalysts.

The scientists successfully used the electride on a reaction that joins two pyridine rings—carbon rings containing a nitrogen atom. They are now examining whether the electride could assist in other common organic reactions, such as substituting a hydrogen atom on a benzene ring. These substitutions are difficult because the bond between a benzene ring carbon and its attached hydrogen is very stable.

There are still problems to sort out: Ortu’s calcium electride is too air- and water-sensitive for use in industry. He is now looking for a more stable alternative, which could prove particularly useful in the pharmaceutical industry to synthesize drug molecules, where the sorts of reactions Ortu has demonstrated are common.

Still Questions at the Core

There remain many unresolved mysteries about electrides, including whether Earth’s inner core definitely contains one. Kim and his collaborators used simulations of the iron lattice to find evidence for non-nuclear attractor sites, but their interpretation of the results remains “a little bit controversial,” Racioppi says.

Sodium and other metals in Group 1 and Group 2 of the periodic table of elements—such as lithium, calcium and magnesium—have loosely bound outer electrons. This helps make it easy for electrons to shift to non-nuclear attractor sites, forming electrides. But iron exerts more pulling power on its outer electrons, which sit in differently shaped orbitals. This makes the increase in electron repulsion under pressure less significant and thus the shift to electride formation difficult, Racioppi says.

Electrides are still little known and little studied, says computational materials scientist Lee Burton of Tel Aviv University. There is still no theory or model to predict when a material will become one. “Because electrides are not typical chemically, you can’t bring your chemical intuition to it,” he says.

“The potential is enormous.”

Burton has been searching for rules that might help with predictions and has had some success finding electrides from a screen of 40,000 known materials. He is now using artificial intelligence to find more. “It’s a complex interplay between different properties that sometimes can all depend on each other,” he says. “This is where machine learning can really help.”

The key is having reliable data to train any model. Burton’s team has actual data from only the handful of electride structures experimentally confirmed so far, but they are also using the kind of modeling based on quantum theory that was carried out by Racioppi to create high-resolution simulations of electron density within materials. They are doing this for as many materials as they can; those that are confirmed by real-world experiments will be used to train an AI model to identify more materials that are likely to be electrides—ones with the discrete pockets of high electron density characteristic of trapped electron sites. “The potential,” says Burton, “is enormous.”

—Rachel Brazil (@rachelbrazil.bsky.social), Knowable Magazine

This article originally appeared in Knowable Magazine, a nonprofit publication dedicated to making scientific knowledge accessible to all. Sign up for Knowable Magazine’s newsletter. Read the original article here.

Snowball Earth’s Liquid Seas Dipped Way Below Freezing

Wed, 02/04/2026 - 13:53

Earth froze over 717 million years ago. Ice crept down from the poles to the equator, and the dark subglacial seas suffocated without sunlight to power photosynthesis. Earth became an unrecognizable, alien world—a “snowball Earth,” where even the water was colder than freezing.

In Nature Communications, researchers reported the first measured sea temperature from a snowball Earth episode: −15°C ± 7°C. If this figure holds up, it will be the coldest measured sea temperature in Earth’s history.

For water to be that cold without freezing, it would have to be very salty. And indeed, the team’s analysis suggests that some pockets of seawater during the Sturtian snowball glaciation, which lasted 57 million years, could have been up to 4 times saltier than modern ocean water.

“We’re dealing with salty brines,” said Ross Mitchell, a geologist at the Institute of Geology and Geophysics of the Chinese Academy of Sciences. “That’s exactly what you see in Antarctica today,” he added, except that snowball Earth’s brines were a bit colder than even the −13°C salty slush of Antarctica’s ice-covered Lake Vida today.
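The link between saltiness and liquid water below 0°C can be illustrated with the ideal freezing-point depression relation, ΔTf = i·Kf·m. The sketch below is a back-of-the-envelope illustration, not a calculation from the study: the NaCl-equivalent molality assumed for modern seawater is a rough round number, and the ideal relation underestimates the depression for concentrated, non-ideal brines.

```python
KF_WATER = 1.86        # cryoscopic constant of water, K*kg/mol
VANT_HOFF_NACL = 2     # NaCl dissociates into two ions

def freezing_point_c(molality):
    """Ideal freezing point (degrees C) of an aqueous NaCl-like solution."""
    return -VANT_HOFF_NACL * KF_WATER * molality

modern_seawater = 0.6                 # mol/kg, rough NaCl-equivalent (assumption)
snowball_brine = 4 * modern_seawater  # the study's ~4x salinity estimate
fp = freezing_point_c(snowball_brine)
# Roughly -9 degrees C in the ideal case; real concentrated brines, with
# non-ideal behavior and other dissolved salts, can stay liquid colder still.
```

Even this simplified estimate lands in the cold end of the reported −15°C ± 7°C range, showing why very salty brines are needed to keep such water liquid.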

Past Iron

The Sturtian snowball was a runaway climate catastrophe that occurred because ice reflects more sunlight than land or water. Ice reflected sunlight, which cooled the planet, which made more ice, which reflected more sunlight and so on, until the whole world ended up buried under glaciers that could have been up to a kilometer thick.
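The runaway described above is the classic ice-albedo feedback, whose two stable states can be shown with a toy zero-dimensional energy balance model. Everything below (the albedo ramp between 260 K and 280 K, the effective emissivity standing in for greenhouse warming) is an illustrative assumption, not part of the study.

```python
SOLAR = 340.0      # W/m^2, globally averaged insolation (modern value)
SIGMA = 5.67e-8    # Stefan-Boltzmann constant, W/(m^2 K^4)
EMISSIVITY = 0.61  # crude stand-in for the greenhouse effect

def albedo(temp_k):
    """Crude ramp: ice-free (0.3) above 280 K, ice-covered (0.6) below 260 K."""
    if temp_k >= 280:
        return 0.3
    if temp_k <= 260:
        return 0.6
    return 0.3 + (280 - temp_k) / 20 * 0.3

def equilibrium(temp_k, steps=100):
    """Iterate absorbed-vs-emitted energy balance until temperature settles."""
    for _ in range(steps):
        temp_k = ((1 - albedo(temp_k)) * SOLAR / (EMISSIVITY * SIGMA)) ** 0.25
    return temp_k

warm = equilibrium(290.0)  # settles in the warm, ice-free state
cold = equilibrium(250.0)  # locks into the frozen "snowball" state
```

Starting warm, the model stays warm; starting cold enough, extra ice raises the albedo and the model locks into a frozen state, the bistability at the heart of the snowball Earth idea.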

This unusual time left behind unusual rocks: rusty red iron formations that accumulated where continental glaciers met the ice-covered seas. To take snowball Earth’s temperature, the team devised a new way to use that iron as a thermometer.

Scientists used information about the iron in formations like this one to estimate the temperature of Earth’s ocean 717 million years ago. Credit: James St. John/Flickr, CC BY 2.0

Iron formations accumulate in water that’s rich in dissolved iron. Oxygen transforms the easily dissolved, greenish “ferrous” form of iron into rusty red “ferric” iron that stays solid. That’s why almost all iron formations are ancient, relics of a time before Earth’s atmosphere started filling with oxygen about 2.4 billion years ago, or from the more recent snowball Earth, when the seas were sealed under ice. Unable to soak up oxygen from the air or from photosynthesis, snowball Earth’s dark, ice-covered seawater drained of oxygen.

Iron-56 is the most common iron isotope, but lighter iron-54 rusts more easily. So when iron rusts in the ocean, the remaining dissolved iron is enriched in the heavier isotope. Over many cycles of limited, partial rusting—like what happened on the anoxic Archean Earth—this enrichment grows, which is why ancient iron formations contain isotopically very heavy iron compared to iron minerals that formed after Earth’s atmosphere and oceans filled with oxygen.
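The cycle-by-cycle enrichment described above is Rayleigh distillation. A minimal sketch, with an assumed illustrative fractionation factor rather than a value from the study, shows how the residual dissolved iron grows isotopically heavier as rusting removes the light isotope:

```python
import math

def residual_delta56fe(delta0, eps_precip_fluid, fraction_remaining):
    """Rayleigh distillation: delta value (per mil) of the dissolved iron
    left behind after partial precipitation.

    eps_precip_fluid < 0 means the precipitate preferentially takes the
    lighter isotope, so the remaining fluid grows heavier as
    fraction_remaining falls.
    """
    return delta0 + eps_precip_fluid * math.log(fraction_remaining)

# Illustrative numbers: fluid starts at 0 per mil, precipitate forms 1 per mil
# lighter than the fluid; after 70% of the dissolved iron has rusted out:
heavy = residual_delta56fe(0.0, -1.0, 0.3)
# The residual dissolved iron is now enriched by about +1.2 per mil.
```

Each successive generation of iron minerals then records the progressively heavier reservoir, which is why many cycles of partial rusting compound the enrichment.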

Snowball Earth’s iron is heavy, too, even more so than iron formations from the distant, preoxygen past. The researchers realized that temperature could be the explanation: Iron minerals that form in cold water end up isotopically heavier. We don’t know exactly how warm it was when the ancient iron formations accumulated, but it was likely warmer than during snowball Earth, when glaciers reached the equator. Using a previous estimate of 25°C for the temperature of Archean seawater, the team calculated that the waters that formed the snowball Earth iron formations would likely have been 40°C colder.

“It’s a very interesting, novel way of getting something different out of iron isotope data,” said geochemist Andy Heard of the Woods Hole Oceanographic Institution, who was not involved in the study. “It’s a funny, backwards situation to be in where you’re using even older rocks as your baseline for understanding something that formed 700 million years ago.”

In part because of that backward situation, Heard thinks the study is best interpreted qualitatively as strong evidence that seawater was really cold, but maybe not that it was exactly −15°C.

The team also analyzed isotopes of strontium and barium to determine that snowball Earth’s seawater was up to 4 times saltier than the modern ocean. Jochen Brocks of the Australian National University, who wasn’t involved in the study, said the researchers’ results align with his own salinity analysis of snowball Earth sediments from Australia based on a different method. Those rocks formed in a brine that Brocks thinks was salty enough to reach −7°C before freezing. Another group reaching a similar conclusion using different methods makes that extreme scenario sound a lot more plausible, he said.
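As a rough plausibility check on how a brine that salty could stay liquid so far below 0°C, here is a back-of-envelope sketch using a linearized freezing point depression of about 0.054°C per gram of salt per kilogram of water. The coefficient is a standard approximation near modern seawater salinity, not a figure from the study, and real brines deviate from linearity at high concentrations:

```python
def freezing_point_c(salinity_g_per_kg):
    """Rough linearized freezing point of saline water (degC).

    Uses ~0.054 degC of depression per g/kg of dissolved salt, a linear
    fit valid near modern seawater salinity (~35 g/kg); concentrated
    brines deviate from this line, so treat the output as indicative.
    """
    return -0.054 * salinity_g_per_kg

modern = freezing_point_c(35.0)        # close to the observed ~-1.9 degC for seawater
snowball = freezing_point_c(4 * 35.0)  # 4x modern salinity, per the Sr/Ba result
print(round(modern, 1), round(snowball, 1))
```

The 4× saltier case lands near −7.6°C, in the same ballpark as the −7°C brine Brocks inferred from the Australian sediments.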

“It was very cool to get the additional confirmation it was actually very, very cold,” he said.

—Elise Cutts (@elisecutts.bsky.social), Science Writer

Citation: Cutts, E. (2026), Snowball Earth’s liquid seas dipped way below freezing, Eos, 107, https://doi.org/10.1029/2026EO260048. Published on 4 February 2026. Text © 2026. The authors. CC BY-NC-ND 3.0
Except where otherwise noted, images are subject to copyright. Any reuse without express permission from the copyright owner is prohibited.

Tsunamis from the Sky

Tue, 02/03/2026 - 14:26
Editors’ Vox is a blog from AGU’s Publications Department.

Meteorological tsunamis, or meteotsunamis, are long ocean waves in the tsunami frequency band that are generated by traveling air pressure and wind disturbances. These underrated phenomena pose serious threats to coastal communities, especially in the era of climate change.

A new article in Reviews of Geophysics explores all aspects of meteotsunamis, from available data and tools used in research to the impacts on coastal communities. Here, we asked the authors to give an overview of these phenomena, how scientists study them, and what questions remain.

In simple terms, what are meteorological tsunamis or “meteotsunamis”?

Meteotsunamis are tsunami-like waves that are not generated by earthquakes or landslides, but by atmospheric processes. Their formation requires a strong air pressure or wind disturbance—typically characterized by a pressure change of 1–3 hectopascals over about five minutes—that propagates at a “perfect” speed, allowing long ocean waves to grow. In addition, coastal bathymetry must be sufficiently complex to amplify the incoming waves.

Meteotsunamis are less well known and, fortunately, are generally less destructive than seismic tsunamis. Nonetheless, they can reach wave heights of up to 10 meters and can be highly destructive. One of the most damaging events occurred on June 21, 1978, in Vela Luka, Croatia, where damages amounted to about 7 million US dollars at the time. Meteotsunamis can also cause injuries and fatalities, as unfortunately occurred on January 13, 2026, during the recent Argentina meteotsunami.

What kinds of hazards do meteotsunamis pose to humans and society?

Meteotsunamis are characterized by multi-meter sea level oscillations and, at times, strong currents. As a result, they can flood waterfront areas and households, while strong currents may break ship moorings and disrupt maritime traffic, as occurred in 2014 in Fremantle, Australia. An even greater danger comes from rip currents, which can sweep swimmers away from shore. A notable example is the July 4, 2003, meteotsunami that occurred under clear skies along the beaches of Lake Michigan and claimed seven lives.

Figure 1. Photos from the 1978 Vela Luka meteotsunami, with labeled eyewitness wave heights and an inventory of household damage. Credit: Vilibić et al. [2025], Figure 12

How do scientists observe, measure, and reproduce meteotsunamis?

Much of the information on meteotsunamis comes from post-event observations. Following exceptionally strong events, scientists often visit affected locations to conduct field surveys, interview eyewitnesses, collect photos and videos, and estimate the extent and height of the meteotsunami along the coast. More precise information comes from coastal tide gauges and ocean buoys, as well as meteorological observations with at least minute-scale resolution.

Unfortunately, standard atmospheric and oceanic observing systems do not commonly operate at such high temporal resolution. For example, one of the oldest national networks—the UK tide gauge network operating for decades—still uses 15-minute sampling intervals. At the same time, most national meteorological services measure atmospheric variables at 10-minute or even hourly resolution, which is insufficient for meteotsunami research. Nevertheless, some oceanic and meteorological networks do provide appropriate sampling intervals, and even data from school-based or amateur networks can be valuable for research.
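Why minute-scale sampling matters can be sketched with the Nyquist criterion: a record can only resolve oscillations with periods of at least twice its sampling interval. The 10-minute wave period below is an illustrative assumption (meteotsunami periods span roughly minutes to a couple of hours), not a value from the article:

```python
def shortest_resolvable_period_min(sampling_interval_min):
    """Nyquist limit: a record resolves oscillation periods >= 2x its sampling interval."""
    return 2 * sampling_interval_min

wave_period_min = 10  # hypothetical meteotsunami wave period (minutes)

# Compare minute-scale, 10-minute, 15-minute (UK tide gauges), and hourly sampling.
for interval in (1, 10, 15, 60):
    resolved = shortest_resolvable_period_min(interval) <= wave_period_min
    print(f"{interval:>2}-minute sampling resolves a {wave_period_min}-minute wave: {resolved}")
```

Only the minute-scale record passes: a 15-minute gauge cannot, even in principle, capture a 10-minute oscillation.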

In addition, numerical modeling of meteotsunamis is now standard practice and includes both atmospheric and oceanic components. However, accurately reproducing meteotsunami-generating atmospheric processes—and thus meteotsunamis themselves—remains challenging. Addressing this issue and developing more accurate, high-resolution models is a key task for the modeling community.

Why has research on meteotsunamis shifted from localized to a global approach?

Figure 2. Map of known meteotsunami occurrences. Star size is proportional to meteotsunami intensity. Credit: Vilibić et al. [2025], Figure 4

The strength of meteotsunamis strongly depends on coastal bathymetry. Within a specific bay, wave heights can reach several meters, while just outside the bay they may be only a few tens of centimeters. For this reason, meteotsunamis were historically observed and studied mainly at individual locations, known as meteotsunami hot spots. Over the past few decades, however, advances in monitoring and modeling capabilities, along with easier global dissemination of scientific results, have revealed that the same phenomenon occurs worldwide. Moreover, the recent availability of hundreds of multi-year, minute-scale sea level records has enabled researchers to conduct global studies and quantify worldwide meteotsunami patterns.

What are the primary ways that meteotsunamis are generated?

The generation of a strong meteotsunami requires (i) an intense, minute-scale air-pressure or wind disturbance that propagates over long distances (tens to hundreds of kilometers), (ii) an ocean region where energy is efficiently transferred from the atmosphere to the ocean, for example through Proudman resonance—a process in which long ocean waves grow strongly when the speed of the atmospheric disturbance matches the speed of tsunami waves, and (iii) coastal bathymetry capable of strongly amplifying long ocean waves. Funnel-shaped bays are particularly prone to meteotsunamis. These events can also be generated by explosive volcanic eruptions, such as the Hunga Tonga–Hunga Haʻapai eruption in January 2022, which produced a planetary-scale meteotsunami.
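The Proudman resonance condition in (ii) can be sketched numerically: long ocean waves travel at c = √(gh) for water depth h, so resonance occurs when the atmospheric disturbance moves at about that speed. The depths, disturbance speed, and 10% tolerance below are illustrative assumptions, not values from the article:

```python
import math

G = 9.81  # gravitational acceleration (m/s^2)

def long_wave_speed(depth_m):
    """Phase speed of shallow-water (long) ocean waves: c = sqrt(g * h)."""
    return math.sqrt(G * depth_m)

def proudman_resonant(disturbance_speed_ms, depth_m, tolerance=0.1):
    """True when the disturbance speed is within a fractional `tolerance`
    of the local long-wave speed -- the Proudman resonance condition."""
    c = long_wave_speed(depth_m)
    return abs(disturbance_speed_ms - c) / c <= tolerance

# A disturbance crossing a 100-m-deep shelf at ~31 m/s (about 112 km/h)
# sits near resonance; the same disturbance over 2,000-m-deep water does not.
print(round(long_wave_speed(100.0), 1))  # ~31.3 m/s
print(proudman_resonant(31.0, 100.0))    # True
print(proudman_resonant(31.0, 2000.0))   # False
```

This is why the same squall line can raise a damaging wave over a shallow shelf yet leave the deep open ocean almost undisturbed.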

How is climate change expected to influence meteotsunamis?

At present, this is not well understood. Only two published studies exist, and both suggest a possible increase in meteotsunami intensity in the future due to an increased frequency of atmospheric conditions favorable for meteotsunami generation. However, no global assessment is currently available, as climate models are still unable to reliably reproduce the kilometer- or sub-kilometer-scale processes required to simulate meteotsunamis.

What are some of the recent advances in forecasting meteotsunamis?

Some progress has been made, but effective forecasting and early-warning systems for meteotsunamis remain far from operational. Improvements in atmospheric numerical models—currently the main source of uncertainty in meteotsunami simulations and forecasts—are expected in the coming decades, particularly through the development of new parameterization schemes that better represent turbulence-scale processes.

How does your review article differ from others that have covered meteotsunamis?

The most recent comprehensive review of meteotsunamis was published nearly 20 years ago, making this review a timely synthesis of the substantial advances made over the past two decades. In addition, our review introduces a new class of meteotsunamis generated by explosive volcanic eruptions, such as the Hunga Tonga–Hunga Haʻapai event in January 2022. Such events were previously only sporadically noted, as the last comparable eruption was that of Krakatoa in 1883. Finally, recent findings show that meteotsunamis—much like seismic tsunamis—can radiate energy into the ionosphere, where it can be detected using ground-based GNSS (Global Navigation Satellite System) stations. This discovery opens a new avenue for future meteotsunami research.

What are some of the remaining questions where additional research efforts are needed?

Many challenges remain in the observation, reproduction, and forecasting of meteotsunamis. Most are closely linked to technological advancements, such as (i) the need for dense, continuous, minute-scale observations of sea level and meteorological variables across the ocean and over climate-relevant time scales, (ii) increased computational power, since sub-kilometer atmosphere–ocean models require enormous resources, potentially addressable through GPU acceleration or future quantum computing, and (iii) the development of improved parameterizations for numerical models at sub-kilometer scales. Ultimately, extending research toward climate-scale assessments of meteotsunamis is essential for accurately evaluating coastal risks associated with sea level rise and future extreme sea levels, which currently do not account for minute-scale oscillations such as meteotsunamis.

—Ivica Vilibić (Ivica.vilibic@irb.hr, 0000-0002-0753-5775), Ruđer Bošković Institute & Institute for Adriatic Crops, Croatia; Petra Zemunik Selak (0000-0003-4291-5244), Institute of Oceanography and Fisheries, Croatia; and Jadranka Šepić (0000-0002-5624-1351), Faculty of Science, University of Split, Croatia

Editor’s Note: It is the policy of AGU Publications to invite the authors of articles published in Reviews of Geophysics to write a summary for Eos Editors’ Vox.

Citation: Vilibić, I., P. Zemunik Selak, and J. Šepić (2026), Tsunamis from the sky, Eos, 107, https://doi.org/10.1029/2026EO265002. Published on 3 February 2026. This article does not represent the opinion of AGU, Eos, or any of its affiliates. It is solely the opinion of the author(s). Text © 2026. The authors. CC BY-NC-ND 3.0
