EOS

Earth & Space Science News

Refining Remote Sensing of Dissolved Organic Carbon in Waterways

Fri, 06/22/2018 - 11:49

A pond full of decaying oak leaves soon turns as brown as tea. Eventually, much of that rotting organic matter is released into the atmosphere as carbon dioxide. Now, a new study could improve scientists’ ability to track such emissions by improving how satellites detect dissolved organic carbon (DOC) in freshwater.

Worldwide, inland waters such as rivers and lakes release about 1 billion tons of carbon as carbon dioxide (CO2) each year. By comparison, humans burning fossil fuels produced 9 billion tons of carbon as CO2 in 2010, about 20 times the weight of the world’s population at the time. Most of the emissions from freshwater bodies are produced by bacteria, which eat dissolved, microscopic specks of organic matter, digest them, and release greenhouse gas as a waste product.

Scientists are keen to track DOC using satellite imaging. Current methods rely on colored dissolved organic matter (CDOM), the light-absorbing fraction of dissolved organic matter, which can serve as a proxy for DOC because it is visible from space. However, it’s not clear how accurate this method is across different types of ecosystems, like pine forests and cornfields. In the new study, Li et al. ran a mesocosm experiment, an outdoor experimental system that studies the natural environment under controlled conditions, in parallel with sampling from 14 river outlets to see how reliable the relationship between DOC and CDOM really is.

The mesocosm experiment was conducted on Beaver Island, Lake Michigan. First, the authors filled six tanks with 500 gallons of clean lake water each and then put different types of leaf litter—corn, pine needles, and red maple—in mesh bags and tossed them into the tanks. They let the bags soak for 11 days, sampling the water for DOC and CDOM levels each day. To get a range of samples from the natural environment, the team also visited 14 river mouths across the Connecticut and Chippewa river watersheds. These were located in agricultural, deciduous, evergreen, and mixed ecosystems, in which different types of leaf litter found their way into the water.

Different types of leaf litter of the same biomass produced varying levels of DOC: Red maple leaves produced twice as much organic matter as the corn leaves did, for example. However, for each type of vegetation litter, DOC and CDOM increased in lockstep, so the relationship between them remained linear and the DOC-to-CDOM ratio stayed the same.

The finding fits with past studies showing that satellite CDOM measurements provide a reliable estimate of DOC, but only when a single type of vegetation dominates the watershed. In the Yukon River, where pine forests dominate, for example, the CDOM/DOC ratio remains steady as DOC increases. The method has not worked as well when used across large watersheds that include many different types of vegetation. The results of the new study indicate that scientists need to include the density and biomass of different types of vegetation in their models, according to the authors. (Journal of Geophysical Research: Biogeosciences, https://doi.org/10.1002/2017JG004179, 2018)
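The practical upshot is a simple linear retrieval: within a watershed dominated by one vegetation type, DOC can be estimated from satellite-derived CDOM with a single calibration line. Here is a minimal sketch; the slope and intercept values are illustrative placeholders, not coefficients from the study:

```python
# Estimate dissolved organic carbon (DOC) from satellite-derived CDOM,
# assuming the linear DOC-CDOM relationship holds for a watershed
# dominated by a single vegetation type.

def estimate_doc(cdom_absorption, slope=25.0, intercept=0.5):
    """Return DOC (mg/L) from CDOM absorption (1/m).

    slope and intercept are hypothetical calibration constants; a real
    application would fit them to in situ samples for the watershed.
    """
    return slope * cdom_absorption + intercept

# A mixed watershed would need separate calibrations per vegetation type,
# weighted by density and biomass, as the study's authors suggest.
mixed = 0.7 * estimate_doc(0.1, slope=25.0) + 0.3 * estimate_doc(0.1, slope=50.0)
```

The weighted sum in the last line is only a schematic of the authors' point that vegetation composition must enter the model; the weights here are invented.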

—Emily Underwood, Freelance Writer

The post Refining Remote Sensing of Dissolved Organic Carbon in Waterways appeared first on Eos.

New Version of Popular Climate Model Released

Fri, 06/22/2018 - 11:48

One year ago this month, climate researchers met at a workshop in Boulder, Colo., to fix a big glitch in the second version of the Community Earth System Model (CESM), a computer program that scientists around the world use to simulate Earth’s complex climate system. Last July, Eos reported on that glitch and the befuddlement it had caused the model’s developers.

Now, a year later, at the same annual CESM workshop, held this week again in Boulder, the team behind the model’s development has released the promised second version: CESM2. The new version offers a slew of new features that will help modelers explore the climate in far greater detail than CESM1 ever could.

The glitch, however, meant that the ride to this new version was not exactly smooth. Jean-François Lamarque, an atmospheric chemist at the National Center for Atmospheric Research (NCAR) who was the chief scientist behind CESM a year ago, likened the glitch to having car trouble: “You’re driving this car, and you know it doesn’t work as well as it could,” he said. Fixing it, he added, would take a great deal of work.

Fixing the Glitch

Lamarque and his team had hoped that CESM2 would debut in August of last year, but their CESM2 car kept sputtering. The issue arose when the program ran climate simulations and returned results that did not match those seen in reality—a problem if the main aim of the model is to mimic Earth’s actual climate.

Specifically, in CESM2 simulations, there was a stretch of about 2 decades in the middle of the 20th century during which global temperatures dipped by 0.3°C to 0.4°C, despite real-world observations pointing toward a steady rise in global temperatures over the same 20-year period. This contrary trend occurred when the model calculated how sulfate aerosols changed the properties of clouds, a phenomenon known as the “aerosol indirect effect.” When sufficiently strong, this effect can cause cooling on a global scale.

To fix the glitch, a team of about 10 climate experts assembled soon after last year’s workshop to reexamine emissions data sets and to tinker with the model. “We spent 4–5 months really digging into the model,” Lamarque said.

A screenshot from a CESM2 simulation of the Arctic climate system. Warmer colors on the Greenland ice sheet indicate regions of faster ice flow. This simulation, which covered the end of the 20th century and the beginning of the 21st, shows that the model’s output matches observational data from satellites; that is, both show Arctic sea ice cover steadily decreasing over time. Credit: Alice DuVivier, Gunter Leguy, and Ryan Johnson/NCAR, ©UCAR

The researchers thoroughly reviewed how the model captured cloud-aerosol interactions and compared their parameterizations against current knowledge from observations and high-resolution simulations. Through that scrutiny, they identified several problems with their real-world emissions data. They reported these problems to the data suppliers, who then gave them a new, corrected version of the data. This work revealed that “our initial choice of parameters could, and should, be modified to reduce the strength of the aerosol indirect effect,” Lamarque explained.

Despite their efforts, the contrary trend still crops up in CESM2. “But it’s much, much reduced from last year,” Lamarque said, adding that it will take many more years of work “by very smart people” to untangle what is really going on under the model’s hood. The cloud-aerosol mechanism now produces a temperature drop of only about 0.1°C, curtailing the glitch by more than half.

New Ride

Despite that lingering glitch, CESM2 boasts several never-before-seen features. “We went from a standard car to a car with more features,” Lamarque said. These features “include quite substantial improvements in the representation of the physics that they are using,” added Gokhan Danabasoglu, an ocean and climate modeler at NCAR who is the current chief scientist behind CESM.

One of those new features is a capability that will allow users to model the behavior of Greenland’s ice sheet in greater detail. “You can have prognostic evolution of the Greenland ice sheet,” Danabasoglu said. This means that when the model runs, the parts of the ice sheet abutting the ocean melt at a relatively faster clip than ice farther inland, a process that more closely matches reality. This mechanism, Danabasoglu explained, is rather new among today’s climate models.

This week at the workshop in Boulder, researchers from around the world discussed the new features. One attendee, Gretchen Keppel-Aleks, an atmospheric scientist at the University of Michigan, described some of the features that she thinks will help advance her own research into the ways elements like carbon and nitrogen cycle through the environment.

“The new representation of carbon–nitrogen cycling in CESM2 will likely yield more robust projections for how terrestrial carbon cycling will change in the future,” she said. Such projections should help reduce one of the largest uncertainties for our future climate: how much anthropogenic carbon dioxide will remain in the atmosphere over time. This, she said, means that CESM2 offers a “much more sophisticated framework compared to CESM1.”

Climate researchers, it seems, are liking their new wheels.

A full list of features new to CESM2 can be found on NCAR’s website.

—Lucas Joel (email: lucasvjoel@gmail.com), Freelance Journalist


Exploring a More Dynamic Arctic Icescape

Fri, 06/22/2018 - 11:47

One of the most tell-tale signs of climate change is the retreat of Arctic sea ice. The decline has been especially rapid in the most recent couple of decades, and long gone are the times when thick sea ice covered most of the Arctic Ocean even in summer. This thick old sea ice has now been largely replaced with thinner and younger sea ice.

The recent rapid decline has not been well reproduced in climate models, in part because most of our fundamental understanding of Arctic sea ice stems from observations made in an era of thick old ice. We believe that the dynamics of the younger and thinner sea ice now covering the Arctic Ocean are different, and that key processes driving sea ice change must be understood anew before climate models can reproduce the decline.

The well-known Norwegian polar explorer and scientist Fridtjof Nansen drifted across the Arctic Ocean in his custom-made ship, the Fram, in 1893–1896 and revolutionized our knowledge of the north polar region. Tellingly, a similar drift conducted during the International Polar Year in 2006–2008 took roughly half as long, as ice drift speed has increased.

In the spirit of Nansen, our research group designed a scientific campaign, the Norwegian young sea ICE cruise (N-ICE2015), on drifting sea ice in the Arctic Ocean between Svalbard and the North Pole, in order to observe the functioning of a thinner sea ice pack.

Ice floes breaking up in response to storms sometimes made it challenging to work on the sea ice. Thick snow, deposited during several storms before this photo was taken, has also accumulated on the ice floes. Credit: Tor Ivan Karlsen / Norwegian Polar Institute

For this we used the ice-strengthened research vessel “Lance” as our base, conducting scientific observations from the nearby ice floes while drifting with the ice.

N-ICE2015 took place in the winter and spring of 2015, and we battled fierce winter storms, rapid ice drift, break-up of ice floes and the occasional curious polar bear that wanted to sniff our equipment.

All this while working on sea ice only three to five feet thick, which has become the norm in this region.

Many of the results of our research campaign are now published in a joint special issue of JGR: Oceans, JGR: Atmospheres and JGR: Biogeosciences. Together, this collection provides a comprehensive examination of how the now thinner sea ice responds to forcing from the atmosphere (winds, precipitation and air temperatures), and how this in turn affects the ice pack (growth, drift and deformation), the mixing in the ocean below and, eventually, the marine ecosystem. In a coupled system all these processes are interlinked and affect each other, creating complex feedbacks.

Multiple papers in the special issue show significant effects from short-lived but fierce storms, which frequently pass through this region. Storms bring high wind speeds, warm air and moisture to a place that is otherwise cold, dry and stable. Although each storm lasts only a couple of days, storms create such intense dynamics that, for example, the seasonal air–sea exchange of carbon dioxide is governed by a handful of these brief events. Storms also build a deep snowpack that insulates the ice from the cold atmosphere, so the ice grows thinner. The thinner ice pack is weaker and responds more readily to wind forcing, which affects ice drift; this in turn can transfer more energy to the ocean below and mix heat upward to melt the underside of the sea ice, even in the midst of winter. These processes are most intense during or directly after storms.

These fundamental processes typically occur on very short time scales but, more importantly, also act on very small spatial scales (on the order of meters rather than kilometers), much smaller than typical climate models can resolve. They need to be carefully represented in models to make realistic predictions of Arctic sea ice in the future. Results from this experiment can serve as a guide for what to focus on in the next generation of models.

—Mats A. Granskog, Norwegian Polar Institute; email: mats.granskog@npolar.no


New Strategies to Protect People from Smoke During Wildfires

Thu, 06/21/2018 - 12:05

The current strategy to protect human lives and property during wildfires focuses on the fire itself, but the fire is only part of the impact on human health. Inhaling fine particulates in the smoke generated by wildfires causes significant and sometimes severe health effects, especially for vulnerable populations such as children, the elderly, and those with preexisting conditions. We are currently hampered in our ability to monitor this public health threat because we have so few air quality monitors, and satellite sensors typically lack the spatial resolution to provide community-level protection.

Gupta et al. [2018] capitalize on the revolution in low-cost particulate monitors, validating on-the-ground measurements against satellite proxies for fine particulates during the 2017 wine country fires in Northern California. They reveal the ability of the ground-level sensor array to identify particulate matter hotspots and to track the movement of these hotspots as the fires evolved. They also reveal some of the limitations of both ground-level monitors (namely, relatively low data quality and instrumental variations) and satellite proxy measurements (namely, calibrating the proxy to actual particulate matter concentrations at ground level). Even with these limitations, the sheer density of individual measurements can balance out instrumental bias and provide a critical new tool to protect human health during wildfires or other smoke events in areas without current monitoring capabilities.

Gupta, P., Doraiswamy, P., Levy, R., Pikelnaya, O., Maibach, J., Feenstra, B., et al. [2018]. Impact of California fires on local and regional air quality: The role of a low‐cost sensor network and satellite observations. GeoHealth, 2. https://doi.org/10.1029/2018GH000136

—Gabriel Filippelli, Editor-in-Chief, GeoHealth


Climate Research Funding Still Under Threat, Report Warns

Thu, 06/21/2018 - 12:04

Despite recent congressional appropriations that reversed many of the Trump administration’s efforts to reduce science funding for current fiscal year 2018, a new report raises an alarm about what it says are the administration’s attacks on climate research and funding for it.

Ernest Moniz (right), who served as secretary of energy in the Obama administration, speaks about climate science research and threats to funding with John Podesta, founder and director of the Center for American Progress. Credit: Constance Torian/Center for American Progress

“The Trump administration’s budget proposals and explicit attacks on science, scientists, and scientific norms indicate their intent is to undermine not just individual programs, but the entire scientific process, and in so doing to cast doubt upon the severity of the climate challenge facing the United States and the world,” according to the report, titled “Burning the Data: Attacks on Climate and Energy Data and Research.” The Center for American Progress (CAP), a left-leaning think tank based in Washington, D. C., issued the report on 14 June.

The report cautions that even though Congress passed legislation in March to maintain or increase science funding for a number of federal agencies, political appointees have broad discretion to reprogram funding away from climate change–related activities, leave funds unspent, and make policy changes to alter how science is used in federal decision-making, among other measures.

Funding cuts or shifts in spending could create gaps in data for U.S. and international climate studies, according to the report. It notes “the critical importance of the federal budget process to building and maintaining the foundation of domestic and international climate and energy research.”

Appropriating the Dollars “Isn’t Enough”

“Simply appropriating the dollars just isn’t enough,” said Christy Goldfuss, CAP’s senior vice president for energy and environmental policy, at a 14 June briefing to discuss the report.

“There is a lack of transparency in the budgeting process that will make this an extraordinary challenge for Congress and those compelled to protect the data necessary to protect the planet,” she said.

A Call for Vigilance Beyond the Appropriations Process

Ernest Moniz, who served as secretary of energy during the Obama administration, said at the event that “a state of vigilance is required beyond the appropriations process” and that “international concerns already have been expressed about what is going to happen if the United States creates data gaps” in climate studies.

“The things that should be completely noncontroversial are the underlying data to understanding what’s happening to the Earth system,” said Moniz, now a principal with the Washington, D. C.–based Energy Futures Initiative. And yet, he explained, it is concerning that those underlying data could be in jeopardy.

“It doesn’t matter if you choose the frankly completely unsupportable decision about questioning the need to respond to global warming in a policy sense,” Moniz said. “No matter where you stand on that, it is completely illogical to not want to see those data continue, unless, frankly, you don’t have a pursuit for the truth and for the necessary responses at the heart of what you are doing.”

—Randy Showstack (@RandyShowstack), Staff Writer


Can We Crack the Climate Code of the Southern Polar Region?

Wed, 06/20/2018 - 12:20

Comprehensive Earth system models (ESMs) and climate models are the main tools available for quantitative projections of future climate change and likely physical outcomes. However, diagnosing Southern Hemisphere model performance is difficult because of the spatial sparseness of field data and remaining uncertainties in reconstructions of recent real-world climate conditions. These factors limit the evaluation of ESMs and thus the reliability of their projections, especially at high spatial resolution.

To address this need, scientists from more than 17 countries, including 29 early-career scientists, gathered last October at the Scripps Institution of Oceanography for the #GreatAntarcticClimateHack, a workshop funded by the Scientific Committee on Antarctic Research’s Antarctic Climate Change in the 21st Century (AntClim21) initiative. Attendees’ intent was to decide on metrics to evaluate ESMs to improve the next generation of Intergovernmental Panel on Climate Change projections for Antarctica and the Southern Ocean. Participants included leading experts in oceanography, glaciology, atmospheric research, aquatic biogeochemistry, and biology working on past reconstructions, modern observations, and future projections.

The principal outcome of the workshop is a community agreement on an ensemble of metrics. These metrics were produced and prioritized using a bottom-up approach that allowed contributors from different disciplines to identify key aspects of model evaluation that are most important for their area of science. At the workshop, in-depth sessions were conducted on the atmosphere, ocean, sea ice, ice sheets, paleoreconstructions, ecosystems, and biogeochemistry. Discussions finalizing diagnostic tools for implementing the range of metrics are ongoing.

Key multidisciplinary insights that emerged from the workshop include the following:

- Ocean subpolar gyres around Antarctica influence key aspects of coupled systems. For example, these gyres transport water masses to the Antarctic coastline, where they can interact with the ice sheets. The gyres are also critical for the dispersal of nutrients, the transport of sea ice, and the vertical mixing of water masses. Penguins that rely on these gyres for their seasonal migration will help researchers evaluate the representation of such gyres because it is now possible to outfit the penguins with global location–sensing (GLS) biologgers, which use solar cues to determine location.

- Ice-ocean interactions at the grounding line (where a glacier on land extends into the water, becoming a floating ice shelf) are a principal challenge for determining ice mass loss from marine-based ice sheets. Emerging integration of grounding line behavior in coupled ice sheet models and the recent successes in drilling projects to access water masses near the grounding line provide unprecedented opportunities to assess model performance and refine Antarctic contributions to sea level rise.

- Sea ice connects many disciplines through its interaction with the atmosphere (winds), ocean (temperature and currents), and ecosystems (nutrients and light). Including new data metrics, such as those from GLS tags attached to penguins and the deployment of under-ice Argo floats, provides exciting new constraints to improve model performance.

Penguins, like these near the Collins Glacier on King George Island, Antarctica, provide one of several sources of metrics to help improve Earth system models and climate models for the Southern Hemisphere. Credit: Alia Lauren Khan

In the upcoming World Climate Research Programme’s Climate Model Intercomparison Project Phase 6 (CMIP6), routine benchmarking and evaluation will be a key advance on previous CMIP exercises. Meeting participants agreed that the Earth System Model Evaluation Tool (ESMValTool) will play a valuable contributing role in CMIP. ESMValTool aims to facilitate the evaluation of comprehensive ESMs, raise the standard for model evaluation, and facilitate participation in, and analysis of, CMIP6 and related initiatives. Subsets of the metrics discussed at the workshop are being developed for implementation as part of an Antarctic and Southern Ocean contribution to ESMValTool.

More details can be found at the workshop’s website.

—Alia L. Khan (email: alia.khan@colorado.edu; @AliaLaurenKhan), National Snow and Ice Data Center, Cooperative Institute for Research in Environmental Sciences, University of Colorado Boulder; Thomas J. Bracegirdle, British Antarctic Survey, Cambridge, U.K.; and Joellen L. Russell, University of Arizona, Tucson


The Oxygen Neutral Cloud Surrounding Jupiter’s Volcanic Moon

Wed, 06/20/2018 - 12:19

Lakes of lava and hundreds of volcanoes dot the surface of Jupiter’s moon Io, some spewing lava dozens of kilometers into the air. Only slightly larger than our own planet’s moon, Io is the most volcanically active place in the solar system. Its thin atmosphere is made up largely of sulfur oxides. As Io orbits, neutral gas particles escape its atmosphere and collide with electrons, giving rise to a donut-shaped cloud of ionized particles around Jupiter, known as the Io plasma torus.

Exactly how those neutral gases escape Io’s atmosphere is not well understood, however. Previous studies have shown that most atomic oxygen and sulfur escape Io’s atmosphere by colliding with energetic particles, such as torus ions, which bump the particles out of the atmosphere in a process known as atomic sputtering. Some of the particles escape from Io’s gravity and form clouds of neutral sulfur and oxygen. Here Koga et al. provide new insights into the role of the neutral cloud in the Io plasma torus.

The team took advantage of data collected by Japan’s Hisaki satellite, which launched in 2013 and became the first space telescope dedicated to observing planets like Mars and Jupiter from Earth’s orbit. The researchers used spectrographic data from the Extreme Ultraviolet Spectroscope for Exospheric Dynamics (EXCEED) instrument aboard the satellite to measure atomic oxygen emissions at 130.4 nanometers around Io’s orbit. The measurements were collected over 35 days between November and December 2014, a relatively calm volcanic period for the moon.

The authors found that Io’s oxygen cloud has two distinct regions: a dense area that spreads inside Io’s orbit, called the “banana cloud,” and a more diffuse region, which spreads all the way out to 7.6 Jovian radii (RJ). The team plugged the satellite observations into an emissions model to estimate the atomic oxygen number density. They found more oxygen inside Io’s orbit than previously thought, with a peak density of 80 atoms per cubic centimeter at a distance of 5.7 RJ. The team also calculated a source rate of 410 kilograms per second, which is consistent with previous estimates.

This study provides the first good look at Io’s neutral cloud, which has historically been too dim to measure. Neutral particles from Io’s atmosphere are one of the primary sources for charged particles in Jupiter’s massive magnetosphere. Ultimately, the authors note, a better understanding of the neutral cloud will provide important insights into the gas giant’s magnetosphere. (Journal of Geophysical Research: Space Physics, https://doi.org/10.1029/2018JA025328, 2018)

—Kate Wheeling, Freelance Writer


Honoring Earth and Space Scientists

Wed, 06/20/2018 - 12:18

On 1 May, the National Academy of Sciences elected 84 new members and 21 foreign associates, several of whom are members of the Earth and space science community. Newly elected members and their affiliations at the time of election are as follows: David Bercovici, Frederick W. Beinecke Professor of Geology and Geophysics at Yale University in New Haven, Conn.; Kristie A. Boering, professor of chemistry and of Earth and planetary science at University of California, Berkeley; James F. Kasting, Evan Pugh Professor in the Department of Geosciences at Pennsylvania State University, University Park; Michael Manga, professor of Earth and planetary sciences at University of California, Berkeley; Eric J. Rignot, Donald Bren Professor of Earth System Science at University of California, Irvine; Diana Harrison Wall, senior research scientist in the Natural Resource Ecology Laboratory and professor of biology and director of the School of Global Environmental Sustainability, Colorado State University, Fort Collins; and Cathy L. Whitlock, professor of Earth sciences at Montana State University in Bozeman and fellow of the Montana Institute on Ecosystems.


Explosive Volcanoes Spawned Mysterious Martian Rock Formation

Tue, 06/19/2018 - 12:31

Explosive volcanic eruptions that shot jets of hot ash, rock, and gas skyward are the likely source of a mysterious Martian rock formation, a new study finds. The finding could add to scientists’ understanding of Mars’s interior and its past potential for habitability, according to the study’s authors.

The Medusae Fossae Formation is a massive, unusual deposit of soft rock near Mars’s equator, with undulating hills and abrupt mesas. Scientists first observed the Medusae Fossae with NASA’s Mariner spacecraft in the 1960s but were perplexed as to how it formed.

Now, new research suggests the formation was deposited during explosive volcanic eruptions on the Red Planet more than 3 billion years ago. The formation is about one-fifth as large as the continental United States and 100 times more massive than the largest explosive volcanic deposit on Earth, making it the largest known explosive volcanic deposit in the solar system, according to the study’s authors.

“This is a massive deposit, not only on a Martian scale, but also in terms of the solar system, because we do not know of any other deposit that is like this,” said Lujendra Ojha, a planetary scientist at Johns Hopkins University in Baltimore and lead author of the new study published in the Journal of Geophysical Research: Planets, a journal of the American Geophysical Union.

This graphic shows the relative size of the Medusae Fossae Formation compared to Fish Canyon Tuff, the largest explosive volcanic deposit on Earth. The Medusae Fossae has an area of about 2 million square kilometers, which is roughly one-fifth the size of the continental United States. Fish Canyon Tuff, when it was deposited, covered an area of about 30,000 square kilometers, roughly the size of the state of Maryland. Credit: AGU.

Formation of the Medusae Fossae would have marked a pivotal point in Mars’s history, according to the study’s authors. The eruptions that created the deposit could have spewed massive amounts of climate-altering gases into Mars’s atmosphere and ejected enough water to cover Mars in a global ocean more than 9 centimeters (4 inches) thick, Ojha said.

Greenhouse gases exhaled during the eruptions that spawned the Medusae Fossae could have warmed Mars’s surface enough for water to remain liquid at its surface, but toxic volcanic gases like hydrogen sulfide and sulfur dioxide would have altered the chemistry of Mars’s surface and atmosphere. Both processes would have affected Mars’s potential for habitability, Ojha said.

Determining the Source of the Rock

The Medusae Fossae Formation consists of hills and mounds of sedimentary rock straddling Mars’s equator. Sedimentary rock forms when rock dust and debris accumulate on a planet’s surface and cement over time. Scientists have known about the Medusae Fossae for decades, but were unsure whether wind, water, ice or volcanic eruptions deposited rock debris in that location.

A global geographic map of Mars, with the location of the Medusae Fossae Formation circled in red. Click image for larger version. Credit: MazzyBor, CC BY-SA 4.0 via Wikimedia Commons

Previous radar measurements of Mars’s surface suggested the Medusae Fossae had an unusual composition, but scientists were unable to determine whether it was made of highly porous rock or a mixture of rock and ice. In the new study, Ojha and a colleague used gravity data from various Mars orbiter spacecraft to measure the Medusae Fossae’s density for the first time. They found the rock is unusually porous: it’s about two-thirds as dense as the rest of the Martian crust. They also used radar and gravity data in combination to show the Medusae Fossae’s density cannot be explained by the presence of ice, which is much less dense than rock.
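The density comparison maps directly onto porosity: for a dry deposit, the pore fraction is one minus the ratio of bulk density to the density of the solid rock. A back-of-the-envelope sketch; the crustal density used here is an assumed illustrative value, not a figure from the study:

```python
# Infer porosity from bulk density, assuming dry (gas-filled) pore space:
#   porosity = 1 - (bulk density / solid rock density)

def porosity(bulk_density, solid_density):
    """Pore fraction of a deposit whose pores contain negligible mass."""
    return 1.0 - bulk_density / solid_density

# Illustrative numbers only: assume a nominal basaltic crustal density and
# a deposit two-thirds as dense, as reported for the Medusae Fossae.
crust_density = 2900.0                     # kg/m^3, assumed value
medusae_density = (2.0 / 3.0) * crust_density
pore_fraction = porosity(medusae_density, crust_density)  # roughly one-third
```

A pore fraction of roughly one-third is hard to achieve with dense lava flows, which is the core of the argument that the deposit is volcanic ash rather than solid rock or an ice-rock mix.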

Because the rock is so porous, it had to have been deposited by explosive volcanic eruptions, according to the researchers. Volcanoes erupt in part because gases like carbon dioxide and water vapor dissolved in magma force the molten rock to rise to the surface. Magma containing lots of gas explodes skyward, shooting jets of ash and rock into the atmosphere.

A 13-kilometer (8-mile) diameter crater being infilled by the Medusae Fossae Formation. Credit: High Resolution Stereo Camera/European Space Agency.

Ash from these explosions plummets to the ground and streams downhill. After enough time has passed, the ash cements into rock, and Ojha suspects this is what formed the Medusae Fossae. As much as half of the soft rock originally deposited during the eruptions has eroded away, leaving behind the hills and valleys seen in the Medusae Fossae today.

Understanding Mars’s Interior

The new findings suggest the Martian interior is more complex than scientists originally thought, according to Ojha. Scientists know Mars has some water and carbon dioxide in its crust that allow explosive volcanic eruptions to happen on its surface, but the planet’s interior would have needed massive amounts of volatile gases—substances that become gas at low temperatures—to create a deposit of this size, he said.

“If you were to distribute the Medusae Fossae globally, it would make a 9.7-meter (32-foot) thick layer,” Ojha said. “Given the sheer magnitude of this deposit, it really is incredible because it implies that the magma was not only rich in volatiles but also that it had to be volatile-rich for long periods of time.”
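The global-layer figure can be sanity-checked with quick back-of-envelope arithmetic. The sketch below assumes a Mars mean radius of 3,389.5 kilometers (a standard value, not given in the article) and uses the roughly 2-million-square-kilometer footprint quoted earlier:

```python
from math import pi

# Back-of-envelope check of the "9.7-meter global layer" figure.
# Assumed value (not from the article): Mars mean radius of 3,389.5 km.
R_MARS = 3389.5e3                        # m
mars_area = 4 * pi * R_MARS**2           # ~1.44e14 m^2

layer_thickness = 9.7                    # m, global-equivalent layer (from the article)
deposit_volume = layer_thickness * mars_area   # implied deposit volume, m^3

deposit_area = 2.0e12                    # m^2 (~2 million km^2, from the article)
mean_thickness = deposit_volume / deposit_area # implied mean formation thickness

print(f"implied volume: {deposit_volume / 1e9:.2e} km^3")  # ~1.4e6 km^3
print(f"implied mean thickness: {mean_thickness:.0f} m")   # several hundred meters
```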

The new study shows the promise of gravity surveys in interpreting Mars’s rock record, according to Kevin Lewis, a planetary scientist at Johns Hopkins University and co-author of the new study. “Future gravity surveys could help distinguish between ice, sediments and igneous rocks in the upper crust of the planet,” Lewis said.

The post Explosive Volcanoes Spawned Mysterious Martian Rock Formation appeared first on Eos.

Bulging, Shrinking, and Deformation of Land by Hydrologic Loading

Tue, 06/19/2018 - 12:30

The Gravity Recovery and Climate Experiment (GRACE) satellite mission has proved very useful for tracking fluctuations in terrestrial water storage, but it suffers from very low spatial resolution, meaning only broad fluctuations across hundreds of kilometers can be detected. Karegar et al. [2018] present a hybrid inverse modeling approach that combines GRACE data with local GPS measurements of vertical ground surface displacements and a high-resolution hydrologic model. Combining these three independent data sources within two mathematical approaches, each based on different assumptions and principles, the authors bridge scales and produce more precise estimates of terrestrial water mass variations. This multifaceted inversion approach can provide better constraints on water cycle processes, inform improved hydrologic modeling, and contribute to climate monitoring efforts.
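As a toy illustration of the general principle behind merging independent data sources (not the actual Karegar et al. inversion), the sketch below fuses two hypothetical estimates of the same water-storage anomaly by inverse-variance weighting, the textbook step underlying such hybrid schemes; all numbers are invented:

```python
from math import sqrt

def fuse(estimates, sigmas):
    """Inverse-variance weighted mean of independent estimates and its
    standard error: more certain inputs get more weight, and the fused
    uncertainty is smaller than any single input's."""
    weights = [1.0 / s**2 for s in sigmas]
    mean = sum(w * x for w, x in zip(weights, estimates)) / sum(weights)
    sigma = sqrt(1.0 / sum(weights))
    return mean, sigma

# Hypothetical anomaly estimates in cm of equivalent water thickness:
# a coarse satellite-derived value and a finer ground-based value.
mean, sigma = fuse([12.0, 9.0], [4.0, 2.0])
print(f"fused estimate: {mean:.1f} +/- {sigma:.1f} cm")
```

Note that the fused uncertainty (about 1.8 centimeters here) is smaller than either input's, which is the payoff of combining independent observations.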

Citation: Karegar, M. A., Dixon, T. H., Kusche, J., & Chambers, D. P. [2018]. A new hybrid method for estimating hydrologically induced vertical deformation from GRACE and a hydrological model: An example from Central North America. Journal of Advances in Modeling Earth Systems, 10. https://doi.org/10.1029/2017MS001181

—Paul A. Dirmeyer, Editor, JAMES

The post Bulging, Shrinking, and Deformation of Land by Hydrologic Loading appeared first on Eos.

Constraining Central Washington’s Potential Seismic Hazard

Tue, 06/19/2018 - 12:29

Off the United States’ northwest coast, the Juan de Fuca Plate is diving beneath North America along the Cascadia subduction zone. Ensuing crustal shortening has created the Yakima Fold Province, a seismically active region in central Washington where deformation is focused along a series of arch-shaped anticlinal folds. The relative timing and rate of deformation along individual structures in this region are still poorly constrained, however, despite the region’s potential to unleash earthquakes at least as powerful as the M6.8 Entiat event that occurred there in 1872.

To better understand the province’s tectonic history, Staisch et al. used several independent but complementary methods to analyze the signatures of deformation focused along three anticlines in the Yakima Canyon region. By employing stream profile inversions, balanced cross sections, and geophysical mapping techniques, the team was able to constrain local slip rates and fault geometries and use the results to calculate the amount of time required for each fault to accumulate enough strain to rupture.

The results indicate that stream incision rates accelerated during the Pleistocene, a change the team attributes to tectonically driven uplift rather than climate. The researchers also estimate that modern slip rates range between 0.4 and 0.5 millimeter per year, with motion accommodated along reverse faults coring each anticline, and that the region has been compressed by a total of 3.5 kilometers (11.5%) since the mid-Miocene.

The team’s calculations indicate that large (M ≥ 7) earthquakes could recur as often as every 200 to 6,000 years on faults within the fold province, a region that was previously considered aseismic. This study demonstrates how independent analyses of diverse data sets can complement one another and collectively improve our understanding of deformation history, as well as help estimate potential hazards, in seismically active regions. (Tectonics, https://doi.org/10.1029/2017TC004916, 2018)
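The logic connecting slip rates to recurrence intervals is a simple strain budget: the time a fault needs to accumulate one event's worth of slip. The sketch below uses the study's 0.4–0.5 millimeter per year slip rates but assumes 1–2 meters of slip per M ≥ 7 event, a typical value that is not taken from the article:

```python
# Strain-budget estimate of earthquake recurrence: the time required for a
# fault slipping at a steady rate to accumulate one event's worth of slip.
# The slip rates come from the study; the per-event slip is an assumed
# typical value for an M >= 7 rupture, not a number from the article.

def recurrence_years(slip_per_event_m: float, slip_rate_mm_per_yr: float) -> float:
    """Years needed to accumulate `slip_per_event_m` at `slip_rate_mm_per_yr`."""
    return slip_per_event_m / (slip_rate_mm_per_yr / 1000.0)

shortest = recurrence_years(1.0, 0.5)   # ~2,000 years
longest = recurrence_years(2.0, 0.4)    # ~5,000 years
print(f"recurrence: ~{shortest:,.0f} to ~{longest:,.0f} years")
```

These rough numbers fall within the 200- to 6,000-year range reported by the team, whose actual estimates also account for fault geometry and rupture size.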

—Terri Cook, Freelance Writer

The post Constraining Central Washington’s Potential Seismic Hazard appeared first on Eos.

Rethinking the River

Tue, 06/19/2018 - 12:28

A complex set of pressing management concerns is driving a shift in the ways that science and management are coupled in the Mississippi River Delta region and how they provide feedback to inform each other. This shift has its origins in a decades-long effort to understand and restore the Mississippi River Delta, but the management concerns that are driving the design of model-intensive scientific research campaigns have brought the issue to the fore.

New and evolving concerns include maintaining shipping and commerce in the Mississippi River, restoring habitats in the river’s delta, providing protection from river floods and storm surges, reducing the effects of hypoxia (oxygen depletion) in the waters along the continental shelf, and recovering from the BP Deepwater Horizon oil spill.

Holistically addressing these concerns requires the best science available and, as such, has created new opportunities for research. These opportunities include the coupled development of multiple science programs focused on the Mississippi River and the Gulf of Mexico, research that has led to shifts in our conceptualization of how North America’s largest river functions and how it interacts with its delta and the ocean.

This empirically based, multidisciplinary approach also has applications beyond the banks of one river. Globally, deltas and coastal systems are home to billions of people, major centers of biodiversity, and appealing locations for commerce. These systems face a range of environmental threats, and they are naturally dynamic landscapes [e.g., Giosan et al., 2014]. Thus, science and management partnerships on the Mississippi provide a model for research and management that can be applied to deltas and coasts worldwide.

The Mississippi River Delta Region

The Mississippi River has the world’s third-largest watershed, sixth-largest freshwater discharge, and fifth-largest delta [Kolker et al., 2013, and references therein]. Here we define the Mississippi River Delta region as the area extending from the Mississippi River’s distributary avulsion point to the continental slope (Figure 1).

Fig. 1. Map of the lower Mississippi River and its delta, showing major distributaries and their annual discharge volume (cubic kilometers). The image on the right shows a detailed view of the bird’s-foot delta region at the river’s mouth. Credit: NASA Landsat and Google Earth

The regions around the lower Mississippi River and its delta are home to nearly 2 million people with a unique cultural heritage. This area also hosts one of Earth’s largest port complexes, a massive energy industry, and nearly a third of the United States’ seafood production [Louisiana Coastal Protection and Restoration Authority (LACPRA), 2017].

However, these enterprises are at risk because the Mississippi River Delta and associated environments have lost nearly 20% of their coastal wetland area over the past century owing to a range of factors that include subsidence, global sea level rise, the construction of canals, reduced freshwater input and sediment deposition, hurricane strikes, and oil spill impacts [LACPRA, 2017]. The area stands to lose an equivalent amount in the next 50 years [LACPRA, 2017].

How Leaky Is the Pipe?

Our understanding of water transport in the lower Mississippi River (the region downstream of the Atchafalaya distributary) is changing rapidly. Figuring out how this transport is changing is a critical research need, given the importance of maintaining water flows to support navigation in the Mississippi River channel, the need to protect against river floods, the relevance of this information for basic deltaic hydrogeology and restoration, and the goals of large-scale river management worldwide.

Until recently, the prevailing view was that extensive flood control levees caused the lower Mississippi River to function like a pipe: It was assumed that water flowed through it efficiently and there was little exchange of water, sediment, and dissolved constituents between the river and the delta plain. Although levees do prevent freshwater and sediment from reaching large areas of the delta plain, recent studies indicate that almost half of the river’s water leaves the channel north of the lower Mississippi River’s mouth at Head of Passes, where the main stem of the river branches off into three distinct directions and creates a bird’s-foot delta at the river’s mouth (Figure 1) [Allison et al., 2012]. Most of this outflow occurs in the lowermost 75 kilometers of the river through natural and man-made exits having capacities that range from about 100 to 4,000 cubic meters per second (Figure 1) [Allison et al., 2012]. In some cases, the flow magnitude is increasing at individual exits upstream of the bird’s-foot delta [Suir et al., 2014].

The Mississippi River is flanked by levees that severely restrict the exchange of freshwater, sediments, and nutrients between the river and its delta. Seen here is the river, surrounded by wetlands that are restricted from interacting with it. Credit: Alexander S. Kolker and Southwings

Some scientists see this as indicative of an early phase of channel realignment in which some of the major river distributaries shift northward [Kemp et al., 2014]. The abovementioned studies, coupled with recent work indicating groundwater discharge from the Mississippi River to the coastal zone [Kolker et al., 2013], suggest that the system functions more like a leaky pipe than an unbroken conduit connecting the land and the sea.

Sediment Transport, Then and Now

Research has also transformed how scientists and stakeholders understand sediment dynamics in the Mississippi River Delta system. Previously, the dominant view was that most sediments from the lower Mississippi River were shunted into the deep waters of the Gulf of Mexico, a view that appeared to be consistent with remotely sensed imagery showing large surface sediment plumes seaward of the river’s mouth (Figure 2) [Allison et al., 2012, and references therein]. However, a detailed sediment budget [Allison et al., 2012] indicates that less than 50% of the sediment load in the lower Mississippi River is transported through the Southwest and South passes—the major deepwater discharging outlets—in part because these sediments are settling out of the river’s water column and are aggrading on the channel floor [Little and Biedenharn, 2014].

Fig. 2. These true-color images from the Moderate Resolution Imaging Spectroradiometer (MODIS) aboard NASA’s Aqua satellite show the Mississippi River plume and the Louisiana continental shelf during periods of (a) low, (b) medium, and (c) and (d) high discharge as measured by the U.S. Geological Survey at Belle Chasse, La. (located in the center of each image). Note that Figure 2(d) captures a time when the Bonnet Carre Spillway, a flood control structure that shunts Mississippi River water eastward to Lake Pontchartrain and beyond, was active. White arrows in the lower right corners indicate wind direction. Salinity (conductivity, measured as practical salinity units) and wind data are based on output from the National Oceanic and Atmospheric Administration’s Northern Gulf of Mexico Operational Forecast System Finite Volume Coastal Ocean Model (NGOFS FVCOM).

Today’s conditions reflect system-scale shifts that may be coupled to changes in channel dynamics. The channel shifted from a transport-limited system (the amount of sediment transported is limited by the stream power’s ability to transport the plentiful sediment supply) during the 19th century to a supply-limited system in the 20th century (stream power is sufficient to transport the limited sediment supply).

This shift happened as the result of a series of engineering efforts between the late 19th and mid-20th centuries, which converted almost the entire Mississippi River channel south of the confluence with the Ohio River into a naturally dredged self-scouring system [Alexander et al., 2012]. This series of engineering efforts included shortening the river and adding levees that limited overbanking and increased velocities, thereby enhancing sediment export.

Now, the channel could be shifting back to a transport-limited system [Alexander et al., 2012]. An analysis of lower Mississippi River channel bathymetries from the 1960s through the 2000s indicates that whereas the channel was largely in an erosive state during the middle part of the 20th century, many reaches are now either in a dynamic equilibrium or aggrading (adding sediment) today [Little and Biedenharn, 2014].

Questions of channel dynamics are of critical concern in the years ahead because of a phenomenon known as shoaling, in which the river channel becomes shallower. Shoaling could accelerate if relative sea level rise reduces the slope of the Mississippi River or if changes to the volume of water carried by the river’s distributaries occur. Either of these scenarios could affect navigation and commerce in this critically important pathway.

River and Delta Dynamics

Across the region, efforts are under way to study the coupling between the Mississippi River and its delta. One such program is Louisiana’s Comprehensive Master Plan for a Sustainable Coast—a 50-year, $50 billion effort that, if fully implemented, is predicted to build or maintain more than 2,070 square kilometers of land and reduce storm damages by nearly $150 billion [LACPRA, 2017]. Such coastal restoration plans rely heavily on partially diverting the flow of the river to bring new sediments and freshwater into the subsiding basin [LACPRA, 2017; Fisk et al., 1954; Twilley et al., 2016]. Recent research has revealed multiple new insights.

First, systems that function like the river diversions proposed in Louisiana’s coastal master plan can deposit enough sediment to match (~1–3 centimeters thick per seasonal flood), or in some cases exceed, the high rates of relative sea level rise across the region (~1–3 centimeters per year [Esposito et al., 2013; LACPRA, 2017]). Although sediment budgets indicate that there is insufficient sediment to rebuild the entire delta over the plan’s 50-year period [Allison et al., 2012], extensive modeling conducted as part of Louisiana’s coastal restoration efforts indicates that sediment delivery through diversions can provide enough material to rebuild and maintain some critical areas [LACPRA, 2017, and references therein].

Second, the flow through a gated, controlled diversion can potentially be optimized to maximize land building and reduce adverse impacts, such as shoaling in the Mississippi River or eutrophication in the receiving basin [Peyronnin et al., 2017]. If the operations strategy is not well managed, inundation associated with diverting freshwater into existing marshes has the potential to affect their productivity [Snedden et al., 2015]. However, despite these strong river inputs, hydrodynamics in many diversion-receiving basins are heavily influenced by winds and offshore forcings [Roberts et al., 2015], environmental complexities that require robust models to support management decisions.

Port Fourchon in the lower Mississippi River Delta, the largest offshore-servicing oil and gas port in the Gulf of Mexico. This port, surrounded by wetlands and open water, exemplifies the complex network of competing demands of ecosystems, infrastructure, and offshore activities in the Mississippi River Delta region. Credit: Alexander S. Kolker and Southwings

Research is also changing the community’s view of the Mississippi River plume, particularly the development of the seasonal (summer) hypoxic zone. Although hypoxia has long been considered detrimental to fisheries, recent modeling studies indicate that the stimulative effect of nutrient enrichment on fisheries biomass is often greater than the negative effects of hypoxia caused by this enrichment [de Mutsert et al., 2016].

Furthermore, the size and shape of the plume are governed by factors other than Mississippi River discharge, the primary contributor to the hypoxia forecast; these factors include winds, storms, and fronts [Justic and Wang, 2014]. The complexity associated with all of these factors indicates the need for, and benefit of, complex, multidimensional models to inform complex management decisions in large-scale systems [e.g., Meselhe et al., 2016; LACPRA, 2017].

Applying the Science to Decision-Making

Restoring the Mississippi River Delta and associated environments is a major policy objective. This delta, as with other large rivers worldwide, is a major center for human population, transportation, industry, and critical ecosystem services [LACPRA, 2017]. The Mississippi River Delta and many of these systems are similarly threatened by relative sea level rise [Giosan et al., 2014]. As such, findings from the Mississippi River Delta and its coastal zone have the potential to influence science and inform decision-making on a global scale.

Issues facing the Mississippi River Delta involve multiple interacting natural and anthropogenic components. Management actions are required to maintain or expand functionality in such critical areas as navigation, energy production, and fisheries while also restoring damaged habitats and providing flood protection for coastal communities. Restoration, protection, and river management decisions require the use of a multitude of models linked through a series of computational inputs and outputs. These models are exemplified by their use in Louisiana’s coastal master plan.

One particularly critical area is the planned diversions of the Mississippi River, which are designed to restart natural deltaic land-building processes. Models are being developed to predict locations in the river and in the receiving waters where diversions will be most beneficial to land building [Meselhe et al., 2016]. These modeling efforts examine how diverting the Mississippi River could induce river shoaling, which would be hazardous to navigation. Modeling efforts also examine how to optimize the amount of water and sediment diverted from the river [Peyronnin et al., 2017]. Models of diversions also provide insights into the salinity and hydrodynamics of coastal bays, water quality, and fisheries [LACPRA, 2017, and references therein].

The Mississippi River cuts through the center of this image as it flows downstream from left to right. Examples of the multiple uses of the river and surrounding environments can be seen, including petrochemical facilities (circular tanks on the left side of photograph and left side of the river), navigation and shipping (note barges along the river and the railroad on the river side of the river), and residential housing (small buildings on both sides of the river). Also present along the right bank of the river and on the right of the image is the outfall channel of the Davis Pond Freshwater Diversion, a small (30–300 cubic meter per second) diversion that is part of environmental restoration and management activities. Balancing the needs of navigation and ecosystem restoration in an era of high subsidence and accelerating global sea level rise requires science-informed management solutions. Credit: Alexander S. Kolker and Southwings

Entirely different sets of models are used to evaluate how restoration and protection features affect storm surge dynamics, changes to land area, marsh type, flood risk, inundation depths, carbon sequestration, and economic impacts [LACPRA, 2017, and references therein]. These models are informed by data from local platforms monitoring salinity, water levels, and shallow subsidence rates, as well as by global climate models and global sea level projections, all of which affect model projections of storm surge and land building [LACPRA, 2017, and references therein].

All of these models are integrated into a comprehensive modeling framework that is used to inform science-based decision-making processes.

Applying the Concepts Around the World

The Mississippi River Delta region is similar to many other large river deltaic systems that also experience high rates of subsidence and accelerating global sea level rise, are subject to altered water and sediment fluxes, and are home to large human populations. Such regions include the delta systems of the Ganges-Brahmaputra, Yellow, Danube, Nile, Tigris-Euphrates, Fly, and Po rivers [Giosan et al., 2014].

The approach being implemented in the Mississippi River Delta demonstrates how complex environmental management in large river systems requires broad-based and complex science, engineering, and monitoring. With regard to delta management worldwide [e.g., Giosan et al., 2014], robust research can lead to paradigm shifts in understanding how large river systems function and interact with the ocean.

Coastal and deltaic research programs must continue to evolve, especially by incorporating ongoing changes in global sea level rise rates, to ensure that information and outcomes reflect emerging changes in landscape sustainability and human safety. Ultimately, the feedback between management and research that is ongoing in the northern Gulf of Mexico is a framework that can be applied worldwide.

Acknowledgments

This work was partially supported by the National Oceanic and Atmospheric Administration’s RESTORE Act Science Program under award NA15NOS4510229 to A.S.K. at the Louisiana Universities Marine Consortium.

The post Rethinking the River appeared first on Eos.

AGU Launches Its Centennial Celebration

Tue, 06/19/2018 - 00:15

When AGU was founded, nearly 100 years ago, the world was a very different place. However, despite the century’s worth of change between 1919 and today, the ability of Earth and space science to improve our society—and the desire of scientists to provide those benefits to humanity—has remained the same.

That’s why, as we approach the celebration of our Centennial, we are

using the energy of the past to start the next transformational era of Earth and space science; preparing to connect, inspire, and amplify the voice and contributions of the Earth and space science community for the coming decades; and bringing the global community together with the shared goal of transforming Earth and space science to meet the challenges of today and the opportunities of tomorrow.

Centennial festivities will formally commence at the 2018 AGU Fall Meeting, which will take place 10–14 December in Washington, D.C. Some exciting programs are already under way, and today I’m incredibly proud to be launching “100 Facts and Figures.”

Public Outreach

From now through the end of December 2019, AGU will be running a series of public outreach campaigns designed to highlight different aspects of Earth and space science, including its diversity, its humanity, and its impact on society. These campaigns are also designed to be replicable so that institutions, labs, and other organizations can create versions of the campaign for their own history and tie them in with AGU’s Centennial celebration. You can see the beginning of the first campaign—“100 Facts and Figures,” an evolving collection of groundbreaking facts and figures that showcase the history, breadth, and success of geoscience research, as well as the scientists whose work has had, and will have, an impact on people’s lives—by following the Centennial hashtag, #AGU100, on Twitter and Facebook. I encourage you to share these campaigns with your own networks to help us spread the word about AGU’s Centennial and the importance of Earth and space science.

Science Storytelling

Our science has an immeasurable impact on society. That’s why we are focusing on, and encouraging you to join us in, sharing inspiring stories of breakthrough scientific discoveries, amplifying the message of their impact on our global society.

Using historians, professional story gatherers, and public story-sharing opportunities, the AGU Narratives project will feature an array of individuals telling the diverse and captivating stories of how discoveries and careers were made, where inspiration was found, and how challenges big and small were overcome because of advances in science. As part of this project, we have partnered with StoryCorps, which was present at the 2017 AGU Fall Meeting, to record interviews with a number of AGU members and others. Zoe Courville and Lora Koenig’s story—“Mommy, You Can Do That: Navigating Work–Life Balance Thousands of Miles from Home”—was recorded at the 2017 Fall Meeting, then aired live on National Public Radio. We are also inviting you to use the StoryCorps app to record your own story and upload it to a dedicated AGU Centennial community on the StoryCorps Archive website.

Local Engagement

Equally important as hearing the voices of scientists is being able to interact with them, which is why we’re also encouraging scientists to consider organizing their own events. AGU’s Centennial is about amplifying the accomplishments and stories of the past 100 years to build support for the next 100 years of discoveries and solutions. To Earth and space scientists the world over I say: By communicating your science to society and inspiring the world to see how Earth and space science can create a more sustainable future for us all, you will be contributing to that important goal. We have an array of tool kits and resources to help prepare you to engage with a wide variety of audiences. If you or your institution are planning an event in celebration of AGU’s Centennial, please let us know, because we would like to share and promote your efforts. And please stay tuned, because in a few weeks we will be announcing a new competitive grant program designed to support such efforts.

A Sample of Our Centennial Programming

AGU’s Thriving Earth Exchange (TEX)—which was the first project conceptualized to commemorate the Centennial—continues to help volunteer scientists and community leaders work together to use science to tackle community issues and advance local priorities related to natural hazards, natural resources, and climate change. By 2019, TEX is aiming to launch 100 partnerships, engage more than 100 AGU members, catalyze 100 shareable solutions, and improve the lives of 10 million people.

Similarly, AGU’s headquarters building in Washington, which began its net zero renovation in early 2017, was envisioned as a living embodiment of our mission. Now that construction is nearing completion, we are excited to have the building help advance the understanding of the importance and impact of Earth and space science by showcasing real-world scientific advancement through innovative, sustainable technology and a series of Earth and space science exhibits. I can’t wait to welcome you into this exciting new space during the 2018 Fall Meeting.

AGU’s journals are home to an exciting Centennial program that is already under way. A set of papers has been commissioned to explore where major research and discovery are needed to address fundamental questions in our understanding of Earth and the solar system. Each paper will review the history of the topic and the current state of knowledge, describing major unanswered questions and challenges and discussing what is needed to achieve the vision or provide solutions over the next decades. AGU will use the collection to showcase our science to policy makers, funders, and the public.

Looking Forward to 2019

I am incredibly proud of each and every one of these examples. I’m equally proud to say that they are just one small slice of what AGU and our community have planned in celebration of our Centennial. This ever-evolving and ever-growing celebration is made by and for our community, and I’m excited to see what kind of amazing ideas you come up with over the next 18 months.

Be sure to visit the Centennial website for the latest information about events, stories, and new ways that you can participate and lend your voice and energy throughout the year. We have a library of resources to help you plan your own events and be part of the Centennial, as well as inspiring stories from scientists around the world and fascinating information about the history and future of Earth and space science. You can even sign up to become a Centennial volunteer or nominate someone to be interviewed as part of the AGU Narratives project.

Through our Centennial, we step into the next era of scientific discoveries prepared to connect, inspire, and amplify the voices and contributions of this community for decades—even centuries—to come. We look forward to having you join in this journey.

—Chris McEntee (email: agu_execdirector@agu.org), Executive Director/CEO, AGU

The post AGU Launches Its Centennial Celebration appeared first on Eos.

Studying Soil from a New Perspective

Mon, 06/18/2018 - 12:09

The study of soil moisture—defined as the amount of water held between soil particles—can be important for improving agricultural productivity, predicting floods and droughts, understanding Earth’s weather and climate, and more. The amount of moisture in topsoil depends on land surface characteristics (e.g., soil texture) and recent weather conditions, like rainfall. Several prior studies have suggested that small-scale soil moisture patterns—those spanning roughly 3–30 meters—are influenced mainly by land surface characteristics. Large-scale patterns—spanning from 1.5 kilometers to hundreds of kilometers—are influenced mainly by weather conditions.

However, a new study by Dong and Ochsner shows that these relationships are a bit more complex. Past soil moisture studies have been limited by the area of land sensed by their measurement devices, as well as by the average and maximum distances between measurements. Traditional sensors installed in the soil excel at small-scale data collection, whereas satellite-based sensors are powerful tools for observing large-scale soil moisture patterns. Between the two, at the mesoscale of roughly 1–100 kilometers, there are major gaps in data and understanding.

The researchers applied the cosmic-ray neutron method, using a mobile neutron sensor called a “rover” to detect fast-moving neutrons just above the ground. These neutrons are generated when cosmic rays interact with the atmosphere and Earth’s surface, and they are slowed by collisions with hydrogen atoms in the water held by moist soil; the method senses anywhere between 15 and 55 centimeters deep. In this way, researchers can infer the soil moisture of the surrounding landscape from the number of fast neutrons the rover counts.
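The count-to-moisture conversion can be sketched with the widely used calibration function introduced by Desilets and colleagues for cosmic-ray neutron sensing; the coefficients and the dry-soil count rate `N0` below are illustrative assumptions, not values from Dong and Ochsner's study:

```python
# Hedged sketch of the standard cosmic-ray neutron calibration function.
# The coefficients a0, a1, a2 are commonly cited generic values, and N0
# is a hypothetical site calibration constant, not the study's numbers.

def soil_moisture_from_neutrons(N, N0, a0=0.0808, a1=0.372, a2=0.115):
    """Convert a fast-neutron count rate N to gravimetric soil moisture.

    N0 is the count rate over dry soil at the same site, determined by
    calibration against soil samples. More water means more hydrogen,
    which moderates more neutrons, so moisture falls as N rises.
    """
    return a0 / (N / N0 - a1) - a2

# Example: a wetter site returns a lower count rate than a drier one.
wet = soil_moisture_from_neutrons(N=1500.0, N0=2500.0)
dry = soil_moisture_from_neutrons(N=2200.0, N0=2500.0)
assert wet > dry  # lower neutron count implies wetter soil
```

The inverse relationship between count rate and moisture is the key design point: the rover never touches the soil, yet each measurement integrates moisture over a footprint hundreds of meters across.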

To determine the scale at which soil moisture is most affected by soil texture (as opposed to by rainfall), the researchers deployed the rover along 150 kilometers of unpaved public roads in the Great Plains, where soil ranges from sand to clay. Over roughly a year, the rover collected data on soil moisture patterns at the mesoscale a total of 18 times. The researchers also compiled radar-based rainfall data along the rover path. In all but one instance, they found that soil moisture patterns were more closely tied to variations in sand content than to variations in rainfall.

This study offers a unique, detailed set of soil moisture data. It also shows that although the drivers of soil moisture at the mesoscale varied, it was generally more affected by soil texture than by rainfall. These findings have the potential to refine conceptual models about spatial patterns in soil moisture and to improve future studies in soil science, hydrology, and other fields. (Water Resources Research, https://doi.org/10.1002/2017WR021692, 2018)

—Sarah Witman, Freelance Writer

The post Studying Soil from a New Perspective appeared first on Eos.

Assessing the Future of Space-Based Experiments

Mon, 06/18/2018 - 12:08

Last September, growing interest in a new generation of potential space experiments brought together 65 members of the active-experiments community for a workshop. Los Alamos National Laboratory’s Center for Nonlinear Studies and Center for Space and Earth Science sponsored this workshop, with the goal of assessing past accomplishments, reviewing lessons learned, and developing new ideas for future projects that conduct active experiments in space.

Discussions at the workshop focused on three questions:

What have we learned from past active experiments in space?
Why are active experiments not as popular anymore?
What is the future of active experiments?

Active space experiments began early in the space age, when little was known about the near-Earth environment. Early experiments focused on very fundamental aspects of the space environment and its interaction with space vehicles. Over a span of several decades, active space-based experiments focused on such things as nuclear explosions, charged-particle beams, heaters, chemical releases, water dumps, plasma plumes, tethers, antennas, and voltage biases.

Attendees at the workshop discussed a number of important accomplishments from this period:

Active experiments stimulated critical work in basic plasma physics (waves, instabilities, structuring, transport) and spacecraft charging.
Barium and lithium releases elucidated the physics of plasma cloud dynamics, magnetic field modification, and auroral electric fields.
Electron beam experiments demonstrated long-distance beam propagation, beam excitation of plasma waves, and the physics of beam-plasma discharges.
Plasma jet experiments demonstrated plasma polarization effects and the propagation of plasma streams across magnetic fields.
The Starfish Prime experiment demonstrated the long lifetime (years) of an artificially produced radiation belt.
Ionospheric heater experiments stimulated the field of plasma turbulence and parametric instabilities research.

Since those early days, there has been a steep decline in space-based experiments, aside from ionospheric heating experiments. The workshop participants offered several reasons. First, prior experiments have collected most of the more easily obtained data. Second, in the early days of the space age, space flight was less bureaucratic. Third, as more became known about the space environment, exploration with experiments was less needed. Fourth, the community was not proactive enough in communicating their accomplishments. These aspects, combined with budgetary pressures, have restricted the interest in active experiments.

Workshop participants conveyed optimism for the future, however. Many maturing technologies (e.g., metamaterials, compact relativistic accelerators, antennas constructed of superparamagnetic nanoparticles, and cube satellites) could lead to a new era of active experiments. Diagnostics (which are always critical) have improved tremendously. Active experiments have identifiable strengths such as long-range coupling (low to high altitude, magnetosphere to ionosphere), and beam or wave propagation in the space environment can be addressed only with active experiments.

Most significantly, the workshop featured many exciting ideas for future experiments that are now under development. Some of these include the Connections Explorer (CONNEX), the Demonstration and Space Experiments (DSX) mission, the Space Measurements of a Rocket-Released Turbulence (SMART) experiment, and superparamagnetic extremely low frequency/very low frequency (ELF/VLF) antennas.

Going forward, the community needs to identify the most compelling questions that can be answered only with active experiments, and they must demonstrate the relevance of these questions to other scientific areas and to national security. Future workshops, special sessions, and presentations are necessary to engage the broader scientific community and sponsors.

The abstracts and talks presented can be found on the workshop’s website.

—Gian Luca Delzanno (email: delzanno@lanl.gov), T-5 Applied Mathematics and Plasma Physics, Los Alamos National Laboratory, N.M.; and Joseph E. Borovsky, Space Science Institute, Center for Space Plasma Physics, Boulder, Colo.

The post Assessing the Future of Space-Based Experiments appeared first on Eos.

A Closer Look at Turbulent Transport in Gravel Streambeds

Fri, 06/15/2018 - 12:00

Below a river, or any stream, lies a layer of sediment known as the hyporheic zone. There, stream water carrying nutrients, pollutants, and other dissolved substances soaks into the streambed and mixes with groundwater. Meanwhile, water in the pores of the hyporheic zone, along with any materials dissolved in it, can rise to join the stream’s flow.

This exchange of dissolved materials across a streambed helps to shape the stream ecosystem, including life in the hyporheic zone itself, as well as downstream ecosystems. In a new study, Roche et al. probe how turbulent streamflow influences this exchange process in streams with coarse, gravel-like beds.

Previous research has shown that turbulent flow of stream water enhances transport across a streambed, but the details of this process have remained unclear because of the difficulty of capturing it in action. For the new study, the researchers addressed this challenge by constructing artificial streams with tightly controlled water flow and sensors built into the streambeds.

The artificial streambeds consisted of coarse spherical beads roughly 4 centimeters in diameter, similar in size to gravel or cobbles. In the first of two series of experiments, the research team used endoscopic particle image velocimetry to visualize the flow of water in the pores between the beads. In this technique, a camera captured the flow patterns of numerous 14-micrometer-diameter glass spheres suspended in the pore water, which were illuminated by a laser.

In the second series of experiments, the scientists installed tiny salt concentration sensors over a full cross section of the 4-centimeter beads. The sensor-equipped beads were all located at the same longitudinal location in the laboratory streambed but at varying depths. Salt tracer was injected into an upstream pore, and the sensors captured its downstream transport and vertical mixing over time.

The researchers conducted both series of experiments under a range of flow conditions. The analysis revealed that patterns of concentration fluctuations closely matched patterns of turbulent velocity fluctuations in the hyporheic zone. Mixing between the salt water and freshwater was strongest in regions where turbulent eddies penetrated into the streambed.

These findings show that turbulent flow of stream water strongly links the water column to the hyporheic zone, supporting theoretical arguments that hyporheic exchange models must include turbulent surface-subsurface interactions in order to accurately simulate the transport of nutrients and pollutants in streams and rivers. Additional research into a wider range of streamflows and streambed geometries could provide further insight. (Water Resources Research, https://doi.org/10.1029/2017WR021992, 2018)

—Sarah Stanley, Freelance Writer

The post A Closer Look at Turbulent Transport in Gravel Streambeds appeared first on Eos.

Life and Death in the Deepest Depths of the Seafloor

Fri, 06/15/2018 - 11:59

In a chapter of Homer’s Odyssey, Odysseus sails to the underworld and uses necromancy—performing rituals to reawaken the dead—to ask the spirits of Hades to help him find his way home. Echoing this tale from antiquity, scientists today are plumbing the depths of the seafloor in search of answers about necromass—the decomposing remains of dead organisms—and its potential to generate life-sustaining energy for microorganisms that have become buried in seafloor sediments, a phenomenon that is poorly understood.

In a new study, Bradley et al. looked at populations of microbes living in the sediments underlying the South Pacific Gyre (SPG), a vast expanse of ocean between Australia and South America. Buried in layers of sediment on the seafloor, these microbes are likely dominated by heterotrophs, organisms that are unable to make their own food and must consume the remains of other living things to survive—just as humans, fellow heterotrophs, rely on plants and animals for food.

The SPG is the most oligotrophic ocean region on Earth, meaning it is poor in nutrients but rich in dissolved oxygen. With no light and few nutrients available to them, heterotrophs in this environment often derive energy by oxidizing the remains of dead cells and other organic materials. Like all living things, these organisms use energy at a rate that meets their power demand. However, the extraordinarily low concentrations of organic material in SPG sediments severely restrict the respiration rates of the heterotrophs found there, which is why this area of the seafloor retains comparatively more dissolved oxygen than elsewhere.

By analyzing sediment samples from the SPG and applying a mathematical model, the team found that the oxidation of necromass produced within the upper 3 meters of sediment supplies just 0.02% of the total power demand of the microbial community in that layer. This fraction decreases further deeper into the sediment, where the microbial populations are much sparser.

On the other hand, the team found that the oxidation of allochthonous material (ancient organic matter, originally delivered from elsewhere, that lies buried in the sediments), as well as of hydrogen, seems to have a much greater impact on the power supply available to the microbial community.

The team found similar results on a global scale: In layers of sediment less than 10,000 years old, oxidized necromass met 13% or less of the microbial communities’ power needs. In older layers, its contribution was inconsequential.

This study sheds light on one of nature’s murkiest environments—deep below the seafloor—and identifies key sources of power that support life and entire ecosystems in these globally expansive but ultraextreme habitats. (Journal of Geophysical Research: Biogeosciences, https://doi.org/10.1002/2017JG004186, 2018)

—Sarah Witman, Freelance Writer

The post Life and Death in the Deepest Depths of the Seafloor appeared first on Eos.

Rare Glacial River Drains Potentially Harmful Lakes

Thu, 06/14/2018 - 12:38

Intensely blue, supraglacial lakes like those dotting Petermann glacier in northwestern Greenland are beautiful but can endanger the floating tongues of ice on which they form. As such lakes fill and then drain rapidly through cracks, the resulting stresses caused by shifting loads can weaken and even fracture ice. That’s what happened to Antarctica’s Larsen B ice shelf in 2002, and it’s bad news: Ice shelves as well as the floating tongues of tidewater glaciers act like buttresses, holding glaciers back from flowing into the ocean and contributing to sea level rise.

Now the first study of the effects of supraglacial lakes on the periphery of the Greenland ice sheet has found that some of Petermann glacier’s lakes are probably draining in a gradual and innocuous way that spares the floating tongue of ice from a similar fate. An analysis of satellite images indicates that the meltwater empties into a river on the glacier’s surface that funnels the runoff into Petermann Fjord. Such glacial rivers might act as a harmless export mechanism for supraglacial lake water, the scientists suggest. However, these rivers are rare.

The Blue River on Petermann glacier in Greenland. Credit: Dave Walsh/VWPics/Alamy Stock Photo

Half as Much Water

Scientists working in Antarctica had previously shown that supraglacial lakes can destabilize ice shelves. However, such effects of these lakes in Greenland, whose ice sheet is melting much faster than Antarctica’s, have not drawn similar attention.

To study Petermann glacier, Alison Banwell, a glaciologist at the University of Colorado Boulder, and her colleagues turned to observations collected by the Landsat 8 satellite from 2014 to 2016. The team focused on the glacier’s floating terminus in Petermann Fjord.

Scrutinizing the high-resolution imagery, Banwell and her collaborators tabulated lakes, measured their areas, and estimated their depths across three melt seasons. Banwell was at the University of Cambridge in the United Kingdom at the time this study was conducted.

The researchers found that Petermann’s supraglacial lakes tended to form in June and peak in number, area, and volume around the end of the month before dissipating by August. Even when the lakes were most numerous, however, they covered less than 3% of the glacier’s tongue, the team reported this month in Annals of Glaciology. By contrast, more than 5% of Antarctica’s Larsen B ice shelf was covered in supraglacial lakes when it collapsed 16 years ago.

Banwell and her colleagues were relieved to find that supraglacial lakes covered less of Petermann glacier. “The fact that there’s half as much water on Petermann is a good sign,” said Banwell. “We’d be a bit worried if Petermann had as much water on it as Larsen B.”

Supraglacial lakes can destabilize ice shelves in several ways, studies in Antarctica have shown. Water is less effective at reflecting sunlight than ice is, so supraglacial lakes absorb more heat and cause the surrounding ice to melt more quickly. Furthermore, water can percolate into fractures in the ice, which can then expand. “The force of the water helps to propagate that crack downward,” explained Banwell. When supraglacial lakes drain rapidly—through a fracture, for example—millions of kilograms of water are quickly removed, which flexes and weakens the ice shelves.

A Glacial Relief Valve

Banwell and her colleagues found that Petermann glacier’s lakes sometimes coalesced with neighboring lakes, forming larger bodies. They also occasionally drained away in just a few hours or days. Sometimes, however, the lakes changed in volume only gradually, leading Banwell and her collaborators to presume that the water was draining away not in a gush but in a trickle. The researchers had to look only at the center of the glacier’s tongue to find the likely outlet.

Petermann glacier’s “Blue River” is a several-kilometer-long turquoise waterway bisecting the glacier’s tongue and plunging into Petermann Fjord. Lake water is likely trickling into the Blue River, slowly releasing pressure on the ice shelf and minimizing its flexure, the team concluded.

“The runoff is important,” said Richard Alley, a geoscientist at Pennsylvania State University in University Park who was not involved in the research. “We need to understand the occurrence of lakes and how they behave,” he added.

Fortunately, the Blue River appears to be a recurring feature of Petermann glacier. It was visible every melt season in the team’s satellite images, evidently reforming each summer after the winter snowfall. Thanks to this glacial relief valve, said Banwell, “we don’t think Petermann is currently prone to collapse.”

Nonetheless, other glaciers in the Arctic may be more vulnerable, Banwell said. “This is the only example [of a glacial river] in Greenland.”

—Katherine Kornei (email: hobbies4kk@gmail.com; @katherinekornei), Freelance Science Journalist

The post Rare Glacial River Drains Potentially Harmful Lakes appeared first on Eos.

Volcano Music Could Help Scientists Monitor Eruptions

Thu, 06/14/2018 - 12:25

A volcano in Ecuador with a deep cylindrical crater might be the largest musical instrument on Earth, producing unique sounds scientists could use to monitor its activity.

New infrasound recordings of Cotopaxi volcano in central Ecuador show that after a sequence of eruptions in 2015, the volcano’s crater changed shape. The deep narrow crater forced air to reverberate against the crater walls when the volcano rumbled. This created sound waves like those made by a pipe organ, where pressurized air is forced through metal pipes.

In this video, listen to the unique sounds of Cotopaxi made audible by modulation with white noise.

“It’s the largest organ pipe you’ve ever come across,” said Jeff Johnson, a volcanologist at Boise State University in Idaho and lead author of a new study detailing the findings in Geophysical Research Letters, a journal of the American Geophysical Union.

The new findings show the geometry of a volcano’s crater has a major impact on the sounds a volcano can produce. Understanding each volcano’s unique “voiceprint” can help scientists better monitor these natural hazards and alert scientists to changes going on inside the volcano that could signal an impending eruption, according to the study authors.

“Understanding how each volcano speaks is vital to understanding what’s going on,” Johnson said. “Once you realize how a volcano sounds, if there are changes to that sound, that leads us to think there are changes going on in the crater, and that causes us to pay attention.”

Instituto Geofisico researchers maintain a monitoring station at Cotopaxi volcano in central Ecuador. A new study shows Cotopaxi produces unique sounds scientists could use to monitor the volcano and its hazards. Credit: Silvia Vallejo Vargas/Instituto Geofisico of the Escuela Politecnica Nacional (Quito, Ecuador)

The ongoing eruption of Kilauea in Hawaii could be a proving ground for studying how changes to a crater’s shape influence the sounds it makes, according to Johnson. The lava lake at Kilauea’s summit drained as the magma supplying it flowed downward, which should change the tones of the infrasounds emitted by the crater.

Listening to Kilauea’s infrasound could help scientists monitor the magma depth from afar and forecast its potential eruptive hazards, according to David Fee, a volcanologist at the University of Alaska Fairbanks who was not connected to the new study. When magma levels at Kilauea’s summit drop, the magma can heat groundwater and cause explosive eruptions, which is believed to have happened at Kilauea over the past several weeks. This can change the infrasound emitted by the volcano.

“It’s really important for scientists to know how deep the crater is, if the magma level is at the same depth and if it’s interacting with the water table, which can create a significant hazard,” Fee said.

Detecting a New Kind of Sound

Cotopaxi was dormant for most of the 20th century, but it erupted several times in August of 2015. The eruptions spewed ash and gas into the air, endangering the more than 300,000 people who live near the volcano. A massive eruption could melt Cotopaxi’s immense snowcap, which would trigger massive floods and mudflows that could reach nearby cities and towns.

The 2015 eruptions were relatively minor but triggered an explosion that caused the crater floor to drop out of sight. That was when Ecuadorian researchers monitoring the volcano noticed weird sounds coming from the crater. The frequency of the sound waves was too low for humans to hear, but they were recorded by the scientists’ instruments. The researchers dubbed the sounds tornillos, the Spanish word for screws, because the sound waves looked like screw threads. They oscillated back and forth for about 90 seconds, getting smaller each time, before fading into the background.

Johnson likens it to the “old Western bar door” that once opened, swings back and forth several times before coming to rest. But because of the crater’s size – it’s more than 100 meters (300 feet) wide and about 300 meters (1,000 feet) deep – it takes five seconds for the sound waves to go through one full oscillation.
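As a rough back-of-envelope sketch (not the authors' analysis), the crater can be treated like a pipe closed at the bottom and open at the top, whose fundamental tone is a quarter-wave resonance, f = c / (4L). The speed of sound and the ring-down constant below are assumed, illustrative values:

```python
import math

# Hedged sketch: model the ~300-m-deep crater as a quarter-wave
# resonator (closed at the bottom, open at the top). The speed of
# sound c is an assumed value, not a measurement from the study.
c = 340.0   # speed of sound in air, m/s (assumed)
L = 300.0   # crater depth, m (from the article)
f = c / (4.0 * L)   # fundamental frequency, Hz
period = 1.0 / f    # seconds per oscillation

# ~0.28 Hz is far below the ~20 Hz floor of human hearing, and the
# ~3.5 s period is the same order as the ~5 s oscillation observed.

# A tornillo can be idealized as a damped sinusoid that rings down
# over ~90 s; tau here is an illustrative decay constant, not a fit.
def tornillo(t, amplitude=1.0, freq=f, tau=20.0):
    return amplitude * math.exp(-t / tau) * math.sin(2.0 * math.pi * freq * t)

# After 90 s the envelope has decayed to about 1% of its start value,
# matching the description of oscillations fading into the background.
envelope_90s = math.exp(-90.0 / 20.0)
```

The quarter-wave estimate landing within a factor of two of the observed period is consistent with the organ-pipe picture; the remaining gap would reflect temperature, gas composition, and crater geometry that this sketch ignores.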

Scientists dubbed Cotopaxi’s sounds ‘tornillos’ because the sound waves looked like screw threads. Credit: Jeff Johnson

“It’s like opening a bar door that goes back and forth for a minute and a half,” Johnson said. “It’s a beautiful signal and amazing that the natural world is able to produce this type of oscillation.”

Pipe organ players create sounds with similar characteristics by using a keyboard to force air through pipes of differing lengths. This is the first time volcanologists have recorded sounds of such low frequency and with this dramatic reverberation coming from a volcano, according to Johnson.

The crater produced tornillo sounds about once a day for the first half of 2016, before they stopped. Johnson and his colleagues are unsure exactly what caused the sounds, but they know it had something to do with the volcano’s activity and not just wind blowing across the top of the crater. Each tornillo was associated with gas coming out of the vent, Johnson said.

The researchers suspect one of two things could have excited the volcano into producing the tornillos. Part of the crater floor could have been collapsing, as can happen when magma moves under a volcano, or an explosion was taking place at the bottom of the crater. Explosions are common in open-vent craters like Cotopaxi, where gas accumulates until it reaches a pressure high enough to explode.

The post Volcano Music Could Help Scientists Monitor Eruptions appeared first on Eos.

Going with the Flow in Outer Space

Thu, 06/14/2018 - 12:24

Electric currents are a net flow of charge from one location to another. Nearly everyone in the world comes into regular contact with electric currents, such as when an electrical plug is inserted into a live socket. Near-Earth space is a place to find electric currents too, from the upper atmosphere 100 kilometers above our heads to the solar wind tens of thousands of kilometers from the planet. In an article recently published in Reviews of Geophysics, Ganushkina et al. [2018] describe the structure and dynamics of the main electric current systems in near-Earth space. Here, one of the authors gives an overview of the nature of these currents and describes how our knowledge of them has improved and could be further advanced.

What electric currents flow in near-Earth space?

There are various types of electric current flowing around the Earth. Geospace, the region of near-Earth space controlled by Earth’s internally generated magnetic field, is a fairly hard vacuum compared to the air we breathe. It still contains some particles, though; specifically, it contains plasma, a rarefied, electrically charged gas. The plasma distribution is by no means uniform, and there are sharp boundaries separating plasmas with totally different characteristics. Electric currents tend to form at these boundaries, and our review summarizes the typical structure and motion of the major current systems.

The Sun constantly emits a stream of charged particles called the solar wind. The Earth, with its magnetic field, is an obstacle to this flow. The kinetic pressure of the solar wind compresses the terrestrial magnetic field on the dayside, in front of the Earth, and a current flows across the magnetopause, the boundary surface separating Earth’s field from the interplanetary magnetic field (IMF). On the nightside, behind Earth, the magnetic field is stretched, and this is where the magnetotail current exists.

A typical change in the current systems around Earth (Chapman-Ferraro magnetopause current, green, and Region 1 field-aligned current, red) occurs when upstream conditions in the supersonic solar wind compress the dayside region of near-Earth space. On the boundary between the solar wind and geospace, there is a shift of dominance from one current system to another. Credit: Ganushkina et al., 2018, Figure 7

Near-Earth space is filled with ions and electrons that come from the solar wind and the terrestrial ionosphere. The opposite drifts of these differently charged particles result in a net charge transport, with the ring current flowing around the Earth. There are also currents flowing along magnetic field lines, called field-aligned currents, which are mainly carried by electrons and connect the magnetospheric currents with ionospheric currents.

Because of the structure of the magnetic field in the dayside region of near-Earth space, a special current system called the cut ring current (yellow) exists, where the current flow splits open for a portion of its path around the planet. Eastward (light brown) and westward (light blue) parts of the ring current flow closer to Earth. Credit: Ganushkina et al., 2018, Figure 5

Some of these current systems have unusual structure, like the cut ring current. The currents in the inner magnetosphere are often centered around the local minimum of the magnetic field, in the magnetic equatorial plane around Earth.

In the dayside region, close to the boundary with the solar wind, the magnetic field minimum shifts away from the equatorial plane, splitting the current for a segment of its path around the planet.

How do these currents vary over space and time?

As the Earth’s magnetosphere responds to changes in solar activity, the main magnetospheric current systems can undergo dramatic changes, with new transient current systems being generated. The Earth’s magnetosphere is huge: the distance from Earth’s surface to the magnetosphere’s sunward edge is about 40,000 miles (roughly 64,000 kilometers). Away from the Sun, the magnetosphere stretches very far into space, hundreds of thousands of kilometers beyond the Moon’s orbit. Currents vary over these very large distances and by orders of magnitude on time scales from minutes to hours.

One typical shift in currents is when the solar wind increases in intensity and squeezes the dayside region of near-Earth space. This squeeze systematically changes which current system dominates the boundary region between the solar wind and geospace, which has consequences for the convective motion within near-Earth space.

What do the characteristics of these currents reveal about the magnetosphere?

Magnetospheric currents, generated as the terrestrial internal magnetic field is distorted by its interaction with the solar wind and the formation of the magnetosphere, are important constituents of the dynamics of plasma around the Earth. They transport charge, mass, momentum, and energy, and they themselves generate magnetic fields that significantly distort the preexisting fields.

Understanding the relative strength and location of each electric current system is vital to accurately predicting the variations of the magnetic field related to them and the associated space weather effects.

One example is alterations of the drift paths of relativistic electrons that change the location and intensity of the radiation belts, which are the major source of damaging space weather effects on satellites.

Another example is variations in the strength of Geomagnetically Induced Currents responsible for disruptions of the transmission system operations with voltage collapse or damage to transformers on the ground.

These effects are controlled by the magnetospheric and ionospheric currents and by the Earth’s conductivity.

What have been some of the most significant research advances in this field?

It is very difficult to distinguish current systems from single-point spacecraft measurements, but a constellation of satellites can provide the necessary observations to identify and classify local current density values into large-scale current systems. A specific method for obtaining current densities, called the curlometer technique, was successfully used in the Cluster mission, with four identical spacecraft launched on similar elliptical polar orbits. The tetrahedron formed by the four spacecraft was used to study various plasma structures with characteristic sizes ranging between tens of kilometers and a few Earth radii (Earth’s radius RE is equal to 6371 km).
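The idea behind the curlometer can be sketched as follows, assuming a locally linear magnetic field across the tetrahedron: differences in the field measured along the three baselines give the field gradient, whose curl yields the current density via Ampère's law, J = (curl B) / mu0. The positions and fields below are synthetic, and this simplified version is illustrative, not the mission's operational implementation:

```python
import numpy as np

MU0 = 4e-7 * np.pi  # vacuum permeability, T*m/A

def curlometer(positions, fields):
    """Estimate current density J = (curl B) / mu0 from simultaneous
    magnetic field measurements at four spacecraft.

    Simplified sketch assuming a locally linear field between the
    spacecraft (hedged: not the Cluster mission's operational code).
    positions: (4, 3) array of spacecraft positions, m
    fields:    (4, 3) array of magnetic field vectors, T
    """
    dr = positions[1:] - positions[0]   # (3, 3) baselines from craft 0
    dB = fields[1:] - fields[0]         # (3, 3) field differences
    # Solve dr @ G = dB for the field gradient, G[i, j] = dBj/dxi.
    G = np.linalg.solve(dr, dB)
    curl = np.array([G[1, 2] - G[2, 1],
                     G[2, 0] - G[0, 2],
                     G[0, 1] - G[1, 0]])
    return curl / MU0  # Ampere's law, neglecting displacement current

# Synthetic check: B = (0, g*x, 0) has curl (0, 0, g), so J_z = g/mu0.
g = 1e-12  # T per m, an arbitrary test gradient
pos = np.array([[0, 0, 0], [1e5, 0, 0], [0, 1e5, 0], [0, 0, 1e5]], float)
B = np.array([[0.0, g * x, 0.0] for x, _, _ in pos])
J = curlometer(pos, B)
```

The method degrades when the tetrahedron flattens or when the field varies nonlinearly over the spacecraft separation, which is why the text stresses that characteristic structure sizes must be comparable to or larger than the formation.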

Another significant advancement is the advent of realistic global magnetosphere models that are capable of reproducing the structure and dynamics of the magnetosphere, including these current systems. While there is still a long way to go to fully capture all of the physics happening in near-Earth space, these high-end computing tools have led to many insights in how charged particles flow through outer space and how they interact with Earth’s magnetic field.

What are some of the unresolved questions where additional research, data or modeling is needed?

As noted above, the identification of specific current systems from in situ spacecraft measurements is extremely difficult. Taken alone, a current density value at a single point in the magnetosphere cannot be identified as part of a particular current system. Current density values at multiple locations must be synthesized into a regional or global scenario of possible current closure. Even this may not produce a unique current system pattern, and numerical models can help connect the localized current density values into a synoptic mapping of current flow through geospace.

At the same time, there are several multi-spacecraft missions with different orbits that can provide the right distribution of measurements for global current system analysis: the Cluster mission, with four spacecraft regularly passing through the inner, outer, and high-latitude magnetosphere; the THEMIS mission, originally with five spacecraft in highly elliptical, low-inclination orbits; the two Van Allen Probes in the inner magnetosphere; and the four MMS spacecraft, with an apogee of 12 RE, eventually moving to 25 RE.

Low-Earth-orbiting platforms, such as the AMPERE experiment (which uses the Iridium satellite constellation) and the Defense Meteorological Satellite Program (DMSP) spacecraft, can provide a highly complementary data set to the magnetospheric missions, allowing for analysis of current system connections and the interplay between the ionosphere and magnetosphere.

—Natalia Yu Ganushkina, Department of Climate and Space Sciences and Engineering, University of Michigan and Finnish Meteorological Institute; email: ganuna@umich.edu

The post Going with the Flow in Outer Space appeared first on Eos.
