Eos: Science News by AGU

Visualizing Science: How Color Determines What We See

Thu, 05/21/2020 - 12:23

Color strongly influences the way we perceive information, especially when that information is dense, multidimensional, and nuanced—as is often the case in scientific data sets. Choosing colors to visually represent data can thus be hugely important in interpreting and presenting scientific results accurately and effectively.

“Language is inherently biased, but through visualization, we can let the data speak for [themselves],” said Phillip Wolfram, an Earth system modeler and computational fluid dynamicist at Los Alamos National Laboratory. At Los Alamos, data visualizations are as ubiquitous as the sagebrush that embroiders the nearby desert. Every day, expert teams wrangle, render, and color-encode swaths of data for interpretation by the lab’s Earth and computer scientists. Choosing colors to represent various properties of the data, a step that ranges from an iterative, responsive process to a hasty afterthought, is the final barrier between painstaking data collection and long-anticipated analysis and discovery.

Most visualization software comes equipped with colormaps—a selection of standard color-encoding gradients that researchers can, in a matter of seconds, apply to display and evaluate their data. But not all data visualizations are created equal, and despite a proliferation of literature denouncing standard maps like the traditional rainbow colormap [e.g., Borland and Taylor, 2007], they pervade visualizations from basic bar graphs to complex depictions of biogeochemical data.

At AGU’s Fall Meeting 2019 in San Francisco, Calif., this past December, row upon row of posters in the convention center’s vast main hall featured the same bright, standard colormaps adorning visualizations of temperature scales, chlorophyll concentrations, land elevations, and a host of other data.

“The same colormap applied to a diverse array of data gets monotonous and confusing,” said Rick Saltus, a senior research scientist with the Cooperative Institute for Research in Environmental Sciences (CIRES) at the University of Colorado Boulder. “You’re trying to communicate both effectively and efficiently, and that’s impeded if the viewer is presented with a variety of concepts, all illustrated using identical color mappings.”

Color researchers and visualization experts around the world are working to change this status quo. A number of groups are developing new tools to help scientists image increasingly complex data sets more accurately and intuitively, and with higher fidelity, using context as a guide to ensure an appropriate balance of hue, luminance, and saturation.

How We See Color

More than a century ago, Albert H. Munsell built upon the work of Isaac Newton and Johann Wolfgang von Goethe to compose our modern concept of color “mapping” [Munsell, 2015]. Munsell’s research produced the first perceptually ordered color space—a three-dimensional plot in which the axes represent hue (color), value (lightness or darkness), and chroma (intensity of color) [Moreland, 2016].

Decades earlier, in the mid-1800s, Hermann Grassmann’s theory of linear algebra decrypted abstract math, revealing the origami-like properties of higher dimensions. Grassmann thereby created the concept of vector space, allowing for the approximate calculation of perceived color within a defined area [Grassmann, 1844]. The study of color no longer depended on approximation but could instead be coded numerically and plotted along a parabola. This level of accuracy is necessary because color perception is a subjective experience dependent on light, simultaneous contrast (the phenomenon of juxtaposed colors affecting each color’s appearance; see Figure 1), and rod and cone photoreceptors within the viewer’s eyes [Albers, 2006; Itten, 1970].

Fig. 1. Detail of topography between the Filchner-Ronne and Ross Ice Shelves in West Antarctica. This image illustrates the effects of simultaneous contrast within the traditional rainbow colormap (left), increased detail in a desaturated version of the traditional rainbow (center), and an analogous color palette for greater aesthetic quality as well as discriminatory power (right). Credit: Graphic created by Francesca Samsel with data processed using E3SM

In the Eye of the Beholder

Contemporary “color spaces” (Figure 2) can be divided into two categories: absolute and nonabsolute. Absolute color spaces define color in terms of human perception. More familiar nonabsolute color spaces such as red–green–blue (RGB) and cyan–magenta–yellow–black (CMYK) define color based on self-contained models that rely on either input devices, like a camera, or output devices, like a monitor or printer.

The popular rainbow map was defined within a nonabsolute color space—its gradient spans many highly saturated hues without consideration for their mathematical separation within an absolute (perceptual) space. This arrangement creates extremely high contrast between hues, visually demarcating different values in a data set and thereby propelling the rainbow map to default status among most scientists.
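To see what “mathematical separation within an absolute (perceptual) space” means in practice, consider a minimal sketch (assuming only NumPy and Matplotlib, with the standard sRGB/D65 conversion constants) that samples two built-in colormaps, converts them to CIELAB, and measures the perceptual distance (ΔE, the Euclidean distance in Lab coordinates) between neighboring entries:

```python
import numpy as np
import matplotlib.pyplot as plt

def srgb_to_lab(rgb):
    """Convert sRGB values in [0, 1] to CIELAB (D65 white point)."""
    rgb = np.asarray(rgb, dtype=float)
    # Undo the sRGB gamma encoding to get linear RGB.
    linear = np.where(rgb <= 0.04045, rgb / 12.92, ((rgb + 0.055) / 1.055) ** 2.4)
    # Linear RGB -> CIE XYZ (standard sRGB/D65 matrix).
    m = np.array([[0.4124, 0.3576, 0.1805],
                  [0.2126, 0.7152, 0.0722],
                  [0.0193, 0.1192, 0.9505]])
    xyz = linear @ m.T
    # Normalize by the D65 reference white.
    xyz /= np.array([0.95047, 1.0, 1.08883])
    f = np.where(xyz > (6 / 29) ** 3, np.cbrt(xyz), xyz / (3 * (6 / 29) ** 2) + 4 / 29)
    L = 116 * f[..., 1] - 16
    a = 500 * (f[..., 0] - f[..., 1])
    b = 200 * (f[..., 1] - f[..., 2])
    return np.stack([L, a, b], axis=-1)

# Sample two colormaps at evenly spaced data values and compare step sizes.
t = np.linspace(0, 1, 256)
for name in ["jet", "viridis"]:
    lab = srgb_to_lab(plt.get_cmap(name)(t)[:, :3])
    steps = np.linalg.norm(np.diff(lab, axis=0), axis=1)  # Delta-E between neighbors
    print(f"{name}: perceptual step size varies {steps.max() / steps.min():.1f}-fold")
```

For a perceptually uniform map such as viridis, the steps are nearly constant; for the rainbow-style jet, they swing widely, which is precisely the nonuniformity critics describe.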

Fig. 2. Two types of color space established by the International Commission on Illumination (CIE) in 1931: the nonabsolute RGB color space illustrated using a cube (left), and the CIELAB color space (right), one of the most widely used absolute color spaces. The full parabola of CIELAB color space illustrates the full visible spectrum of hues typically discernible by the human eye. Within this parabola, the white triangle encloses all hues available in Adobe RGB color space. (CIELAB is a three-dimensional space shown in two dimensions here for simplicity.) Credit: SharkD, CC BY-SA 4.0 (left); public domain (right)

In recent years, however, researchers have challenged the rainbow’s status and common usage, which proliferated without full consideration of its influence on data representation. Rainbow mapping can wash out oceans of data in glaring neon green, creating false artifacts, color interference, and attention bias (see, e.g., the left-hand graph in the figure at the beginning of this article) [Borland and Taylor, 2007; Liu and Heer, 2018; Ware et al., 2019, 2017]. Perceptual scientists have attempted to remedy this issue through “perceptually equalized colormaps”—maps made with uniformly spaced values in absolute color space—but these maps, along with the rainbow colormap and its cousins, were all created independently of the data they are used to represent. This means they were made for general use as opposed to considering each data set’s unique properties and needs. In an age of rapidly growing and increasingly complex data, many visualization experts agree that their utility is limited.
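Constructing such a perceptually equalized map can be as simple as interpolating in an absolute space. Here is a hedged sketch under the same assumptions as above (NumPy, standard D65 constants, two arbitrarily chosen anchor colors): equal steps in the data correspond to equal ΔE steps along a straight line in CIELAB.

```python
import numpy as np

def lab_to_srgb(lab):
    """Convert CIELAB (D65) back to gamma-encoded sRGB in [0, 1]."""
    L, a, b = np.moveaxis(np.asarray(lab, dtype=float), -1, 0)
    fy = (L + 16) / 116
    fx, fz = fy + a / 500, fy - b / 200
    def finv(f):
        return np.where(f > 6 / 29, f ** 3, 3 * (6 / 29) ** 2 * (f - 4 / 29))
    xyz = np.stack([finv(fx), finv(fy), finv(fz)], axis=-1)
    xyz *= np.array([0.95047, 1.0, 1.08883])  # D65 reference white
    m_inv = np.array([[ 3.2406, -1.5372, -0.4986],
                      [-0.9689,  1.8758,  0.0415],
                      [ 0.0557, -0.2040,  1.0570]])
    # Clip out-of-gamut values, then reapply the sRGB gamma encoding.
    linear = np.clip(xyz @ m_inv.T, 0, 1)
    return np.where(linear <= 0.0031308, 12.92 * linear,
                    1.055 * linear ** (1 / 2.4) - 0.055)

# Equal data steps map to equal Delta-E steps: interpolate linearly in Lab.
dark_blue, pale_yellow = np.array([25, 20, -50]), np.array([95, -10, 60])
t = np.linspace(0, 1, 256)[:, None]
ramp = lab_to_srgb((1 - t) * dark_blue + t * pale_yellow)  # (256, 3) sRGB colormap
```

Wrapping ramp in matplotlib.colors.ListedColormap(ramp) would yield a drop-in colormap for plotting.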

Balancing Familiarity with Clarity

Scientists and visualization professionals attempting to heed the latest color mapping research nonetheless struggle to break away from standard, data-independent maps like the traditional rainbow, viridis, the cool–warm divergent, Jet, and Turbo (Google’s increasingly popular next-generation rainbow). Impeding the adoption of newer approaches are scarcities of convenient and user-friendly color mapping resources, of available expertise and guidance, of precedents within the community, and of time available to researchers to familiarize themselves with new color conventions [Moreland, 2016].

“When people approach a visualization, they have expectations of how visual features will map onto concepts,” said Karen Schloss, a psychologist at the University of Wisconsin–Madison and the Wisconsin Institute for Discovery. Schloss and her team are working to tackle these implementation issues and understand trade-offs between deeply ingrained, communal familiarity and the next generation of color tools.

“We refer to these expectations as inferred mappings,” she said. “If someone has been working in a particular field their whole [life], they have this inferred mapping for colors and the values they represent. We are trying to understand these biases while also being careful about recommending that these experts just abandon their conventions completely. We need to find a balance.”

In search of that balance, visualization experts are focusing their efforts on understanding what scientists need from their data and on how to address those needs without compromising the information contained in the data themselves.

What Do Scientists Need?

Francesca Samsel, a research scientist at the Texas Advanced Computing Center (TACC) at the University of Texas at Austin, and her team describe scientists’ needs with respect to data visualization using three categories—feature identification, exploration, and communication—as well as subcategories of pinpointing outliers and determining relationships. While interpreting a large data set, scientists are usually looking for specific features (e.g., the direction of flow of ocean currents or water temperatures in certain locations) within known data ranges; or they are exploring the data holistically to make general observations; or they are looking to communicate specific properties of the data to colleagues, peers, or the public. Occasionally, a researcher may be interested only in outliers, or in the way one variable affects another—for example, how does water temperature change when two currents meet?

Visualization is how scientists interact with the quantitative outcomes of their research and how they make data-based arguments, said Wolfram. He regularly uses visualizations, including overhead plots representing the view from above (e.g., from satellites), to explore Earth systems and climate data in his work at Los Alamos. “What I’m looking for is tightly coupled to the science question,” he said. “I’m typically trying to understand geospatial relationships, particularly in overhead plot data, for surface features like [ocean] eddies.” Unlike standard maps such as Jet, advanced color mapping tools help prevent significant losses of feature detail in the data Wolfram analyzes (Figure 3).

Fig. 3. Comparison of the same data portrayed with a desaturated rainbow colormap (top), which is often used to increase visual detail, and with a colormap designed to provide detail in outlying data ranges (bottom). The specialized colormap provides detailed information about the kinetic energy within eddies in the Agulhas Current, a major current in the Indian Ocean, whereas the rainbow colormap simply identifies the eddies. Credit: Graphic created by Francesca Samsel with data processed and provided by M. Petersen, LANL, using MPAS-Ocean

Samsel and others argue that non-data-dependent color mapping strategies can in fact perpetuate bias if the hues are not arranged in a familiar order (e.g., rainbow order), if the luminance is not specifically accounted for, or if multiple gradients within a map are not arranged for a specific data set [Borland and Taylor, 2007]. The importance of visualization throughout the research process necessitates high-level tools that maximize information and minimize obstruction caused by any of these color-related issues. Researchers Samsel, Schloss, and Danielle Szafir, an assistant professor and director of the VisuaLab at the University of Colorado Boulder, have taken concrete steps to create such tools, maintaining the goal of intuitive operation.

A New Set of Color Tools

Samsel, who trained and worked for 25 years as an artist before pivoting to visualization, uses her knowledge of color theory in tackling the perceptual challenges of crafting colormaps. “We’ve discovered in the course of our research that presenting scientists with perceptually equalized colormaps is not always the most beneficial solution” for providing the kind of resolution scientists need within their data, Samsel said. Color interactions carry nuance and complexity that affect a viewer’s associative response and how the viewer judges the relative importance of different features within the data, she explained.

Samsel’s research spurred the creation of ColorMoves [Samsel et al., 2018], an applied tool for interactively fine-tuning colormaps to fit the needs of different data sets. The online interface for the tool provides sets of maps focused on achieving increased discriminative power [Ware et al., 2019] while reflecting the palette of the natural world—something called “semantic association.” For example, many people associate blue with ocean data and green with land data.

Beginning in the 1990s, perceptual researchers established [Ware et al., 2019; Rogowitz, 2013] that the human eye displays greater sensitivity to differences in luminance (perceived brightness) than to hue when distinguishing patterns within densely packed data points—findings that Samsel and her colleagues have reaffirmed, in part through color theory. Using this information in concert with gradations of hue and saturation, Samsel created linear (one hue gradient), divergent (two hue gradients that meet in the middle), and what she calls “wave” colormaps—maps that cycle through luminance distribution across many hues (Figure 4). “This creates a greater density of contrast throughout the map, which resolves many more features on continuous data,” said Samsel.
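To make the “wave” idea concrete, here is a rough sketch using the standard library’s colorsys and Matplotlib. HLS lightness serves as a crude stand-in for perceived luminance (Samsel’s actual maps are built in perceptual color spaces, so this illustrates only the geometric idea, not her method):

```python
import colorsys
import numpy as np
from matplotlib.colors import ListedColormap

def wave_colormap(n=256, n_waves=4, hue_start=0.66, hue_end=0.0):
    """Sweep hue while cycling lightness so contrast recurs across the range."""
    t = np.linspace(0, 1, n)
    hue = hue_start + (hue_end - hue_start) * t            # blue-to-red sweep
    light = 0.45 + 0.25 * np.sin(2 * np.pi * n_waves * t)  # oscillating lightness
    colors = [colorsys.hls_to_rgb(h, l, 0.9) for h, l in zip(hue, light)]
    return ListedColormap(colors, name="wave")

cmap = wave_colormap()  # use with, e.g., plt.imshow(data, cmap=cmap)
```

Because the lightness rises and falls repeatedly, fine structure remains resolvable in every part of the data range rather than only at the ends.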

Fig. 4. In a “wave” colormap like this one created by Francesca Samsel, selective saturation is used to isolate and “foreground” specific data ranges so these data are easier to follow over time. In the colormap, desaturated colors surround a saturated section to focus attention while providing context. Credit: Graphic created by Francesca Samsel with data processed and provided by M. Petersen, LANL, using MPAS-Ocean

The ColorMoves interface enables users to drop multiple colormaps onto their data and adjust the result interactively, seeing the changes on their data in real time and allocating hue and contrast wherever appropriate (Figure 5). “It wasn’t necessarily intended as a tool for data exploration, but scientists have identified that as a priority” and have been using it for exploration, said Samsel.

Fig. 5. The colormapping tool ColorMoves enables users to drop multiple colormaps onto their data, adjust the resulting image interactively, and see the changes on their data in real time. Within this interface are (a) a selection menu of color scales and discrete colors; (b) a viewing window; (c) controls including a colormap splitting tool, a color scale inserter, and undo and redo buttons; and (d) a data distribution histogram.
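The splicing at the heart of this interface can be imitated noninteractively. In a minimal sketch (Matplotlib only; the colormap names, split point, and the choice to reserve contrast for the top 20% of the range are arbitrary illustrations, not ColorMoves’ behavior), two maps are concatenated so most of the color contrast lands where the features of interest live:

```python
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.colors import ListedColormap

# Hypothetical example: devote most of the color contrast to the upper
# 20% of the normalized data range, where the features of interest live.
split = 0.8
lower = plt.get_cmap("Greys")(np.linspace(0.1, 0.6, int(256 * split)))
upper = plt.get_cmap("plasma")(np.linspace(0.0, 1.0, 256 - int(256 * split)))
spliced = ListedColormap(np.vstack([lower, upper]), name="spliced")
```

Applied with plt.imshow(data_normalized, cmap=spliced), the muted grays provide context for the bulk of the range while the saturated colors resolve detail near the maximum.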

Schloss, who runs the Visual Reasoning Lab at the University of Wisconsin–Madison, focuses on making visual communication more effective and efficient through the study of color cognition, targeting the trade-offs in color mapping between high contrast and aestheticism. “People see customization as a huge asset for creating visualizations,” she said, “and I think making that capability easily accessible may encourage people to take more care in how they are color encoding and presenting their data.”

As part of an effort to optimize colormaps for visualizations, Schloss and her colleagues created a tool called Colorgorical. The tool features a series of adjustable sliders for customizing a palette based on perceptual distance between hues, the differences between hue names, how close together the hues are on the color wheel, and the specificity of hue names (e.g., peacock or sapphire versus blue) [Heer and Stone, 2012]. Users can also select a hue and luminance range within which the custom palette should fall, and they can construct a color palette around a “seed color” (Figure 6).
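Colorgorical’s scoring blends several such terms, but the perceptual-distance idea alone can be sketched with a greedy farthest-point picker in CIELAB (NumPy only; the candidate pool and seed color are arbitrary, and this is an illustration of the concept rather than Colorgorical’s algorithm):

```python
import numpy as np

def greedy_palette(candidates_lab, n_colors, seed_lab):
    """Pick colors one at a time, maximizing the minimum CIELAB distance
    to everything already chosen (a rough stand-in for a
    perceptual-distance score)."""
    chosen = [np.asarray(seed_lab, dtype=float)]
    pool = np.asarray(candidates_lab, dtype=float)
    for _ in range(n_colors - 1):
        # Distance from each candidate to its nearest already-chosen color.
        dists = np.min([np.linalg.norm(pool - c, axis=1) for c in chosen], axis=0)
        chosen.append(pool[np.argmax(dists)])
    return np.array(chosen)

# Candidate colors scattered through Lab space; seed on a mid-blue.
rng = np.random.default_rng(0)
candidates = rng.uniform([20, -60, -60], [90, 60, 60], size=(2000, 3))
palette = greedy_palette(candidates, n_colors=6, seed_lab=[40, 10, -45])
```

Each added color is as far as possible, perceptually, from all of its predecessors, which is why such palettes stay distinguishable even for categorical data with many classes.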

Fig. 6. The online tool Colorgorical allows users to choose the number of discrete colors they need to depict categorical data, along with many other customizations. The tool creates a palette based on the user’s specifications. These palettes can be downloaded or copied in any of the color spaces provided (top).

Designer in a Box

Schloss, Samsel, and Szafir agree that future research in this area lies in producing automatically generated colormaps that are based on the custom work of designers and that are ready to integrate with the visualization software scientists are already using.

Szafir and her team at the ATLAS Institute’s Department of Computer Science are already on the path to making this vision a reality. “We started thinking about how to meet the user where they are,” Szafir said.

The attention to detail and creativity that individual designers can offer are unfortunately not very scalable. So Szafir’s team wondered whether they could create a product that was effectively a “designer in a box.” “How can we capture designer practices without asking them to distill their entire field down to an extremely simplified, and therefore incomplete, set of rules?” Szafir asked.

Szafir and her colleagues constructed machine learning models based on 222 expert-designed color gradients and used the results to automatically generate maps that mimic designer practices. The final maps “support accurate and aesthetically pleasing visualizations at least as well as designer [maps] and…outperform conventional mathematical approaches” to colormap automation, Szafir and colleagues wrote in a recent study [Smart et al., 2020].

Despite her team’s progress, complete automation has drawbacks, Szafir said. An algorithm needs a set of data to “train” on—in this context, a batch of colormaps with similar curves in 3-D color space. The algorithm derives a set of rules based on this group of maps, then generates new maps based on those rules. However, “a lot of what comes out of the artistic community that is truly expressive, creative, and engaging comes from breaking these conventions,” she said. “An artist knows where and when and how to break those rules, and we don’t know if we can produce that level of comparable variation [in a program] successfully.”

Both Samsel and Schloss are seeking avenues to merge their diverse expertise with Szafir in the near future, continuing to structure their respective work around the data visualization needs of scientists. “We see a future of using new tools to measure the spatial density of data, creating an automation process that interpolates, aligns, and allocates contrast within a colormap to match [those] data,” said Samsel. Such a program would concentrate variance in hue and luminance where the data are densest, allowing a scientist to visualize nuance and detail where they count.

Conveying an Accurate Message

Scientific visualizations play multiple roles, helping people quantify, interpret, evaluate, and communicate information. Their importance in the exploration and discovery of data is immense and is growing with advances in computational power, yet visualization’s single most effective encoder—color—remains vastly understudied. Although the macroscale effects of such neglect on public perception of issues like climate change are as yet unknown, many groups are beginning to devote greater resources to designing their visualizations with the public in mind.

Ed Hawkins, a lead author of the upcoming sixth Intergovernmental Panel on Climate Change (IPCC) report and creator of the now-viral “Warming Stripes” graphic, said the IPCC pays multiple graphic designers to transform complex visualizations into simplified graphics. “We have to be able to reach a very broad audience,” Hawkins added. “Not just for general communication, but also to inform policy decisions and to help people respond to risks that threaten their way of life.” Hawkins and his team spend “a lot of time” focusing on color blindness issues in addition to readability and semantic understanding of color.

Keeping in mind the difference in perception between those publishing reports and those reading them is imperative, according to Lyn Bartram, a professor in the School of Interactive Art and Technology at Simon Fraser University and director of the Vancouver Institute for Visual Analytics. “Rather than engaging people in the experience of understanding big data, we just sort of throw the facts out there and wash our hands of the affair,” she said. “Data visualization is a language, and, foundationally, you’re trying to tell a story. Color is a huge part of this.”

If researchers do not behave as if they are in conversation with their audience, Bartram said, their work will have little impact. “The democratization of data visualization means that it has become media; it is no longer just a means to an end for scientists,” she said. “Visualization has become so much more than just a tool. It is now a part of our conversations and decision-making as a society at large.”

The Long-Lasting Legacy of Deep-Sea Mining

Thu, 05/21/2020 - 12:17

Mining for rare metals can involve a good amount of detective work. It can take time and skill to find the most abundant sources. But in the deep ocean, metallic deposits sit atop the seafloor in full view—a tantalizing sight for those interested in harvesting polymetallic nodules.

Scooping up nodules requires mechanical skimming of the ocean floor, which disrupts the upper centimeters of sediment. This disturbance has rippling effects on sea life, but the severity and duration of ecological impact have remained largely unknown.

In a new study, researchers dove deep to look at mining’s impact on microbial communities. They found that decades later, benthic microorganisms haven’t recovered, and the researchers estimate it would take at least 50 years for some ecosystem functions to return to predisturbance conditions.

Disturbing the Peace

In 1989, scientists began a deep-sea mining experiment called the Disturbance and Recolonization experiment (DISCOL) in the Peru Basin of the South Pacific Ocean. The study simulated nodule mining by dragging a plough-harrow device over an 11-square-kilometer area, cutting and reworking the upper 10 to 15 centimeters of seafloor sediments.

The impact of disturbing the seafloor for mining activities remains decades after the initial disturbance. Credit: ROV-Team/GEOMAR

Since the start of DISCOL, scientists have been visiting the basin to monitor the effects of mining on benthic life. In a new study in Science Advances, researchers focused on the smaller communities of organisms found at depth.

“We tried to answer how long a disturbance of the deep-sea floor ecosystem by simulated nodule mining could affect benthic microorganisms and their role in the ecosystem,” said Tobias Vonnahme, a marine biologist at the Arctic University of Norway, and Antje Boetius, director of the Alfred Wegener Institute. (The researchers responded to email requests from Eos as a group and will be referred to as “the team.”)

Monitoring Microorganisms

At the DISCOL site, the team deployed their cameras and sampling equipment and got their first look at the seafloor. “First, we saw undisturbed seafloor covered by manganese nodules and larger animals, such as octopuses, fish, and colorful sea cucumbers,” they said. But the troughs soon came into view—even 26 years after the DISCOL experiment, the plough tracks were pronounced.

The researchers took sediment cores of the seafloor both within the disturbed area and in fresh, 5-day-old tracks. “Thanks to novel robotic technologies, we were able to quantify the long-lasting impacts on microbial diversity and function in relation to seafloor integrity,” they noted.

After analyzing the cores from the seafloor, the team found that in the 26-year-old tracks, microbial activity was reduced fourfold. Additionally, the mass of microorganisms was reduced by about 30% in the top 1 centimeter of disturbed sediment. In fresh tracks, the microbes were reduced by about half. They also found lower organic matter turnover, reduced nitrogen cycling, and lower microbial growth rates in disturbed areas.

“Benthic life—including microorganisms, which carry essential functions such as nutrient recycling—need[s] more than 26 years to recover from the loss of seafloor integrity,” said the team.

They add that on the basis of the microbial activities they observed in the most disturbed areas, it would take at least 50 years for some functions to return. “Considering the low sedimentation rates, [full] recovery will take much longer,” they noted.

“The self-healing of the ecosystem is very limited,” they concluded.

This is a novel study, said Maria Pachiadaki, an assistant scientist at the Woods Hole Oceanographic Institution who was not part of the study. She added that it’s the first time researchers have focused on the impacts of deep-sea mining on the microbial community.

Pachiadaki and her colleagues previously hypothesized that these types of “disturbances would also impact microbes, plus ecosystem functions, because microbes mediate the entire biogeochemistry of their environment.” She said this study confirms their suspicions and gives a long-term record of what happens after a mining disturbance.

“Life as we know it starts with microbes,” said Pachiadaki. She said one striking finding of the study was that the carbon fixation rates—or how inorganic carbon is transformed to organic carbon—decreased substantially in disturbed sites.

Pachiadaki noted another substantial finding was the identity of the microorganisms in the benthic sediment. Specifically, the microbial communities were enriched with nitrifiers. “It’s a group of organisms that make nitrogen bioavailable,” she explained. “Nitrogen is one of the essential micronutrients…and the limiting factor of productivity.”

The Future of Deep-Sea Mining

“Our work shows the potential long-term impact of deep-sea mining when seafloor integrity is reduced,” said the team, adding that their research can be used to shape guidelines for deep-sea mining explorations.

“This is an excellent example of how scientists can guide policy makers,” said Pachiadaki.

“If there is pressure moving towards deep-sea mining, there needs to be an impact assessment,” she said. “And it can’t be a short-term process—it needs to be a long-term evaluation.”

—Sarah Derouin (@Sarah_Derouin), Science Writer

NSF Plots a Course for the Next Decade of Earth Sciences Research

Wed, 05/20/2020 - 19:49

The Earth sciences have made great strides in the past decade. New computational methods and real-time data gathering are revealing exciting and surprising results about the interconnected nature of Earth systems. At the same time, the looming threat of climate change has added an unprecedented sense of urgency to the field.

A new report released this week by the National Academies of Sciences, Engineering, and Medicine, A Vision for NSF Earth Sciences 2020–2030: Earth in Time, lays out recommendations for how the National Science Foundation (NSF) should invest in the next decade of Earth sciences research. The report highlights 12 priority questions for the field to explore from 2020 to 2030, from the deceptively simple “What is an earthquake?” to the more urgent “How can Earth science research reduce the risk and toll of geohazards?” It also highlights which of these questions already have solid foundations of support and which, like continental drilling and archiving of physical materials, NSF should develop more support for in the coming years.

“We are poised to understand the wonders of the natural world,” said Kate Huntington, an Earth sciences professor at the University of Washington. “At the same time, these questions are dealing with issues that are urgently important to the future of human societies.”

Representing Community Voices

Selecting a handful of priority science questions for a field as diverse as the Earth sciences was no easy feat. The 20 committee members were selected to span a wide range of expertise and career stages, and they met every other month for a year and a half.

The committee reviewed several dozen white papers, reports, and review papers, with each paper read by at least two committee members. The committee also held listening sessions at big conferences like AGU’s Fall Meeting, and open sessions to get input from groups that are often less well represented, like early-career researchers, researchers involved in industry, and researchers from smaller or minority-serving institutions. It also conducted an online survey that garnered over 300 comments.

“NSF is a model to other funding agencies partly because it solicits this sort of input on the needs of the research community and society,” said Jane Willenbring, an associate professor at the University of California, San Diego’s Scripps Institution of Oceanography who was not on the committee.

“This is a community consensus report,” said Andrea Dutton, a professor of geology at the University of Wisconsin–Madison. “We are representing the community, and this is their voice in the report.”

The committee received over 100 questions that it whittled down to just 12. To make the final cut, each of the 20 committee members had to agree that a question was both intellectually compelling and poised for a major breakthrough in the next decade. In other words, why was this important, and why now?

“In some cases, it might be a technological advance, or a conceptual advance—a new way of thinking about things—or, in some cases, it might be an urgency,” Dutton said.

The pressing need to address climate change, for instance, emerged as a thread running through many of the questions. “It’s connected to so many parts of the Earth system,” Dutton said. “There are a lot of opportunities for Earth scientists to look at this through an Earth science lens to help us respond to the rapid changes that we are experiencing now.”

“What emerged was a suite of questions that turned out to be very interconnected and to span, really, from the core to the clouds,” Huntington said. “I think this highlights the unexpected ways the different components of the Earth system connect and interact.”

New Tools and a Diverse Workforce

To address these questions, the committee made specific recommendations for what NSF should fund and how the Earth sciences workforce can be developed and supported. The members recommended funding for a national consortium for geochronology, a very large multianvil press user facility, and a near-surface geophysical center, as well as further development of SZ4D, a new initiative that studies hazards associated with subduction zones. They also recommended the exploration of possible continental critical zone and scientific drilling initiatives and the development of a community working group to build capacity for archiving and curating physical samples.

Advancing the field also requires new collaborations and an investment in a more diverse and technologically savvy workforce. “I audibly cheered when I read the many callouts to the pressing need to study the dynamic interaction of life, landscape, water, and climate, and how studies of the Earth system require collaborations with scientists typically funded by other GEO divisions and other funding agencies,” Willenbring said.

She also noted that she was pleased to see the report explicitly call out harassment and discrimination as reasons for lack of diversity in the field. “We can’t just get people through the geoscience door. We have to make them feel at home. To me, ‘at home’ means welcome, safe, and valued,” Willenbring said.

“This is an all-hands-on-deck moment for Earth science,” Huntington said. “We need to be demographically broad and bring those diverse perspectives needed to advance the science…. We have this opportunity to really change business as usual and change the way Earth science embraces diversity and inclusion in how we make the workforce of the future.”

—Rachel Fritts (@rachel_fritts), Science Writer

Cold Cuts: Glaciers Sculpt Steep Peaks

Wed, 05/20/2020 - 12:28

Mountain ranges in cold parts of the planet have classic, glacier-shaped good looks: sheer pinnacles that rise above U-shaped valleys and round basins called cirques. Some scientists have argued that glaciers limit the height of mountains by shaving off their tops, yet tall, horn-shaped peaks still dot the landscape in cold climates.

A new global analysis of peaks suggests that rather than limiting mountain height at high latitudes and altitudes, glaciers may make these peaks pointier and taller. By carving out U-shaped valleys, glaciers lighten the load of the mountain range, allowing the scraped-out piece of crust to float higher on Earth’s fluid mantle, like an unloaded boat.

“If you have very steep peaks and large, deep valleys, the load of the mountain range above the sea level is quite low,” said Jörg Robl, a geodynamics modeler at the University of Salzburg in Austria, “so even a thinner crust is sufficient to support very high peaks.”

Balances and Buzz Saws

A mountain’s elevation results from a balancing act among three main factors: tectonic uplift, isostasy, and erosion. Mountain ranges form when tectonic plates collide, forcing the crust to buckle and rise. The weight of accumulated rock causes the area to sink because of isostasy, iceberg-like behavior in which heavier chunks of crust float lower in the mantle than lighter ones. Erosion from water and ice wears away the mountains, counteracting the rise from tectonic forces but also leading to a boost from isostasy.
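This bookkeeping can be made concrete with a standard Airy-type back-of-the-envelope calculation (a generic textbook model with typical density values, not the calculation from the study discussed below):

```python
# Airy isostasy: a peak of height h above the surrounding lowlands is
# supported by a low-density crustal root r, sized so column weights balance:
#   rho_c * h = (rho_m - rho_c) * r
rho_c, rho_m = 2800.0, 3300.0  # typical crust and mantle densities, kg/m^3

def crustal_root_km(h_km):
    return h_km * rho_c / (rho_m - rho_c)

# Erosion removes mass, so the column rebounds; about rho_c/rho_m (~85%)
# of any eroded thickness is recovered as uplift of the remaining surface.
def rebound_km(eroded_km):
    return eroded_km * rho_c / rho_m

print(crustal_root_km(5.0))  # a 5 km peak implies a root of roughly 28 km
print(rebound_km(1.0))       # carving 1 km from valleys lifts the range ~0.85 km
```

The second number is the mechanism at issue here: material scraped from valleys lightens the load, and the surviving peaks ride upward on the rebounding crust.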

Since the late 1990s, scientists have debated whether glaciers limit a mountain’s height by shaving off the area above the snowline, a hypothesis called the glacial buzz saw. Others have argued that glaciers shield peaks from erosion, allowing the mountain to grow higher from tectonic forces. “It’s a big and hot and sometimes really brutal debate,” said Robl.

Jörg Robl was inspired to create a global data set of mountain peaks after climbing the north face of Westliche Zinne, a peak in the Italian Dolomites. Credit: Kurt Stüwe

In a new paper in Earth and Planetary Science Letters, Robl and colleagues at the University of Freiburg in Germany and the University of Lausanne in Switzerland performed a global analysis of 16,000 peaks to identify relationships between mountain shape, peak height, crustal thickness, and climate. They report that high, steep peaks occur in heavily glaciated regions, suggesting that these mountains missed the buzz saw.

Robl came up with the idea while mountain climbing at Westliche Zinne, a peak in the Italian Dolomites. “I was sitting on top of it and looking at all the mountain peaks and thought it might be a good idea just to have a look at spatial distribution,” he said.

Global Trends in Glacial Landscaping

Working with satellite data from NASA, the team used a novel method to identify all peaks higher than 2.5 kilometers that rise more than 500 meters above the surrounding terrain. By imagining these peaks as cones and measuring the width of the base, they could determine steepness: The narrower the base of the cone, the pointier the peak. The scientists compared the peaks at different latitudes to the thickness of the underlying crust using an existing database of global crustal thickness.
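The cone description translates directly into a steepness measure. Here is a sketch of the geometric idea (the paper’s exact metric may differ, and the example numbers are invented):

```python
import numpy as np

def cone_steepness(peak_height_m, base_radius_m):
    """Treat a peak as a cone; steeper peaks have a narrower base
    for the same height. Returns the mean slope angle in degrees."""
    return np.degrees(np.arctan2(peak_height_m, base_radius_m))

# Hypothetical peak: 800 m of relief over a 1,200 m base radius.
print(f"{cone_steepness(800.0, 1200.0):.1f} degrees")
```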

Latitudes near 30°N and 30°S had the tallest peaks supported by the thickest crust, including mountains in the Himalayas and the Andes. Moving from 30° toward 60° latitude, mountain ranges had lower elevations overall but still supported tall, increasingly pointy peaks on thinning crust, such as Denali in the Alaska Range.

High latitudes have cold climates, so the scientists propose that the grinding action of millions of years of glaciers is responsible for the thinner crust underlying these mountains.

Unlike high peaks in the middle latitudes, peaks at higher latitudes are supported by surprisingly thin crust. Credit: Jörg Robl

“I think they raised really interesting questions about the relationship between crustal thickness and glaciation—but didn’t answer them,” said Lindsay Schoenbohm, a tectonic geomorphologist at the University of Toronto Mississauga who was not involved in the research. She wants to see evidence on a smaller scale showing the effects of glacial erosion on crust thickness over time to back up these global correlations. Such evidence would be “pretty wild,” she said.

“The strength of this approach is its global character,” said Dirk Scherler, an Earth scientist at the GFZ German Research Centre for Geosciences at Helmholtz Centre Potsdam. “But it’s a controversial study when it tries to link peak climate with latitudinal trends and crustal thickness.” Scherler points out that by lumping mountain peaks together by latitude, the analysis risks overlooking other factors that affect topography and crust thickness, such as tectonic setting.

Now that Robl and his colleagues can model changes in topography driven by isostasy in response to erosion, they are incorporating those changes into their existing model of how mountain ranges evolve. By modeling climate, fluvial erosion from rivers, glacial erosion, and the feedback loops that occur when these processes interact, they hope to understand how these forces have shaped Earth’s stunning peaks.

—Patricia Waldron (@patriciawaldron), Science Writer

AquaSat Gives Water Quality Researchers New Eyes in the Sky

Wed, 05/20/2020 - 12:25

Nandita Basu studies how human activity can impact water quality, specifically how nutrient runoff can impact large areas. Think of the Mississippi River basin or the Chesapeake Bay watershed. Much of the work Basu, a professor of water sustainability and ecohydrology at the University of Waterloo in Canada, does looks at nitrogen and phosphorus concentrations in streams and rivers and then links them to sources in the landscape, such as agricultural land use.

It’s work that necessarily depends on physical sampling of water in the field, but as Basu notes, researchers quickly find fundamental limits in this type of work.

“When you work with these water quality data, one thing that immediately becomes really evident is the lack of data. There are millions of streams, and there are only so many that we can go take samples from all the time,” she said.

By necessity, researchers who study water quality end up using models fed with whatever information they have, “but often, those models are not really grounded in data, you can’t really trust them,” she said.

That’s why Basu is so excited about AquaSat, a new data set from researchers at Colorado State University, the University of North Carolina, and others that correlates water quality samples from U.S. rivers, streams, and lakes with more than 30 years of remote sensing images taken by Landsat satellites operated by NASA and the U.S. Geological Survey.

“The AquaSat data set is absolutely amazing,” she said. “I can imagine using it quite extensively.”

Remote Eyes on Water Quality

Matthew Ross, an assistant professor of ecosystem science and sustainability at Colorado State University and the lead author of a 2019 paper in Water Resources Research detailing the AquaSat project, started his career taking water quality field samples. As a postdoctoral researcher in Tamlin Pavelsky’s lab at the University of North Carolina at Chapel Hill, however, Ross became interested in using satellites for larger-scale measurements. “I was sort of surprised that more people weren’t using remote estimates of water quality,” he said.

The eight Landsat satellites have provided continuous and global imaging of terrain since 1972, and although those missions have focused on land, Ross and his colleagues realized there should be “optically relevant” parameters in images of water too. “That’s things that should change the color of water,” he said. For AquaSat, they were interested in chlorophyll a, a measure of algae in water that turns it green; sediment, which can yield a tan color; dissolved carbon, which can darken waters and is a measure of carbon leached from the landscape; and Secchi disk depth, a measure of total water clarity.

AquaSat’s “Data Symphony” combines data from Landsat satellites and sample data from the U.S. Water Quality Portal and the Lake Multi-scaled Geospatial and Temporal Database (LAGOS) data set. Credit: Matthew Ross

Ross and his colleagues then correlated images taken by Landsat 5, 7, and 8 between 1984 and 2019 with on-the-ground samples of the imaged bodies of water that measured the optically relevant parameters. Researchers pulled sample data from the U.S. Water Quality Portal and the Lake Multi-scaled Geospatial and Temporal Database (LAGOS) data set, both of which record water quality measurements in U.S. streams, rivers, and lakes. The resulting 600,000 matchups of remote sensing and sample data allow for more reliable predictions of water quality based on future Landsat imaging alone.
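The matchup logic itself is straightforward to sketch: pair each field sample with a scene of the same water body taken close in time. Here is a toy version with pandas (the table layouts, column names, and one-day window are illustrative assumptions, not AquaSat’s actual pipeline):

```python
import pandas as pd

# Hypothetical tables: field samples and Landsat scene reflectances,
# both keyed by site and time.
samples = pd.DataFrame({
    "site": ["lake_01", "lake_01", "river_07"],
    "time": pd.to_datetime(["2015-06-01", "2015-08-14", "2016-07-03"]),
    "chlorophyll_a": [4.2, 11.8, 2.9],  # micrograms per liter
}).sort_values("time")

scenes = pd.DataFrame({
    "site": ["lake_01", "lake_01", "river_07"],
    "time": pd.to_datetime(["2015-06-02", "2015-08-13", "2016-07-10"]),
    "red": [0.031, 0.054, 0.022], "green": [0.048, 0.091, 0.035],
}).sort_values("time")

# Pair each sample with the nearest same-site scene within +/- 1 day.
matchups = pd.merge_asof(samples, scenes, on="time", by="site",
                         direction="nearest", tolerance=pd.Timedelta("1D"))
print(matchups.dropna())  # river_07's scene is 7 days off, so it drops out
```

Each surviving row couples a measured water quality value to the satellite colors observed at nearly the same moment, which is exactly the pairing a predictive model needs for training.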

“It gives you a ground truth. It’s basically a way to calibrate models that are using Landsat to estimate water quality parameters,” Ross said. “We can use these more data-rich, empirically driven ways of prediction that previously weren’t available because no data set like this existed before we made it.”

Applications and Accessibility

“With this data set we can look at all of these lakes and rivers and look at the water quality trajectories over time,” Basu said. For instance, researchers can track the water quality in a particular river over a 30-year period and correlate it with land use and farming practices in the surrounding landscape to estimate their impact. “Maybe,” she noted, “the farming practices have not changed that much, but maybe it’s climate that’s changing the conditions.”

Ross hopes to do more than just provide a new and useful data set for other water quality researchers. “Our goal is to make it a lot easier for anyone to use [the AquaSat data set] to build models that predict water quality,” he said.

He has already seen some evidence that this is happening. The AquaSat data set has been shared openly on Figshare (an open-access repository where researchers can preserve and share figures, data sets, images, and videos), where it has attracted some amateur attention.

“I’ve gotten a bunch of high school and early college computer science folks emailing me about how to train neural nets on our data,” Ross said. “Those emails are always exciting because of the idea of there’s a bigger community that can engage with the data in an easier way.”

Right now, building models and making water quality predictions require some coding skills, but Ross said the ultimate goal is to create a user-friendly interface that could be used by water quality and environmental professionals to make decisions about water resources, such as reservoirs. “Getting these data and ideas into the hands of municipalities is certainly one of my long-term goals,” he said.

Beyond creating more user-friendly access to AquaSat going forward, Ross says he hopes to extend the data set with imagery from additional satellite instruments, such as NASA’s Moderate Resolution Imaging Spectroradiometer (MODIS), and from future missions.

“I’d say the biggest game changer for doing full stack hydrologic sciences from space is the SWOT mission, which is launching [in] 2022,” he said. The Surface Water and Ocean Topography satellite will provide the height of large rivers and lakes. These data, according to Ross, could be combined with Landsat color information to allow researchers to do things like estimate the discharge and sediment volume in an ungauged river.

But the future projects Ross is most excited about involve getting enough on-the-ground data to validate satellite imagery in parts of the world that have little water quality data available to begin with. “In places that are changing rapidly, like in Honduras or Brazil, South Africa or other places, going back in time with Landsat satellites there is incredibly valuable,” he said. “To me, that’s one of the biggest value adds and why it’s so important to make this data set global, so we can validate a more global model.”

—Jon Kelvey (@jonkelvey), Science Writer

All Hands on Deck for Ionospheric Modeling

Wed, 05/20/2020 - 12:21

The ionosphere, the ionized layer of Earth’s atmosphere far above the stratosphere, plays vital roles in many applications of modern technology. Radio signals travel through the ionosphere, for example, as do some spacecraft. Space weather events that direct energetic charged particles and radiation from the Sun toward Earth interact with the ionosphere, and even moderate space weather events can cause ionospheric conditions to change substantially. These changes affect the reliability of systems integral to society, such as GPS, telecommunications, power grid distribution, and even pipelines that transport oil, gas, and water (by causing corrosion of the pipes).

Ionospheric conductance—the ease with which electrical currents driven by space weather processes travel through the ionosphere—controls how severe the impacts of such events can be [Harel et al., 1981]. Without a thorough and systematic understanding of ionospheric conductance, which can vary spatially and over time, it is not possible to forecast and mitigate resulting disruptions. The challenges of achieving this understanding are too complicated for individual scientists or research groups to confront alone, so we need community-wide engagement.

We now have an opportunity to launch a collaborative forecasting effort to facilitate the protection of critical infrastructure, national security assets, and the safety of civil aviation. This past January, the U.S. House Committee on Science, Space, and Technology approved the Promoting Research and Observations of Space Weather to Improve the Forecasting of Tomorrow (PROSWIFT) Act, which entails establishing an interagency working group on space weather forecasting. Under this legislation, which is currently awaiting consideration by the full House of Representatives, roles and responsibilities would be assigned to various government agencies and departments as they work together on improving space weather research and forecasting.

Challenges of Modeling Ionospheric Conductance

A validation-related focus group has been an integral part of the National Science Foundation (NSF) Geospace Environment Modeling (GEM) community since 2005. In 2019, Methods and Validation (M&V) became a standing committee to help guide and support various modeling efforts. The focus group held the Ionospheric Conductance Challenge Session at the mini-GEM meeting in December 2019 to solicit community input on the biggest challenges faced by researchers working to understand ionospheric conductance. In this open-format discussion, researchers identified challenges in three major categories: a lack of ground truthing, a lack of collaboration, and a lack of funding.

During last December’s Ionospheric Conductance Challenge Session, participants provided real-time input—represented in this word cloud—on the biggest challenges in ionospheric conductance studies. ISR = incoherent scatter radar.

“Ground truth” often refers to information obtained by direct measurements, rather than from simulations. Having a designated ground truth data set is a critical component of validating and verifying physics-based models, calibrating instruments, and interpreting observed or simulated phenomena. However, because ionospheric conductance cannot be directly measured, researchers use measurements of other variables, statistical averages, and physical approximations to provide conductance estimates [Richmond and Maute, 2014]. These procedures introduce serious uncertainties into the final estimates because the proxy data sets used are sometimes incompatible with each other (e.g., if measurements were not taken at the same locations or times) and often are limited in spatial and temporal coverage.
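A typical example of such proxy-based estimation is the widely used empirical relation of Robinson et al. (1987), which infers height-integrated conductances from measurements of precipitating electrons. Here is a sketch (NumPy; the example inputs are arbitrary):

```python
import numpy as np

def pedersen_hall_conductance(mean_energy_kev, energy_flux):
    """Estimate height-integrated Pedersen and Hall conductances (siemens)
    from precipitating-electron mean energy (keV) and energy flux
    (erg / cm^2 / s), using the empirical fit of Robinson et al. (1987)."""
    e = np.asarray(mean_energy_kev, dtype=float)
    sigma_p = (40.0 * e / (16.0 + e**2)) * np.sqrt(energy_flux)
    sigma_h = 0.45 * e**0.85 * sigma_p
    return sigma_p, sigma_h

# Moderate auroral precipitation: 3 keV electrons carrying 2 erg/cm^2/s.
print(pedersen_hall_conductance(3.0, 2.0))
```

The formula's inputs are themselves derived from satellite or radar measurements, so uncertainties in those proxies propagate directly into the conductance estimate, which is the problem the session participants flagged.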

Studies of ionospheric conductance involve tools and methods from various research areas that are traditionally separated. Fostering productive collaborations and the exchange of expertise among these different groups is essential to moving the field forward. Interactions should be facilitated and encouraged between modelers and instrument scientists, between empiricists and first-principles modelers, between researchers who study the ionosphere and those who study the magnetosphere, and between physicists who study neutral particles and those who study plasmas. In the past, joint workshops between the GEM and NSF Coupling, Energetics and Dynamics of Atmospheric Regions communities have helped significantly in bringing together these communities, but further opportunities are needed.

Recognition of ionospheric conductance as a focused science objective is the most effective way to encourage the large-scale and cross-disciplinary collaboration needed to advance data-model validation efforts. At present, without this agency-level focus, individual research groups study ionospheric conductance in different regions or under different drivers, using only the techniques and measurements already available to them. Determining conductance values and patterns therefore often emerges as a side problem as researchers focus on more feasible scientific objectives in grant applications. Consequently, many researchers end up with insufficient resources to take on the challenges of studying conductance and have difficulty finding suitable collaborations to help them further their work.

High-Priority Tasks to Meet the Challenges

In the Ionospheric Conductance Challenge Session last December, participants identified three high-priority tasks for the community: quantifying uncertainties, coordinating efforts, and improving models.

Participants in the Ionospheric Conductance Challenge Session were surveyed about their views on where collective effort should be invested to address the most pressing challenges facing ionospheric conductance research.

The highest priority that the group identified is the need to systematically quantify uncertainties in the different conductance models and data sets available. The NASA Community Coordinated Modeling Center (CCMC) and the GEM M&V focus group are in ideal positions to lead such a cross-disciplinary effort. The M&V focus group organizes Ionospheric Conductance Challenge sessions at the GEM workshops, which provide a great venue for researchers to share their findings and form new collaborations. The CCMC provides online tools for a variety of different numerical models in addition to a data-model database for space weather events designated by the M&V focus group. Through this database, researchers can exchange measurement data and model results. Such platforms expedite uncertainty quantification and provide a basis for validation and verification processes.

Another high-priority task is to compile a list of space weather events—known as “challenge” or “campaign” events, for which there are multiple data sets available and space weather–studying spacecraft are in ideal positions to observe—that researchers can work on to better detail ionospheric conductance profiles. Tackling this challenge will require coordinated efforts and extensive input from researchers across a range of institutions and scientific fields. The M&V focus group aims to provide researchers with tools to better understand ionospheric conductance while increasing community engagement and collaboration through these coordinated efforts. Such studies pave the way for identifying the specific observational and computational requirements for improving our theoretical understanding.

The third high-priority task identified is to advance global and local physics-based models of conductance. The theoretical understanding achieved with these efforts benefits the public in the form of improved operational tools, in this case, space weather forecasting models. These models cannot be advanced without systematic validation and verification efforts that require ground truth, collaboration, and funding.

A Path to Progress

When ionospheric conductance is recognized as a key science topic, the community can make substantial progress in improving space weather forecasting models. Targeted solicitations and funding opportunities that foster collaborations between federal agencies, academia, and commercial companies will provide researchers with the resources they need. These steps are crucial for meeting the need for accurate, predictive ionospheric conductance modeling, which has long been a challenge in space weather forecasting.

Acknowledgments

This work was performed at the Jet Propulsion Laboratory, California Institute of Technology, under a contract with NASA.

Scientists Float a New Theory on the Medusae Fossae Formation

Tue, 05/19/2020 - 12:32

Rafts of pumice-like material, similar to rafts found floating in Earth’s Pacific Ocean, may be responsible for some of the most enigmatic terrain on Mars. A new theory suggests that low-density rocks could have slid down the slopes of the Red Planet’s largest volcano to create giant rocky rafts in the planet’s early ocean. Ultimately, researchers suggest, the material washed up on shore to create the expansive region known as Medusae Fossae.

Geologists have been trying to understand how Medusae Fossae formed since NASA’s Mariner spacecraft first observed the region in the 1960s. A variety of mechanisms have been suggested, but each mechanism comes with its own set of challenges that keep it from fully explaining the deposit.

Now a new idea has surfaced. Pumice rafts can “travel 5,000–8,000 kilometers on Earth,” said Peter Mouginis-Mark, a planetary scientist at the University of Hawai’i at Mānoa and lead author on the new paper. “There’s no conceptual reason why they couldn’t do that on Mars.”

In August 2019, a massive pumice raft was spotted in the Pacific Ocean. Credit: NASA Earth Observatory/Joshua Stevens/U.S. Geological Survey

When molten rock from underwater volcanoes interacts with ocean water, the exploding fragments of lightweight rock float to the surface as pumice rafts. These extensive formations can be quite large; last summer, a pumice raft roughly 200 square kilometers in size caught worldwide attention as it traveled toward Australia. Mouginis-Mark and his colleagues suspect that similar volcanic processes could create a pumice-like raft on Mars by interacting with ice locked beneath the surface.

The new theory brings its own challenges. Mouginis-Mark admits that there is “a lot of hand waving” in the discussion. But he said the potential problems are no worse than those found in other contending theories for the origin of the unusual feature.

“If [the process] works in the same way that it does on Earth, it is a good possible explanation for how these deposits—[for] which no one’s come up with a definite way to produce—could have formed,” Mouginis-Mark said.

The new study was published this month in the journal Icarus.

A “Very Different Mechanism”

The Medusae Fossae Formation covers 2.2 million square kilometers near the Martian equator. Previous observations revealed that it is an unusually porous region, about two thirds as dense as the rest of the planet’s crust. Those same observations revealed that ice beneath the surface was insufficient to account for the shockingly low density. Instead, some researchers suggest that explosive volcanism could have created such porous rocky material.

But getting ash and rocky debris from the volcanoes closest to Medusae Fossae poses a problem, according to Mouginis-Mark. In the thin Martian atmosphere, the material would have fallen out of the eruption plume before it reached the formation.

According to Bruce Campbell, a planetary scientist at Smithsonian’s National Air and Space Museum in Washington, D.C., the sheer size of Medusae Fossae adds an additional challenge. “It’s very difficult for any of these models…to easily explain the vast volume of the final deposit,” he said. Campbell was not part of the new study.

One of the intriguing features around the formation is a proposed shoreline for an ancient ocean. Early in its evolution, Mars is thought to have hosted such an ocean, although that theory is also subject to ongoing debate. Mouginis-Mark realized that an ocean could potentially carry material from the giant volcano Olympus Mons to the distant shores of Medusae Fossae. The researchers targeted Olympus Mons as the source of the volcanic material because its base is at a much lower elevation than other Martian volcanoes, Mouginis-Mark said.

Unlike pumice rafts on Earth, which burble up from underwater and immediately begin their travels, Martian rafts likely sat on the volcano’s slopes for some time and wound up covered by denser material from later eruptions. Eventually, a landslide would drag them all the way to the base of the volcano, where an ocean would allow them to drift long distances.

“The basic processes which we think we can see on Mars at Olympus Mons are known to happen on Earth in Hawaii and at other volcanic islands,” Mouginis-Mark said.

As time passed, the rafts traveled to shore, where they piled up in a giant mass of pumice-like material. Again, this happens frequently on Earth.

“There are lots of places on Earth where there is evidence of repeated deposition of pumice from the breakup of floating rafts,” said researcher Patrick Nunn, an oceanographer at the University of the Sunshine Coast in Australia who studies pumice rafts. Pumice rafts on Earth are usually split into pieces by waves and offshore reefs long before they reach land. “What you find on the coast is often layers of pumice from broken-up rafts at different places and elevations, signaling the repeated deposition of pumice,” he said. Nunn was not part of the new study.

According to Campbell, “it’s a very different mechanism from the more classic air fall deposits” also proposed as solutions to the Medusae Fossae mystery.

No Smooth Sailing for Any Theory

Like other models hoping to solve the mystery of Medusae Fossae, the pumice raft idea brings its own challenges. One of the biggest problems comes from measurements of the feature, which reveal that it is too dense to float on water. Campbell said it’s possible that the porous material compacted after it sat on the shoreline.

“It’s a problem, but there are probably ways of getting around it,” he said. “It’s far less of a problem than any of the other explanations for the origin of [Medusae Fossae].”

Another issue is the lack of landslide scars on Olympus Mons, as well as the lack of a mechanism to explain what could have triggered such landslides. While Mouginis-Mark agrees that this is a problem, he points out that other researchers have invoked giant landslides as well.

The final pileup of pumice-like material also brings its own challenges. On Earth, pumice rafts pile up in collections that reach around 10 meters high at most, far less than would be required to build up Medusae Fossae. Mouginis-Mark points out that the low gravity and the million-year timescale, as well as the unknown volume of original material and the uncertain state of an ocean that could have been at least partially covered in ice, make an apples-to-apples comparison with Earth a challenge.

The new theory comes with a handful of ways to disprove it. If researchers determine that the Red Planet never had an ocean, or if the Medusae Fossae feature is found to have formed while lava from Olympus Mons was still warm, the idea of pumice rafts can clearly be thrown out.

“Yes, our model has problems,” Mouginis-Mark said. “But none are any more challenging than the alternative explanations for the origin of the Medusae Fossae feature, and people have been thinking about these ideas for the last 40 years with no luck. We therefore think it is worth proposing a new idea for the origin of Medusae Fossae, particularly one that combines many aspects of the geology of Olympus Mons into a coherent picture.”

—Nola Taylor Redd (@NolaTRedd), Freelance Science Journalist

McEnroe Receives 2019 William Gilbert Award

Tue, 05/19/2020 - 12:25
Citation Suzanne A. McEnroe

I am delighted that the AGU Geomagnetism, Paleomagnetism, and Electromagnetism (GPE) section has chosen Suzanne McEnroe to receive the 2019 William Gilbert Award. In her career, Suzanne has carried out groundbreaking fundamental research on magnetic anomaly sources in the crust, with implications that reach broadly throughout the Earth and planetary sciences. Her interest has been particularly focused on the significant negative, remanence-dominated magnetic anomalies that occur in various locations. Her careful analyses showed in multiple cases that this remanence originated primarily in crystals of the hematite-ilmenite series, containing abundant fine-scale exsolution structures. Although a few earlier studies had also reached this conclusion, they had not been able to explain the combination of very high magnetic stability and high magnetic intensity. The key insight from Suzanne and her collaborators was that the exsolution microstructures do not merely affect the properties of the host minerals, but are themselves the source of the strong and stable remanence, through a previously unknown interfacial ordering mechanism termed lamellar magnetism.

Two principal factors have propelled her continuing studies of these remanence sources to the forefront of mineral magnetic research: first, her comprehensive approach to characterization (using scanning electron microscopy, transmission electron microscopy, magnetic microscopy, Mössbauer spectroscopy, low-temperature high-field magnetometry, and theoretical modeling), and second, her initiative in developing and coordinating collaborations involving talented specialists in each of these techniques. Two brilliant series of papers have comprehensively illuminated the phenomenon of lamellar magnetism in hematite-ilmenite nanocomposites, as well as the complex chemical and magnetic ordering in metastable homogeneous mineral phases over the same range of bulk compositions. These papers collectively represent one of the greatest achievements in mineral and rock magnetism over the past 2 decades.

I am honored to congratulate Suzanne McEnroe, our 2019 William Gilbert Award recipient, for her leadership in research that has transformed our understanding of mineral magnetism, paleomagnetic field records, and geomagnetic field anomalies.

—Michael J. Jackson, University of Minnesota, Twin Cities, Saint Paul


Response

I am honored to receive the William Gilbert Award from the AGU GPE section for my work on magnetic anomalies and mineral magnetism. Working to understand the nature and sources of magnetization from unusual magnetic anomalies at the kilometer scale led to scientific breakthroughs in mineral magnetism at the atomic scale. The anomaly hunt started when I was a postdoc and picked up a discarded paper on oxide mineralogy and anomalies in the Adirondack Mountains by Balsley and Buddington, who noted that the source of the negative anomalies was elusive. Reading this classic paper changed my scientific direction. I went to the Adirondacks, collected samples, and concluded that these magnetic anomalies could not be interpreted using traditional magnetic mineralogy concepts.

Work on remanent anomalies and mineralogy continued in Sweden, Norway, and Australia. To account for the bulk magnetization found in rocks where the source was ilmenite-hematite, a study of the interactions at micro and atomic scales was required. This led to the theory of lamellar magnetism, which required a change in our thinking from a bulk (volume) magnetization to a surface magnetization at interfaces, as in ilmenite-hematite exsolutions. This interface magnetization could result in a large remanent magnetization due to the fine scale of exsolution lamellae (from micrometer to nanometer size), providing an abundant surface magnetization. The next step was determining the nature of the coercivity of these mineral intergrowths. Tackling these questions required scientific collaborators from the disciplines of mineralogy, physics, chemistry, and crystallography. I have been extraordinarily fortunate to work with highly talented scientists. I am very grateful for access to the Institute for Rock Magnetism for rock-magnetic measurements and to the Bayerisches Geoinstitut in Germany for experimental work with Falko Langenhorst, Catherine McCammon, and Nobuyoshi Miyajima, and for my long-term collaborators Laurie Brown, Karl Fabian, Richard Harrison, and especially Peter Robinson. Peter was the giant and the cornerstone of all this research.

—Suzanne A. McEnroe, Institute of Geoscience and Petroleum, Norwegian University of Science and Technology, Trondheim, Norway

Tracking Tropospheric Ozone Since 1979

Tue, 05/19/2020 - 12:24

Ozone is crucially important in Earth’s atmospheric chemistry. The planet’s protective ozone layer, which exists mainly in the stratosphere at altitudes between 15 and 35 kilometers, absorbs ultraviolet radiation from the Sun, shielding the surface from hazardous high-energy light. However, ozone is also found beneath the stratosphere in the troposphere, where the molecule behaves more like a greenhouse gas and contributes to global warming. Ozone concentrations in the troposphere result from an interplay between transport from the stratosphere above and ozone production driven by emissions at the surface.

In a new study, Griffiths et al. model the atmospheric chemistry of ozone in the troposphere and stratosphere between 1979 and 2010 using the U.K. Met Office’s Unified Model together with data on emissions, historical meteorological conditions, and sea surface temperatures. The scientists quantified effects on tropospheric ozone during that time from both increasing emissions of ozone precursors and losses of stratospheric ozone resulting from the prevalent human usage of chlorofluorocarbons (CFCs), atmospheric pollutants famously responsible for depleting the ozone layer.

Overall, their results show that ozone production in the troposphere from anthropogenic sources increased during the study period, with the added pollution almost counterbalancing a decrease in stratosphere-to-troposphere transfer of ozone: Only a slight decrease in tropospheric ozone was detectable. Later in the record, between 1994 and 2006, as the effects of bans on CFCs began to influence stratospheric ozone levels, the group’s model shows a slight increase in tropospheric ozone.

With the ozone layer recovering, a strengthening of the stratosphere-to-troposphere transport of ozone expected under climate change, and ozone precursor emissions continuing to rise in many places, the team says the results highlight the importance of studying ozone transport from the stratosphere to the troposphere—especially in the midlatitudes in spring, when atmospheric conditions favor such a downward flow of ozone. (Geophysical Research Letters, https://doi.org/10.1029/2019GL086901, 2020)

—David Shultz, Science Writer

Once Again into the Northwest Passage

Tue, 05/19/2020 - 12:21

Early European explorers of the New World searched in vain for an easy sea route between the Atlantic and Pacific Oceans. The search for a Northwest Passage across the Arctic Ocean claimed many ships and lives, until Roald Amundsen and his crew finally succeeded in making an all-water crossing of this hazardous route in 1906. One consequence of recent rapid Arctic warming is that the Northwest Passage is now ice free (or nearly so) for a longer period of time each year, and establishing shipping routes in this region is no longer a pipe dream.

Rising temperatures and declining sea ice have driven rapid changes across the Arctic and opened a passage to shipping routes, cruise ships, and research vessels. The rate of change in the Arctic is faster than in any other region on Earth, and this change is likely to have profound effects on the function and structure of Arctic ecosystems, including in the Arctic Ocean. Although the Arctic Ocean has become a research focus for many scientists, the relatively inaccessible Canadian Arctic Archipelago, an important portion of the Northwest Passage and a principal connection between the Pacific and North Atlantic Oceans, has received less attention.

Over the past 3 years, researchers and undergraduate students at the University of Illinois at Chicago (UIC) have prepared and joined the Northwest Passage Project, a research and cultural expedition to the Northwest Passage funded by the National Science Foundation and the Heising-Simons Foundation. Participants in the project are seeking to understand how this region is changing as a result of climate change, with the aim of spreading awareness about the severity and global effects of these changes. The program also seeks to provide a comprehensive research experience in which participants become facilitators of science: No matter what field they go into, this experience will inform how they conduct research, teach science, or share science with their communities. Here we present the UIC program and the educational experience from the student participants’ perspective.

Setting Sail

Last summer, 23 graduate and undergraduate students from minority-serving institutions across the United States, in collaboration with the Swedish Polar Research Secretariat, participated in an unprecedented 3-week expedition as a part of this program. Among this group were five UIC students who boarded the Swedish icebreaker research vessel I/B Oden in Thule, Greenland. The expedition intended to follow the path of a previous failed expedition led by Sir John Franklin in 1845 (Figure 1). For the student participants, the journey before, during, and after our time on the Oden has been long and full of surprises, ordeals, and rewards. This journey has taught us resilience and perseverance, and it has encouraged us to reevaluate our motivations for our career and personal goals.

For the UIC contingent of the expedition, biology professor Miquel A. Gonzalez-Meler and Earth and environmental sciences professor Max Berkelhammer initially recruited 18 undergraduate students majoring in science, technology, engineering, and mathematics (STEM) subjects; social sciences; and humanities. The diversity in academic backgrounds provided us with a holistic perspective on science research experiences, covering not only science but also the Arctic economy, anthropology, and science communication. We look forward to carrying these insights into future professional networks, regardless of our professional goals or nonacademic career tracks.

Fig. 1. Ship track and sampling locations during the 2019 Northwest Passage Project expedition. Continuous greenhouse gas stable isotope measurements were collected along the ship track, along with conductivity-temperature-depth (CTD) transects, ice cores, and surface water salinity and stable isotope measurements. The expedition started from Greenland, crossed Baffin Bay to Pond Inlet, Nunavut, and traveled through Lancaster Sound to Resolute. Credit: Google Maps

The UIC students participated in the Northwest Passage Project either by conducting research aboard the cruise through the Northwest Passage or by completing paid research internships at UIC or other institutions designed to support the expedition (e.g., helping with communications efforts or affiliated research projects). In addition, all participants engaged in a program at UIC that included course work in an active learning setting (which emphasizes class interactions and collaborative problem solving), independent research, and the UIC-funded internships.

Although this 21st-century program did not suffer the hardships encountered by the 1845 Franklin expedition, in which all 129 crew members abandoned ship and most were never found, getting it launched was not without difficulties. An expedition on a three-mast sailing ship planned for the summer of 2017 had to be canceled because of difficulties in making arrangements with the company that owned the ship.

In the summer of 2018, six UIC students, including three of the authors, traveled to Kugaaruk, Nunavut, Canada, for a second attempt at a research cruise. Unfortunately, after the first day of sailing, their ship, the Akademik Ioffe, ran aground after striking a shoal in uncharted waters. After 26 hours stranded at sea, the students were rescued and returned safely home. After the grounding, it was unclear whether the project would continue, and morale was low. However, we persevered, staying involved in various research projects in UIC labs.

Despite these setbacks, most of the students from the initial 2017 recruitment, with a few new recruits to replace those who had graduated, were still on board with the project when the efforts paid off in 2019. At last, we set sail successfully and could begin working on the research activities we had prepared for over the past 3 years.

A student shoots video for a live broadcast from the Arctic. Credit: Frances Crable

Sampling the Air and Sea

The UIC students aboard the successful 2019 expedition had two main objectives. First, we wanted to document greenhouse gas exchange between the ocean and the atmosphere and assess how Arctic climate change is affecting carbon fluxes between these two reservoirs. We did this by collecting data on the isotopic compositions of greenhouse gases in the atmosphere and organic matter in the water column. Second, we wanted to evaluate salinity and water isotopic information to determine how terrestrial processes and the influence of freshwater inputs are affecting marine ecosystem properties.

We conducted these activities in collaboration with the University of Rhode Island (URI), Norwegian Institute for Water Research (NIVA), and Stockholm University. Scientists from URI were aboard the cruise, and we plan to share our data with collaborators at NIVA and Stockholm who provided funding and equipment.

Students look at water column data from a CTD rosette cast. Credit: Frances Crable

To achieve our objectives, we learned a variety of sampling techniques, including conductivity-temperature-depth (CTD) rosette sampling; continuous underway measurements of atmospheric greenhouse gases (carbon dioxide and methane), taken while the ship was moving; sea ice coring; bucket sampling; and precipitation sampling.

We are assessing elemental and isotopic compositions of our water and air samples to interpret the chemical contributions from cryosphere, land, and freshwater exports into the Arctic Ocean and the interactions between them. Our observations are likely to provide a valuable baseline of carbon source, sink, and inventory data with which to validate carbon flux measurements from monitoring towers on land and from remote sensing data, which have been used to monitor the changing Arctic environment.

The Human Factor

We also participated in outreach activities and visits to archaeological sites and Inuit communities. Students visited Beechey Island, where the graves of four of the 1845 Franklin expedition crew are located. We studied the history of Arctic research in this region with the help of historian Hester Blum and environmental journalist Ed Struzik, who were also aboard the cruise.

One of the most impressive activities during the expedition was our visit to Pond Inlet, Nunavut, where we met with Inuit community members and learned about their ancestral knowledge of Arctic habitats. We also met with local students who are members of Ikaarvik, a program that works with Arctic youth to create a bridge between scientific research and their communities.

Two students collect ice core samples for nutrient and greenhouse gas analysis. Credit: Frances Crable

These interactions with students from different backgrounds fostered interdisciplinary and international connections that have expanded the scientific networks of all the students and enhanced our scientific research through the incorporation of Inuit ancestral knowledge. These interactions also inspired Northwest Passage Project students to pursue science that reaches and incorporates people from underrepresented communities around the world being affected by climate change. Two of us are now premed students studying the ways in which climate change affects human health, especially in remote and developing countries.

Other outreach activities included live broadcasts that the students produced with the help of the Inner Space Center. We shared our experiences, on and off the ship, in real time through daily blog posts, Facebook live broadcasts (see video below), and scheduled broadcasts to museums in the United States. These activities allowed student participants to show their unique perspectives on what scientific research looks like, further the impact of our research, and spread awareness about the changing Arctic.

Home Again and Hard at Work

After returning home following the research expedition last summer, we continued our involvement with the Northwest Passage Project at our institutions. UIC students are currently working with professor Gonzalez-Meler’s Stable Isotope Laboratory and professor Berkelhammer’s Atmosphere, Climate and Ecosystems Laboratory to analyze data and see what we can learn from this hard-to-access yet vital region.

Students also presented research results at national scientific meetings and conferences, including the recent Ocean Sciences Meeting 2020, cosponsored by AGU, the Association for the Sciences of Limnology and Oceanography, and The Oceanography Society, where UIC students gave two poster presentations. For some of us, it was our first scientific conference experience; we were excited to network with other ocean scientists, and we got valuable feedback on our research presentations.

The Northwest Passage Project has given us, as early-career scientists and citizens, a critical and inspirational learning experience, combining educational course work with extensive field research, and it enabled international connections that we would not have made otherwise. These experiences will enrich our lives as we begin careers in medicine and health, science, education, and other fields, and they have left all of us with a greater appreciation for Arctic science and culture. We hope our positive experience in this expedition will inspire others to organize ambitious new research projects that integrate research, education, and public engagement and provide unique outlets to share these experiences with society.

Author Information

Frances Crable (fcrabl2@uic.edu), Cynthia Garcia-Eidell, Theressa Ewa, Humair Raziuddin, and Samira Umar, University of Illinois at Chicago

From Blowing Wind to Running Water: Unifying Sediment Transport

Mon, 05/18/2020 - 12:30

Geomorphologists seek to understand why natural landscapes look the way they do. Key to this is understanding sediment transport, especially the thresholds at which sediments start and stop moving and the rates at which sediments move around in different conditions. A recent article in Reviews of Geophysics reviewed the physics of sediment moved by wind and water. Here, the authors answer some questions about advances in our understanding of sediment transport.

What are the main ways that loose sediment becomes mobile and moves around natural landscapes?

The surfaces of many different natural landscapes, including deserts, riverbeds, and ocean floors, consist of loose sediments. When a fluid flows over these surfaces, sediment grains can become mobilized and transported by the flow. For example, rivers transport sediments to the oceans, where currents can direct them to coastal areas, from where wind can blow them landward.

Small grains can be suspended by the flow (e.g., atmospheric dust aerosols), whereas large grains travel along the surface and, in doing so, can alter its morphology, giving rise to landforms such as ripples, dunes, banks, and barrier islands. There is evidence for sediment transport and its associated landforms not only on Earth but also on other Solar System bodies, such as Mars and Saturn’s moon Titan.

Are the physical processes that move sediment via wind and by water similar or different?

Both. In many ways, the physical processes are very much the same. A turbulent flow of a Newtonian fluid, described by the Navier-Stokes equations, interacts with granular particles, which form a bed under gravity and interact with each other following the laws of contact mechanics.
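For reference, here is a standard textbook form of the incompressible Navier-Stokes equations alluded to in the answer, with velocity u, pressure p, density ρ, dynamic viscosity μ, and gravitational acceleration g; this is a conventional statement of the equations, not notation taken from the review itself:

\nabla \cdot \mathbf{u} = 0, \qquad \rho\left(\frac{\partial \mathbf{u}}{\partial t} + (\mathbf{u}\cdot\nabla)\mathbf{u}\right) = -\nabla p + \mu\,\nabla^{2}\mathbf{u} + \rho\,\mathbf{g}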

However, the phenomenology of sediment transport by wind and by water is very different. Wind-blown sand grains tend to move comparatively quickly and in characteristic large ballistic hops. Their impacts on the bed are more violent and often eject new bed grains into motion. In contrast, sand and gravel grains driven by a water stream tend to move more slowly, in small hops or via sliding and rolling along the bed surface. Though their impacts are less violent, they are still able to drag, but usually not eject, bed particles out of their pockets.

What have recent developments in sediment transport experiments revealed?

Creeping occurs even for seemingly arbitrarily slow viscous flows, much below the threshold of visible sediment transport (“bed load”). Credit: Houssais et al. [2015] (CC BY 4.0)

One important recent experimental discovery is that seemingly any fluid flow leads to sediment transport.

Even arbitrarily slow viscous flows will eventually irreversibly change the locations of grains within the bed and at the bed surface (“creeping”), although this may take a very long time.

Furthermore, recent experiments have fundamentally changed our understanding of how grains resting on the bed surface become mobilized.

For example, the traditional viewpoint that one primarily must understand the flow forces or flow-induced torques acting on bed surface grains has been seriously challenged. Instead, it appears that the destabilizing effects of transported particles impacting bed surface grains are very important, even for water-driven sediment transport, where these effects have typically been neglected. It has also recently been found that one must account for turbulent eddies and the time they take to pass by a bed grain.

How have computer simulations helped scientists to better understand sediment transport processes?

Computer simulations allow studying sediment transport at the grain scale with direct access to most physical details. They thus have been used to understand how grain scale dynamics translate into an average continuum behavior, like the link between the mobilization of individual bed grains and the stability of the bed surface as a whole.

Computer simulations also allow systematic exploration of sediment transport across a large range of environmental conditions.

This has led to a much-improved understanding of the average threshold flow condition associated with sustained sediment motion.

First, computer simulations revealed that this “cessation threshold” seems to partially control many important statistical properties of continuous sediment transport, including its rate. Second, guided by computer simulations, general equations were derived that agree with cessation threshold measurements across sediment transport in oil, water, and air.

What are some of the unresolved questions where additional research, data, or modeling is needed?

One of the most important open problems is: Why do arbitrarily slow flows cause an irreversible, albeit very slow, grain motion, even in the absence of turbulence?

Much work also needs to be done to better understand the initiation of sediment transport, since our current knowledge suffers from two major problems.

First, for water-driven sediment transport, researchers using traditional experimental methods have often conflated multiple critical flow conditions. For example, studies often measured the critical condition associated with sustained sediment motion rather than the one associated with sediment transport initiation, partially because these critical conditions were historically assumed to be the same.

Second, for wind-driven sediment transport, researchers traditionally used laboratory measurements of sediment transport initiation to draw conclusions about natural conditions. However, this practice is highly problematic as natural atmospheres have much larger boundary layers, allowing for stronger wind gusts at a given average speed.

—Thomas Pähtz (tpaehtz@gmail.com; 0000-0003-4866-3017), Zhejiang University, China; Abram H. Clark (0000-0001-7072-4272), Naval Postgraduate School, USA; Manousos Valyrakis (0000-0001-9245-8817), University of Glasgow, UK; and Orencio Durán (0000-0001-8459-582X), Texas A&M University, USA

A Plate Boundary Emerges Between India and Australia

Mon, 05/18/2020 - 12:19

Tectonic plates blanket the Earth like a patchwork quilt. Now, researchers think they’ve found a new plate boundary—a line of stitching in that tectonic quilt—in the northern Indian Ocean. This discovery, made using bathymetric and seismic data, supports the hypothesis that the India-Australia-Capricorn plate is breaking apart, the team suggests.

Earthquakes in Unexpected Places

In 2012, two enormous earthquakes occurred near Indonesia. But these massive temblors—magnitudes 8.6 and 8.2—weren’t associated with the region’s notorious Andaman-Sumatra subduction zone. Instead, they struck within the India-Australia-Capricorn plate, which made them unusual because most earthquakes occur at plate boundaries.

These earthquakes “reactivated the debate” about the India-Australia-Capricorn plate, said Aurélie Coudurier-Curveur, a geoscientist at the Institute of Earth Physics of Paris.

Some scientists have proposed that this plate, which underlies most of the Indian Ocean, is breaking apart. That’s not a wholly unexpected phenomenon because this plate is being tugged in multiple directions, said Coudurier-Curveur. Its eastern extent is sliding under the Sunda plate, but its northern portion is buckling up against the Himalayas, which are acting like a backstop.

“There’s a velocity difference that is potentially increasing,” said Coudurier-Curveur, who completed this work while at the Earth Observatory of Singapore at Nanyang Technological University.

Zooming in on Fractures

Coudurier-Curveur and her colleagues studied one particularly fracture-riddled region of the India-Australia-Capricorn plate near the Andaman-Sumatra subduction zone. They used seismic reflection imaging and multibeam bathymetry, which involve bouncing sound waves off sediments and measuring the returning signals, to look for structures at and below the seafloor consistent with an active fault.

Along one giant crack that the team dubbed F6a, Coudurier-Curveur and her colleagues found 60 pull-apart basins, characteristic depressions that can form along strike-slip plate boundaries. The team showed that the basins followed a long, linear track that passed near the epicenters of both of the 2012 earthquakes.

“It’s at least 1,000 kilometers,” said Coudurier-Curveur. “It might be even longer, but we don’t have the data to show where it extends.” This feature, the team surmised, was consistent with being a plate boundary. An important next step was to estimate its slip rate.

Slower Than San Andreas

To do that, the scientists relied on two quantities: the length of the largest, and presumably oldest, pull-apart basin (roughly 5,800 meters) and the duration of the most recent episode of fault activity (roughly 2.3 million years). By dividing the length of the pull-apart basin by this time interval, they calculated a maximum slip rate of about 2.5 millimeters per year. That’s roughly tenfold slower than the rate along other strike-slip plate boundaries like the San Andreas Fault but not much slower than the slip rates of the Dead Sea Fault and the Owen Fracture Zone, the team noted.

On the basis of that slip rate, Coudurier-Curveur and her collaborators estimated the return interval for an earthquake like the magnitude 8.6 one reported in April 2012. Assuming that such an event releases several tens of meters of coseismic slip, a similar earthquake might occur every 20,000 years or so, said Coudurier-Curveur. “Once you release the stress, you need a number of years to build that stress again.”
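The arithmetic behind those estimates is simple enough to verify directly. Here is a minimal sketch in Python using the figures quoted above; the 50-meter coseismic slip is an illustrative assumption standing in for the “several tens of meters” mentioned in the article:

# Back-of-the-envelope check of the slip rate and recurrence estimates.
basin_length_m = 5800.0  # length of the largest pull-apart basin (from the study)
fault_age_yr = 2.3e6     # duration of the most recent episode of fault activity

slip_rate_mm_per_yr = basin_length_m * 1000.0 / fault_age_yr
print(f"Maximum slip rate: {slip_rate_mm_per_yr:.1f} mm/yr")  # ~2.5 mm/yr

# Recurrence interval, assuming ~50 m of coseismic slip per great earthquake
# (an illustrative stand-in for the "several tens of meters" quoted above).
coseismic_slip_m = 50.0
recurrence_yr = coseismic_slip_m * 1000.0 / slip_rate_mm_per_yr
print(f"Approximate recurrence interval: {recurrence_yr:,.0f} years")  # ~20,000 years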

These results were published in March in Geophysical Research Letters.

These findings are convincing, said Kevin Kwong, a geophysicist at the University of Washington in Seattle not involved in the research. “What we see in this region in the middle of the ocean is very analogous to other plate boundary regions.”

But continuing to monitor this part of the seafloor for earthquakes is also important, he said, because temblors delineate plate boundaries. That work will require new instrumentation, said Kwong. “We don’t have the seismic stations nearby.”

—Katherine Kornei (@KatherineKornei), Science Writer

Long Live the Laurentian Great Lakes

Mon, 05/18/2020 - 12:17

From some vantage points, the Great Lakes feel more like vast inland seas than freshwater lakes. But the 6 quadrillion gallons (~23 quadrillion liters) sloshing in Superior, Michigan, Huron, Ontario, and Erie represent about one fifth of the planet’s surface fresh water.

How long will the Great Lakes retain their honor of being the largest collection of freshwater lakes on the planet? Thanks to some ancient rifting events, possibly for a very long time to come.

Ancient Rifts and Ice Age Lakes

The story of the Great Lakes began over 1 billion years ago, when the ancient supercontinent Laurentia began splitting in half. Over the course of about 10 million years, the Midcontinent Rift System opened a massive fissure on its way to becoming a new ocean basin. But for reasons geologists don’t entirely understand, the rift failed, and no ocean was formed, leaving a 3,000-kilometer-long scar across what is now North America.

Eons of erosion have hidden this scar, which runs from Lake Superior in two forks down to Alabama and Oklahoma. “A billion years is a very long time, even by geologic standards,” said Seth Stein, a geophysicist at Northwestern University in Evanston, Ill.

Evidence of the rift is visible at Lake Superior, which is ringed by cliffs of billion-year-old basalt that erupted into the active rift, giving the largest lake a uniquely rugged shoreline. “Lake Superior looks very different from the rest of the Midwest,” where volcanic rocks are rare, Stein said.

Lake Superior’s towering basalt cliffs stand in contrast to the sandy beaches that ring the other Great Lakes. Credit: Seth and Carol Stein

Lake Superior sits within the Midcontinent Rift scar, but all five Great Lake basins were carved out by glaciers during the last glacial period, when the region was buried under the Laurentide Ice Sheet. As the ice age came to a close, the glaciers retreated, and the lake basins filled with meltwater to form the five Great Lakes as we know them by around 10,000 years ago. Today, some scientists refer to the Great Lakes by their more formal name, the Laurentian Great Lakes.

A smoky fire did not deter the hordes of summertime mosquitoes that plagued our backpacking trip on the south shore of Lake Superior. Credit: Mary Caperton Morton

A few summers ago, I drove hundreds of kilometers out of my way to visit Superior’s billion-year-old rocks on a backpacking trip through Porcupine Mountains Wilderness State Park, Mich., on the south shore of Lake Superior. Unfortunately, the pilgrimage was rushed. The moment we parked at the trailhead, clouds of mosquitoes swarmed the car. It was mid-June, and the bloodsuckers were so relentless—biting through bug spray, layers of clothes, a smoky fire, and a tent—that we cut our 4-day loop in half.

Next time I visit, I’ll avoid peak mosquito season. The Great Lakes offer a lifetime’s worth of hiking opportunities: I have my eye on the Superior Hiking Trail, a 499-kilometer-long, 2- to 4-week backpacking trip along the northwest shore of Lake Superior in Minnesota. Isle Royale, a large island in Lake Superior, “offers arguably the best backpacking in the Midwest,” Stein said.

More Water, More Problems

The Great Lakes have been a nexus of migration, transportation, fishing, and trade for thousands of years, with a series of lakes, rivers, and waterways connecting the upper Midwest to the Atlantic Ocean. Records dating back to 1918 show lake levels fluctuating seasonally, with the annual water budget controlled by inputs from rivers and outputs to the Atlantic Ocean and evaporation.

In 2014, after years of drought and higher temperatures triggered increased evaporation, water levels in the Great Lakes reached all-time lows. “For a long time, the dominant climate change narrative was that lake levels were going to drop drastically,” said Richard Rood, a climate scientist at the University of Michigan in Ann Arbor. “That narrative ended abruptly in 2015,” when lake levels started rising because of heavy rainfall and flooding across the Midwest.

By 2019, lake levels had drastically reversed course, exceeding their highest recorded levels. High water in the Great Lakes leads to shoreline flooding and erosion, which in turn threaten homes, industrial buildings, and port infrastructure.

“Now the narrative is that we will need to be able to cope with both highs and lows [in lake levels],” with those extremes persisting for longer time periods, perhaps several seasons in a row, Rood said.

Higher influxes of water into the Great Lakes also bring more pollutants, including nitrogen and phosphorus from agricultural runoff and Escherichia coli bacteria from overworked water treatment systems. Algal blooms and toxic water quality issues have increased.

The sheer size of the individual Great Lakes means that pollutants can stay in the system for a long time: A water droplet or molecule of pollutant will reside in Lake Superior for as long as 191 years, Lake Michigan for 99 years, and Lake Huron for 22 years, whereas the smaller Lakes Ontario and Erie have residence times of 6 and 2.6 years, respectively.
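A lake’s residence time is simply its volume divided by its mean outflow. The short Python sketch below inverts that relationship to estimate each lake’s implied mean outflow from the residence times quoted above; the lake volumes are approximate published figures assumed for illustration, not values from this article:

# Residence time tau = volume / outflow, so implied mean outflow Q = volume / tau.
lakes = {
    # name: (approximate volume in km^3, residence time in years)
    "Superior": (12_100, 191),
    "Michigan": (4_920, 99),
    "Huron": (3_540, 22),
    "Ontario": (1_640, 6),
    "Erie": (484, 2.6),
}

SECONDS_PER_YEAR = 3.156e7

for name, (volume_km3, tau_yr) in lakes.items():
    outflow_m3_per_s = volume_km3 * 1e9 / (tau_yr * SECONDS_PER_YEAR)
    print(f"{name}: implied mean outflow of ~{outflow_m3_per_s:,.0f} m^3/s")

For Lake Superior, for instance, this yields roughly 2,000 cubic meters per second, on the order of the observed outflow through the St. Marys River.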

“Smaller lakes with shorter residence times can respond more quickly to changes than bigger lakes,” said Dale Robertson, a research hydrologist at the U.S. Geological Survey in Middleton, Wis. That means there’s hope for Lake Erie, where toxic algal blooms feeding off nutrients delivered by the Maumee River create water quality issues each summer. But larger lakes, like larger boats, cannot change course as quickly. It may take decades before we know how the larger lakes will respond to climatic changes, Robertson said.

Currently, around 35 million people rely on the Great Lakes for drinking water, and an additional 56 billion gallons are extracted each day for municipal, agricultural, and industrial use.

Those staggering numbers are likely to increase, Rood said, as people displaced from other regions of the country by climate-related impacts seek refuge in the Great Lakes Megalopolis.

“Compared to some places, like south Florida, the Great Lakes are going to look like a climate winner,” Rood said. “We need to be prepared for how those migrations will change our communities and water usage in the coming decades.”

Holland Beach, Mich., on Lake Michigan is one of many sandy swimming beaches around the Great Lakes. Credit: Mary Caperton Morton

Long Live the Great Lakes

The long-term trajectory of large freshwater lakes often depends on their outlet: Endorheic, or terminal, lakes such as the Great Salt Lake have no exit to the ocean and lose most of their water through evaporation, which makes them saltier over time.

The Great Lakes are not an endorheic basin; the outlet that connects them to the Atlantic Ocean has a 535-million-year-old history that’s unlikely to peter out anytime soon. The Saint Lawrence Seaway, along with the basins that hold Lakes Erie and Ontario, sits inside the Saint Lawrence Rift, a 1,000-kilometer-long scar that dates back to the opening of the Iapetus Ocean between the paleocontinents of Laurentia, Baltica, and Avalonia.

Unlike the long-dead and buried Midcontinent Rift System, the Saint Lawrence Rift System is still seismically active and capable of generating earthquakes over magnitude 5. As long as water flows into the Great Lakes Basin and out through the Saint Lawrence Seaway, the Great Lakes are likely to retain their status as the world’s largest—and hopefully freshest—lakes for eons to come.

—Mary Caperton Morton (@theblondecoyote), Science Writer

Living in Geologic Time is a series of personal accounts that highlight the past, present, and future of famous landmarks on geologic timescales.

How Much Modification Can Earth’s Water Cycle Handle?

Fri, 05/15/2020 - 11:26

Earth’s fresh water is essential: It helps regulate climate, support ecosystems, and sustain human activities. Despite its importance to the planet, humans are modifying fresh water at an unprecedented scale; by damming rivers, pumping groundwater, and removing forests, we have become the primary source of disturbance in the world’s freshwater cycle.

Extensive modifications to the water cycle and perturbations to other Earth processes prompted the development of the planetary boundaries framework roughly a decade ago. The framework defines the “safe operating space” for essential Earth system processes, including fresh water, and sets limits beyond which we risk rapidly and maybe irreversibly disrupting critical global systems.

In a new study, Gleeson et al. argue that the current definition of the planetary boundary related to fresh water—and the methodology for determining it—is insufficient, neither capturing water’s role in Earth system resilience adequately nor offering a way to measure perturbations on a global scale. The authors provide an overview of how water supports Earth’s resilience and propose an approach for analyzing and better understanding global water cycle modifications focused on three central questions: What water-related changes could lead to global tipping points? How and where is the water cycle particularly vulnerable? And how do local changes in water stores and fluxes affect regional and global processes and vice versa?

The authors delve into each of these questions. In addressing the second question, for example, they examine where ecological regime shifts would alter the water cycle. As a case study, they assess the Amazon rain forest. Research has suggested that if the Amazon loses between 10% and 40% of its current forested extent, it may transition permanently into savanna and disrupt water and carbon cycles. By understanding the limits of the Amazonian ecosystem—and other ecosystems around the world—we can shape policy to avoid crossing tipping points that would undermine the planet’s resilience.

The study invites scientists to meet the scientific and ethical grand challenge of examining how water cycle modifications affect Earth system resilience. In outlining the grand challenge, the authors suggest initiatives that collaborative working groups can address immediately to better illuminate the current extent of water modifications and the state of the water cycle. (Water Resources Research, https://doi.org/10.1029/2019WR024957, 2020)

—Aaron Sidder, Science Writer

This Week: Social Distancing at Sea, Climate Migration on Land

Fri, 05/15/2020 - 11:25

What It’s Like to Social Distance at Sea. This is a fascinating glimpse into life on board the only oceangoing research ship to sail this spring: a 24-scientist team reduced to just three researchers, empty berths, empty chairs at empty tables, and—even at sea—Zoom meetings. Data are being collected, but we’re still losing something very valuable: “With limited people allowed on board, how will graduate students receive field training?” —Kimberly Cartier, Staff Writer


The Geosciences Community Needs to Be More Diverse and Inclusive. “Almost 90 percent of geoscience doctoral degrees in the United States are awarded to people who are white,” but geoscience issues broadly affect communities around the world. We’re not going to be able to fix those problems unless we can fix the diversity problem in these fields. AGU president Robin Bell and chair of the Diversity and Inclusion Advisory Committee (and Eos Science Adviser) Lisa White offer some paths to solutions in this opinion for Scientific American. —Heather Goss, Editor in Chief


What Data Do Cities Like Orlando Need to Prepare for Climate Migrants? In 2017, the extreme flooding and damage caused by Hurricanes Irma and Maria sent a huge wave of evacuees—most from coastal Florida and Puerto Rico—to communities in Florida’s interior. In Orlando, the influx of evacuees “gave city managers a glimpse into a future for which they need to prepare.” This future, in which massive numbers of people will seek new homes because of hazards exacerbated by climate change like increased flooding or extreme heat, will likely confront many cities and towns in the United States and around the world in the coming years and decades. This excellent piece offers a window on the difficult experience of evacuees in 2017 and delves into how cities and scientists are trying to adapt to and prepare for climate-related migrations. —Timothy Oleson, Science Editor


Milan Announces Ambitious Scheme to Reduce Car Use After Lockdown.

The bustling streets of Milan, Italy, are unusually empty due to the coronavirus lockdown. Many of Milan’s streets will remain closed to motor vehicles when the city reopens to traffic. Credit: Unsplash/Mick De Paola

Italy’s northern city of Milan is in one of Europe’s most polluted regions, but since the coronavirus lockdown, air pollution has plummeted, thanks to fewer cars on the roads. The city hopes to keep it that way: Officials unveiled a bold new plan in April to reimagine their roadways, transforming 22 miles (about 35 kilometers) of streets into walking and cycling spaces over the summer to give more room to pedestrians and cyclists. As journalist Laura Laker notes, the rest of the world will be watching to see how Milan’s makeover goes. Just last week, Seattle followed suit. —Jenessa Duncombe, Staff Writer


The True Story of the White Island Eruption. Like the best Outside writing, this spectacular coverage of New Zealand’s volcanic tragedy is both thrilling and measured. I don’t know how I missed it in April, but the account is no less relevant and fascinating today. —Caryl-Sue, Managing Editor

Shrinking Ice Sheets Lifted Global Sea Level 14 Millimeters

Fri, 05/15/2020 - 11:24

Using satellite data stretching back more than a decade, scientists have found that ice loss in Antarctica and Greenland has raised global sea level by 14 millimeters since 2003. The data provide the most accurate and granular picture of ice loss so far, a crucial element for predicting how high seas will rise as the climate changes.

The new study measured ice loss using only changes in ice surface height observed by the Ice, Cloud, and Land Elevation Satellite 2 (ICESat-2) mission and its predecessor. Reporting in Science, researchers at the University of Washington, NASA, and other centers describe how they used data from the original ICESat, gathered from 2003 to 2009, along with measurements taken by its successor in 2018 and 2019. They believe the long observational time frame supports the idea that the ice changes are related to long-term climate change.

Unprecedented Accuracy and Precision

ICESat-2 is a sophisticated piece of hardware. Its single instrument, the Advanced Topographic Laser Altimeter System (ATLAS), fires 10,000 laser pulses per second at ice masses to measure their changing height over time, with each pulse consisting of roughly 300 trillion photons. Because of reflection and scattering, only about a dozen of these photons reflect all the way back up to a telescope aboard ICESat-2 as it orbits 500 kilometers above our world.
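A quick bit of arithmetic puts those photon counts in perspective. This minimal Python sketch uses only the figures quoted above; the derived per-second totals are simple arithmetic, not mission specifications:

# Rough photon bookkeeping for ATLAS, using the figures quoted in the article.
pulses_per_second = 10_000
photons_per_pulse = 3e14     # roughly 300 trillion photons in each pulse
photons_back_per_pulse = 12  # "about a dozen" return to the telescope

return_fraction = photons_back_per_pulse / photons_per_pulse
detected_per_second = pulses_per_second * photons_back_per_pulse

print(f"Fraction of photons returned per pulse: {return_fraction:.0e}")  # ~4e-14
print(f"Photons detected per second: ~{detected_per_second:,}")          # ~120,000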

“With the unprecedented accuracy and precision of this novel measurement system, we are now able to detect the small signals far into the ice sheet interior, as well as map out the changes over narrow glaciers on high-slope terrain around the ice sheets,” said study coauthor Fernando S. Paolo of NASA’s Jet Propulsion Laboratory. “Previous altimeters struggled with these challenges. Because we used laser measurements over a fairly long time span (about 16 years), we are getting the overall trends of ice sheet mass loss with higher confidence.”

The researchers took data from ICESat and overlaid them with findings from ICESat-2, accounting for snow density and other factors, to determine how much ice had been lost. They also developed a new model for firn (snow that is transitioning to ice on top of ice sheets) to estimate how ice sheet air content changes over time.

“This is important for knowing how much of the change we observe is due to gain or loss of ice and how much is due to change in the density of the top of the ice sheet,” said Benjamin Smith, a glaciologist at the University of Washington and lead author of the study.

The team determined that Greenland’s ice sheet lost an average of 200 gigatons of ice per year, and Antarctica’s ice sheet lost an average of 118 gigatons of ice per year.

“The amount of ice lost from both ice sheets is equivalent to a layer of water half an inch [1.27 centimeters] thick on top of all the oceans or to fill around 13 billion swimming pools,” Smith and Paolo said. “It’s also about the volume of Lake Michigan, or enough to cover the continental United States to a depth of about 2 feet [about 0.6 meter].”
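Those equivalences can be checked with simple arithmetic, since a gigaton of water occupies about 1 cubic kilometer. In the minimal Python sketch below, the ocean and continental U.S. areas are standard approximate values assumed for illustration, not numbers from the study:

# Check the sea level equivalences quoted above; 1 Gt of water ~ 1 km^3.
greenland_gt_per_yr = 200
antarctica_gt_per_yr = 118
years = 16  # approximate span between ICESat (2003) and ICESat-2 (2019) data

total_km3 = (greenland_gt_per_yr + antarctica_gt_per_yr) * years  # ~5,100 km^3

OCEAN_AREA_KM2 = 3.61e8  # approximate global ocean area (assumed)
CONUS_AREA_KM2 = 8.1e6   # approximate continental U.S. area (assumed)

sea_level_rise_mm = total_km3 / OCEAN_AREA_KM2 * 1e6  # km -> mm
conus_depth_m = total_km3 / CONUS_AREA_KM2 * 1e3      # km -> m

print(f"Sea level rise: ~{sea_level_rise_mm:.0f} mm")              # ~14 mm
print(f"Depth over the continental U.S.: ~{conus_depth_m:.1f} m")  # ~0.6 m (~2 feet)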

Interior Ice Gains Offset

The study considered both grounded ice and floating ice for the same time period, using the same data and methods to get estimates of total mass change. Assessing both ice masses enabled the researchers to make links between where the ice shelves are losing mass and where the grounded ice is responding. Although loss of floating ice does not directly contribute to sea level, it does further accelerate the loss of grounded ice.

“This is because ice shelves constitute a natural ‘barrier’ or ‘buffer’ to grounded ice flowing into the ocean via the glaciers, holding it back—this is a process we refer to as ‘buttressing’ because it is like an architectural buttress that holds up a cathedral,” said study coauthor Helen Amanda Fricker, a glaciologist at Scripps Institution of Oceanography. “As the ice shelves thin, they are less able to hold the grounded ice back. The glaciers then flow faster towards the ocean, and they thin.”

Patterns of thinning over the ice shelves in West Antarctica show that the Thwaites and Crosson ice shelves have thinned the most, at an average of about 5 and 3 meters of ice per year, respectively, Fricker added.

This map shows the amount of ice gained or lost by Greenland between 2003 and 2019. Dark reds and purples show large rates of ice loss near the coasts. Blues show smaller rates of ice gain in the interior of the ice sheet. Credit: Smith et al., 2020, https://doi.org/10.1126/science.aaz5845

The scientists also noted ice height increasing in the interior of ice sheets over the observation period; ice is not flowing out of the interior of the ice sheets as fast as snow is falling. This phenomenon could be due to a relatively small increase in snowfall in some regions as a consequence of global warming changing atmospheric circulation, which would be expected on the basis of atmospheric models, Smith and Paolo said, adding that the interior mass increase is much smaller than the mass lost around the peripheries of the ice sheets.

“This is a very significant study, a big step forward in both the quality of data and the confidence in that observed change occurring on both the grounded and the floating ice for Greenland and Antarctica,” said David Holland, director of the Center for Sea Level Change at New York University Abu Dhabi; he was not involved in the study. “This study brings together the observed changes of both ice types in a single data analysis package and looks to me like the new gold standard for observed change of the great ice sheets. This high-quality data and data analysis are critical to improving projections of future global sea level.”

The group is now looking to map out trends in ice sheet mass using only data from ICESat-2, starting in 2018.

“With more measurements coming from ICESat-2 we will be able to construct time series of ice sheet change with unprecedented quality and to start understanding the details of why these changes are happening,” said Paolo. “We are also mapping out all the other challenging regions around the world, such as the glaciers in Alaska, the Andes, and the Himalayas.”

—Tim Hornyak (@robotopia), Science Writer

No Mask? You May Not Worry About Climate Change, Either

Thu, 05/14/2020 - 20:33

According to a recent poll by the technology company Morning Consult, the decision to wear a mask in the United States correlates with an individual’s concern about climate change.

In an online survey of 2,200 adults on 14–16 April, a little over half (54%) of respondents who said climate change was a concern and caused by human activity said they “always” wear masks in public. In contrast, a little under a third (30%) of respondents “not too concerned” or “not concerned at all” with climate change said the same, a 24-percentage-point difference.

Vice President Mike Pence and President Donald Trump have repeatedly made headlines for not wearing masks in public, although Pence later said he should have worn one when visiting the Mayo Clinic in April. The Centers for Disease Control and Prevention (CDC) recommends wearing masks in public places where social distancing may be difficult, such as grocery stores and pharmacies, to help reduce the spread of coronavirus disease 2019 (COVID-19).

The poll reveals how a person’s concern about one public health crisis—climate change—is related to their behavior during another. The coronavirus pandemic has claimed more than 85,000 lives in the United States and over 300,000 worldwide, according to Johns Hopkins University. Individual beliefs about science and personal choices inform behavior during the pandemic, according to experts interviewed by Morning Consult.

Face-to-Face

Wearing a mask in public could prevent transmission of the virus by blocking aerosols and droplets from an infected person’s mouth and nose. People without symptoms can still spread the virus via saliva and mucus expelled when they cough, sneeze, or breathe.

Some view ordinances to wear masks as affronts to their personal liberties. Opposition to expert advice, skepticism of science, and concerns about personal autonomy may explain the gap in mask wearing between the two groups.

“Everything that science asks us to do is really sacrificing personal convenience for community convenience and well-being,” Emma Frances Bloomfield, an assistant professor in communication studies at the University of Nevada, Las Vegas, told Morning Consult. “And for a lot of people, the coronavirus is invisible, just like climate change is invisible. A lot of people don’t know people who have been directly affected, and in the case of climate change, a lot of the more severe effects are still years away.”

Respondents in the poll agreed more closely on other preventive measures. Of all adults polled, 78% said they had followed social distancing guidelines during the previous month; climate-unconcerned individuals were somewhat less likely to say so (72%) than climate-concerned respondents (86%). The two groups also found some common ground in cleaning practices: 50% of climate-unconcerned individuals reported cleaning and disinfecting home surfaces and technology, compared with 61% of climate-concerned individuals.

The poll had a margin of error of two percentage points for all respondents and three to four percentage points for the climate-concerned and -unconcerned groups.
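Those figures are consistent with standard sampling error for a poll of this size. The sketch below is a minimal illustration, assuming a simple random sample, a 95% confidence level, and the worst-case proportion of 0.5; Morning Consult’s actual weighting and exact subgroup sizes are not reported here, so the subgroup size used below is hypothetical.

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """Half-width of a 95% confidence interval for a sample proportion.

    p = 0.5 is the worst case, giving the widest interval;
    z = 1.96 corresponds to 95% confidence.
    """
    return z * math.sqrt(p * (1 - p) / n)

# Full sample of 2,200 adults: about 2 percentage points.
print(f"{100 * margin_of_error(2200):.1f}")  # 2.1

# A hypothetical subgroup of roughly 700 respondents: 3-4 points.
print(f"{100 * margin_of_error(700):.1f}")   # 3.7
```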

Making a Masquerade of Science

For decades, fossil fuel companies spent hundreds of millions of dollars to spread misinformation about climate change, and this undermined public confidence in science, said Ed Maibach, director of the Center for Climate Change Communication at George Mason University.

Maibach told E&E News, “It’s a really unfortunate consequence of that disinformation campaign that’s been playing out so long. …And with regard to the COVID-19 pandemic, it’s really important that we listen to what the public health experts have to say about this because lives are on the line.”

Liberal Democrats are more likely to be concerned about climate change than conservative Republicans, Maibach noted. Sixty-five percent of the climate-unconcerned respondents identified as conservative, and 36% identified as Baby Boomers. Bloomfield said that conservatives and older generations are less willing to make personal sacrifices that conflict with their ideologies, which cut against both environmentalism and certain COVID-19 protective measures.

The CDC did not recommend that the public wear masks until early April, when studies showed that people without symptoms could still spread the virus. The Morning Consult poll was conducted in mid-April, so opinions and behaviors may have shifted since then.

North Carolina State University assistant professor Louie Rivers sees the poll as evidence of agreement across party lines. Rivers, who studies risk perception and judgment and decision-making processes in minority and marginalized communities, pointed out to Inside Climate News that the majority of respondents said they were social distancing.

“To me, that’s hopeful,” Rivers said. “A lot of Americans, they’re losing their jobs, they’re having problems putting food on the table, and yet they’re staying at home and sharing the sacrifice.”

—Jenessa Duncombe (@jrdscience), Staff Writer

A Revised View of Australia’s Future Climate

Thu, 05/14/2020 - 11:35

On the heels of an unprecedented wildfire season, climate is yet again a hot topic in Australia. In a new study, researchers examine the performance and projections of the latest generation of global climate models for the Australian continent.

Efforts to understand how climate change will unfold under various emissions scenarios rely on the sophisticated computer models of the Coupled Model Intercomparison Project (CMIP), which combines dozens of models to create as complete a picture of Earth’s climate as possible. The recently released sixth phase of CMIP incorporates several new features, including a more diverse set of emissions scenarios that correspond to possible future socioeconomic changes in the world. It also allows researchers to compare results from the new models with those from previous CMIP generations, addressing whether the models simulate the current climate any better and whether they give new insights about the future.

In the new study, Grose et al. conclude that the CMIP6 models do improve on CMIP5 in incremental but important ways. The models more accurately capture the impact of large-scale climate drivers on rainfall, represent dynamic sea level, and simulate extreme heat events in the atmosphere and in the surrounding ocean.

The researchers write that the projections of future conditions broadly agree with those from CMIP5, thereby increasing confidence in most aspects of the existing projections. However, although both generations of models project further warming in the coming century, the upper range of the CMIP6 projections beyond 2050 is higher than in the CMIP5 models, meaning a worst-case scenario could be even worse than previously thought. The scientists say that these higher values arise largely because some CMIP6 models have higher “climate sensitivity,” meaning they warm more in response to a given increase in greenhouse gases. If these models are, in fact, the most credible, then limiting global warming to less than 2°C—the goal established in the Paris Agreement—will require larger emissions reductions than previously thought. (Earth’s Future, https://doi.org/10.1029/2019EF001469, 2020)

—David Shultz, Science Writer

Asia’s Mega Rivers: Common Source, Diverse Fates

Thu, 05/14/2020 - 11:33

Asia’s rivers affect more than half of the world’s population, providing water for cities, agriculture, transportation, and power generation and contributing to flooding and landslide hazards. These rivers also play important roles in many physical and biogeochemical processes on Earth’s surface, shaping the landscape and conveying huge quantities of water, sediment, and dissolved constituents to marginal seas—the regions that separate coastal zones from the open ocean.

“Mega rivers” in Asia—considered here to be those with historical annual sediment discharge of about 100 megatons per year or greater—share a common source in the Himalaya and Tibetan Plateau region (Figure 1), but a wide range of human and natural factors influences their fates. In many of these long, large river systems and their receiving basins, human activity (e.g., dam construction, agriculture, river management, trawling) has dramatically changed the length of time that particulate and dissolved matter spends in the river on its journey to the ocean.

Fig. 1. Approximate current and historical annual sediment discharges (in metric megatons per year, Mt yr⁻¹) of Asia’s mega rivers, defined as those with historical annual sediment discharge of about 100 megatons per year or greater.


Of particular concern in these rivers is human alteration of the transit and sequestration of water, sediment, and bioactive elements like carbon, iron, sulfur, phosphorus, nitrogen, potassium, and silicon. Anthropogenic changes to these fluxes affect the sustainability of deltas and coastal oceans, in turn imposing profound consequences on society that have only recently begun to be appreciated. To assess current impacts and predict future trends concerning such socially relevant issues as the global carbon budget, ocean acidification, eutrophication, pollution, and coastal erosion, we must understand the ongoing changes in the fluxes and fates of river-derived materials.

In October 2019, scientists gathered in Xiamen, China, for a workshop on the Asian sedimentary continuum (ASC). The term sedimentary continuum, in this context, is intended to reflect the myriad transport pathways of solids and dissolved matter associated with rivers, from the upland source regions through tidal rivers to the ocean’s receiving basins. The workshop participants reached a consensus on the need to develop an international collaborative program to explore how human-induced landscape and seascape alterations and climate change affect the sedimentary continuum and the cycling of bioactive elements from mountains to the deep sea along major Asian rivers.

Different Paths from the Plateau

The Asian mega rivers flow through an enormous range of geologic, climatic, and land use conditions, providing opportunities to study rivers that have common origins but vastly different downstream environments. Inputs of water, along with mineral and organic matter from the Tibetan Plateau (sometimes called “Asia’s water tower”) and the nearby Himalaya, spawn the mega river systems that together convey approximately 25% of the world’s riverine sediment flux to marginal seas.

The catchments and receiving basins of these rivers have distinctive physical settings, and humans influence them in vastly different ways. For example, sediment fluxes through many of these rivers have whipsawed because of human activity during the Holocene. Early agricultural and other land use practices resulted in a large increase in sediments in the rivers. Then, beginning in the 1950s, accelerated dam construction caused a dramatic decrease in sediment loads. This latter trend has been particularly acute for Chinese rivers, where sediment supplies have declined to about 20% of predam levels during the past half century. During this period, about 250 “mega dams” and many smaller dams were constructed across Asia [Gupta et al., 2012].

In contrast, the large rivers that drain to the ocean via Bangladesh and Myanmar have not yet seen such dramatic decreases—the Ayeyarwady (Irrawaddy) and the Thanlwin (Salween), both of which flow through Myanmar, are the last remaining free-flowing rivers in Asia [Grill et al., 2019].

Despite the strong societal relevance of these rivers, temporal and spatial variations in the compositions and concentrations of the particulate and dissolved matter they carry are poorly known. Likewise, the effects of nutrients and pollutants supplied from Asian rivers to the coastal ocean, including hypoxia and acidification, are not thoroughly characterized, especially under rapid climate change and intensifying human activity. Contrasting the journeys of materials through two or more Asian rivers with a common origin would allow scientists to explore the major controls on sediment fate and bioactive element cycling along their sedimentary continuums.

Reading the Rivers’ Changing Signals

The transfer and cycling of dissolved and particulate material in rivers are complex and, in many systems, have been strongly perturbed by human activity [e.g., Syvitski and Kettner, 2011]. We need to understand the mechanisms by which sediment and associated bioactive constituents (the environmental signals) vary in space and time along the sedimentary continuum to determine economic and societal impacts related to current and future human perturbations.

Climatic and geologic influences like precipitation patterns and rock weathering significantly influence the particulate and dissolved loads entering rivers. Once these materials become part of river flows, river systems process and preserve source input signals in different ways depending in part on prevailing climates, which can range from temperate to tropical, and on the temporary storage and release of materials along the sedimentary continuum, among many other factors. After the rivers enter the coastal oceans, sediment continues to be processed, and chemical exchanges occur as a function of oceanographic conditions and other human influences.

One important variable in understanding biogeochemical cycling of river-borne material is the residence time of sediment in different parts of the sedimentary continuum. For example, depositing sediment in natural river floodplains temporarily removes material from the transport system. On floodplains, this material is exposed to weathering and diagenetic (sedimentary rock–forming) conditions that may affect the breakdown of mineral and organic constituents and the release of solutes into groundwater. In a natural state, the same sediments once deposited onto a floodplain may be reentrained into the transport system as a river switches course. Thus, a mixture of new and “aged” material dictates the overall composition of solid and dissolved materials in rivers.

As rivers and floodplains become extensively modified by human activities, the amount of time that sediments spend in the river and floodplain can be altered in ways that in turn disrupt natural sedimentary and biogeochemical cycles. Increased erosion and runoff from agricultural practices can decrease the time that sediments spend in a floodplain. In contrast, if human activities disconnect rivers from their floodplains, for example, with flood control structures, sediment residence times in floodplains can increase.

Similar examples of human influence can also be found in the marine portion of the sedimentary continuum. This area is often assumed to be mainly a sink where sediments are buried offshore, effectively isolating them from further exchange with overlying ocean waters. However, even in these offshore areas, human activities may play major roles in the exchange of bioactive elements between the seafloor and ocean waters above. For example, bottom trawling may affect nearly 100% of the continental shelf seafloor in heavily fished regions [Puig et al., 2012]. This activity churns the seabed, injecting oxygen-rich water into the sediment, dramatically changing the conditions for seafloor diagenesis and the flow of dissolved constituents into the coastal marine waters above, and ultimately altering chemical exchange between the sediments and the wider ocean.

Building an Integrated View

Over the past several decades, the geoscience community has extensively investigated Earth surface processes along many Asian continental margins, but most studies have examined individual components of the system. As a consequence of this compartmentalized approach, fundamental questions about the overall fluxes and fates of material in Asian mega rivers remain unanswered. Changes effected by human disturbances along rivers are exacerbated by climate change (including changing monsoon conditions) and sea level rise, and they represent clear and imminent threats to human health and economic development. Yet the future magnitude and trajectory of such impacts are largely unknown.

For example, the planned construction of major dams on the Ayeyarwady and Thanlwin Rivers, which flow into the northern Andaman Sea, likely will have major consequences for material transport, coastal erosion, and land loss. Like many densely populated major river deltas, the low-lying areas of the Ayeyarwady delta are considered to be under high threat from accelerated sea level rise. Although the Ayeyarwady and Thanlwin are currently free-flowing, even now they are not free of human influences. Sand mining in the rivers and deforestation of coastal mangroves are accelerating, potentially driving significant shoreline erosion and land loss.

Another example is the Yangtze River, considered one of the “mother rivers” of China, which extends thousands of kilometers from the Tibetan Plateau to the East China Sea. Inputs from this river into the sea mix with the large cumulative inputs from the small mountainous rivers of Taiwan delivered to the Taiwan Strait, which connects the East and South China Seas.

In contrast to large river systems in North America and many other parts of the world, which have generally experienced human influences for hundreds of years or less, humans have been altering sediment loads in Asian mega rivers for more than 1,000 years [Walling, 2011]. Today, human influences on the Yangtze are particularly acute, with many megacities (e.g., Chongqing, Shanghai), large dams (e.g., Three Gorges), and areas of dense industrialization lining its course and polluting its waters. The Yangtze supplies more plastic to the ocean than any other river [Lebreton et al., 2017] and has experienced dramatic climate, land use, and dam-induced fluctuations of its water and sediment fluxes [e.g., Yang et al., 2015].

Thus, a large-scale interdisciplinary program is needed to address key questions regarding the ways that human, climate, and sea level changes will affect public health and economic development in the future in such complex systems.

A Transformative Program

Future progress for the Asian mega river research community presents challenges as well as opportunities. A key to success will be the continued development of predictive models that incorporate climate change; landscape evolution; sea level change; and the storage, transfer, and transformation of materials through space and time. We also need to better understand the coupling between particle transport processes and the biogeochemical signals they carry from the terrestrial to the marine environment.

The program proposed at the October 2019 workshop would be transformative, in that it would foster research that transcends traditional disciplinary boundaries and considers the sedimentary continuum holistically. Scientific communities (representing, e.g., geology and biogeochemistry) have a rich opportunity to collaborate and provide a comprehensive understanding of sedimentary dynamics and bioactive element cycling by investigating residence times, transport pathways, and exchanges of sediment and associated organics.

Providing synoptic measurements along the entire sedimentary continuum and developing the ability to predict future changes, from mountains to the deep sea, will require innovations in several areas:

• in the development of new satellite sensors, such as those aboard NASA’s Surface Water and Ocean Topography (SWOT) satellite
• in modeling methods that couple landscape and seascape evolution and biogeochemical and sediment transport modules
• in observational approaches such as cabled observatories, gliders, and benthic tripods
• in geochemical proxies using, for example, nontraditional stable or metal isotopes or novel biogeochemical markers

These approaches must be combined with insights gleaned from the sedimentary record to understand past conditions and with modeling approaches that allow us to predict future trends. In this way, a new understanding of the sedimentary continuum along Asia’s mega rivers through space and time can be applied to address health, environmental, and economic concerns of the future.

How Does Convection Work Over the Tropics?

Thu, 05/14/2020 - 11:30

One important topic in tropical meteorology is the geographical distribution of rainfall across the tropics, which encompasses both deserts and rain forests that receive very high rainfall. Using an idealized theoretical framework for the tropical atmosphere, Zhang and Fueglistaler [2020] explain important features of the tropical atmosphere seen in observations and models. In particular, although precipitation behaves very differently over land and over ocean, the mean moist static energy in the subcloud layer has very similar distributions over land and ocean in convective regions. This result is associated with a horizontally uniform free tropospheric temperature across the tropics.
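For reference, moist static energy is conventionally defined as (a standard textbook formulation, not notation drawn from the paper itself)

$$h = c_p T + g z + L_v q,$$

where $c_p$ is the specific heat of air at constant pressure, $T$ is temperature, $g$ is gravitational acceleration, $z$ is height, $L_v$ is the latent heat of vaporization, and $q$ is specific humidity. Because $h$ is approximately conserved during both dry and moist adiabatic motion, a near-uniform subcloud-layer $h$ in convecting regions goes hand in hand with the horizontally uniform free tropospheric temperature described above.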

This similarity of moist static energy over land and ocean holds whether convective areas are identified from reanalysis fields or from satellite observations, as well as in CMIP5 model simulations under a variety of climate scenarios. The idealized framework is therefore useful for understanding observations and will be helpful in determining how the tropics will respond to anthropogenic climate change.

Citation: Zhang, Y., & Fueglistaler, S. (2020). How tropical convection couples high moist static energy over land and ocean. Geophysical Research Letters, 47, e2019GL086387. https://doi.org/10.1029/2019GL086387

—Suzana Camargo, Editor, Geophysical Research Letters
