Earth & Space Science News

Effects of Variability in Atlantic Ocean Circulation

Tue, 07/09/2019 - 11:57

In the Atlantic Ocean, changes in the circulation of water are closely connected with the multidecadal variations seen in ocean conditions and their associated climate impacts. A recent paper in Reviews of Geophysics synthesizes modern observations, climate simulations, and paleo evidence to provide a comprehensive picture of our current understanding of this linkage, which is crucial for predicting future climate change and variability. The review is part of a special collection of journal articles jointly supported by the US AMOC Science Team and the UK RAPID program. Here, the authors give an overview of the topic and outline key challenges to be addressed in future research.

What is “Atlantic multidecadal variability” and what are the key characteristics?

Top: Time series of observed Atlantic multidecadal variability over the instrumental era. Bottom: Observed sea surface temperature pattern associated with AMV. Credit: Zhang et al. [2019], Figure 2

Atlantic multidecadal variability (AMV) refers to large-scale, slow variations in Atlantic Ocean conditions relative to the signal associated with global mean changes.

Scientists have observed key characteristics that indicate this variability.

One is opposite sea surface temperature patterns seen over the North and South Atlantic.

Another is co-occurring multidecadal variations in surface temperature, salinity, and heat fluxes over the mid- to high-latitude North Atlantic.

Further characteristics that have been observed include opposite multidecadal variations between upper and deep subpolar North Atlantic temperatures, as well as between surface and subsurface tropical North Atlantic temperatures.

What is “Atlantic meridional overturning circulation” and how is it connected with AMV?

The Atlantic meridional overturning circulation (AMOC) comprises a northward flow of warm salty water in the upper layers of the Atlantic Ocean and a southward flow of cold fresh water at depth. This circulation pattern transports a huge amount of heat northwards.

Multidecadal variations in the amount of heat transported by the AMOC can cause sea surface temperature anomalies in the subpolar North Atlantic, which is one of the key characteristics of AMV.

Coupled ocean-atmosphere interactions in response to AMOC-induced subpolar changes are important for these sea surface temperature anomalies to propagate from the subpolar North Atlantic into the tropical North Atlantic along a horseshoe-shaped pathway.

Many observational and modeling studies are consistent with the idea that multidecadal variability in the AMOC acts as an instigator of the observed AMV.

How do multidecadal AMOC variability and AMV impact climate?

Atlantic multidecadal variability has impacts on many regional and hemispheric-scale climate phenomena that have enormous societal and economic implications.

These include shifts in the Intertropical Convergence Zone; summer monsoon in the Sahel and India; hurricanes in the Atlantic; the El Niño Southern Oscillation; Pacific Decadal Variability; North Atlantic Oscillation; climate over Europe, North America, and Asia; sea ice and surface air temperature over the Arctic; and mean surface temperature in the Northern Hemisphere.

Recent observational and modeling evidence suggests that the AMOC plays an essential role in driving many of these climate impacts, particularly through variations in surface heat released from the ocean into the atmosphere over the mid- to high-latitude North Atlantic.

Can the relationship between AMOC and AMV be used to predict future climate changes and impacts?

The relationship between AMOC and AMV is valuable for predicting future climate changes and impacts on decadal timescales. Climate models initialized with observed AMOC anomalies at northern high latitudes can successfully predict the decadal shifts in various Atlantic conditions and associated climate impacts that follow. The subpolar North Atlantic emerges as a key region for predicting the tropical signal.

What are some of the unresolved questions where additional research, data or modeling is needed?

Most state-of-the-art climate models underestimate the amplitude of multidecadal AMOC variability. This leads to an underestimation of the linkage between the AMOC and AMV and associated climate impacts. Thus, there are both serious challenges and great opportunities for making substantial improvements in our understanding of these phenomena and predicting their behavior and impacts.

It would be valuable to employ a hierarchy of models, expand the number of high-resolution paleo records, and maintain long-term instrumental observations in the future.

—Rong Zhang (email: Rong.Zhang@noaa.gov), National Oceanic and Atmospheric Administration; Rowan Sutton, University of Reading; Gokhan Danabasoglu, National Center for Atmospheric Research; Young-Oh Kwon, Woods Hole Oceanographic Institution; Robert Marsh, University of Southampton; Stephen G. Yeager, National Center for Atmospheric Research; Daniel E. Amrhein, University of Washington; and Christopher M. Little, Atmospheric and Environmental Research, Inc.

Fireballs Could Provide Clues to an Outstanding Meteor Mystery

Tue, 07/09/2019 - 11:54

Be on the lookout for fireballs as you watch the night sky over the next few weeks—and perhaps even during the day.

From the end of June until about mid-August, Earth will pass through the Taurid Complex, a collection of rock and dust linked to an increased number of upper atmosphere incendiaries. A new model predicts that Earth will be closer to the heart of the swarm than it has been in almost 45 years, providing not only a treat for sky watchers but also an opportunity to investigate the mystery of where these objects come from and whether Earth might suffer a Taurid-related impact in the future.

The new calculations suggest that Earth will pass within 10 million kilometers of the center of the swarm, a dense cluster of objects within the larger Taurid meteoroid stream.

“It gives us a great opportunity to view it,” says David Clark, a graduate student at Western University in London, Ontario, Canada. Clark is lead author on the new research, which was published in Monthly Notices of the Royal Astronomical Society. After analyzing several visible fireballs during the 2015 passage, the astronomers modeled the motion of the swarm’s core.

Sky watchers in Earth’s Southern Hemisphere will catch a glimpse between 5 and 11 July, whereas those in both hemispheres will get a peek between 21 July and 10 August. Watchers will catch another glimpse of the meteor shower in late August, but at that time conditions won’t be at their peak for astronomers to study the swarm’s dimmest objects.

By observing the quantity of 100-meter near-Earth objects (NEOs) within the swarm, Clark and his colleagues hope to constrain the number of potentially dangerous objects that could one day collide with Earth. Current estimates by NASA’s Center for Near Earth Object Studies put the odds of a 100-meter impactor hitting the planet at once every 10,000 years. But if the Taurid stream contains a cluster of objects that Earth passes through periodically, the threat could increase, Clark says. Determining how many of these smaller objects travel in the swarm could help researchers understand a potential threat to our planet.
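
That recurrence figure can be turned into a rough probability with a couple of lines of arithmetic. The sketch below is purely illustrative: it uses only the numbers quoted in this article and assumes the quoted recurrence interval behaves like a constant annual rate, which is exactly the assumption a periodic swarm encounter would violate.

```python
# Illustrative arithmetic only, based on the figure quoted above: NASA's
# Center for Near Earth Object Studies puts the odds of a 100-meter
# impactor at roughly once every 10,000 years. Treating that as a
# constant annual rate (a simplifying assumption), the chance of at
# least one such impact over a given time horizon is:

def impact_probability(recurrence_years: float, horizon_years: float) -> float:
    """Probability of at least one impact over the horizon,
    assuming independent years with rate 1/recurrence_years."""
    annual_rate = 1.0 / recurrence_years
    return 1.0 - (1.0 - annual_rate) ** horizon_years

# Over a century at the quoted rate, the probability is about 1%.
print(f"{impact_probability(10_000, 100):.2%}")
```

A clustered stream would concentrate that risk into the years when Earth crosses the swarm, which is why counting the swarm's 100-meter objects matters.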

Mysterious Origins

The Taurid meteor shower has been linked to an increase in fireballs in 1995 and 2005, as well as to more than 100 in 2015. It has also been connected to the Tunguska event of 1908, the largest impact event in recorded history, in which a 60- to 190-meter meteorite exploded in the air over Russia and flattened 2,000 square kilometers of forest.

In 1908, a meteor explosion over central Russia flattened more than 2,000 square kilometers of forest. The meteorite that caused the explosion has been linked to the Taurid meteor shower. Credit: Western University

Although most meteor showers come from debris left behind from comets, the Taurids’ source remains a puzzle. The showers, which occur twice a year over a span of several weeks, have been connected to comet 2P/Encke, but the comet today is too weak to produce enough material to fuel them. In the past, scientists have speculated that a massive comet, as large as 5 kilometers across, was destroyed through interactions with the Sun. Encke and its smaller siblings may be all that remain of the giant.

In the past, two independent teams of researchers have analyzed the makeup of NEOs associated with the Taurids. Although both found that the composition of their targets was predominantly primordial, the larger mystery remains.

“The comet is the odd one out,” says Alan Fitzsimmons, who studies meteors at Queen’s University Belfast and was not part of the recent research. Encke’s oddness doesn’t rule out a giant comet origin, but it makes it less likely, Fitzsimmons says.

The second team, led by Marcel Popescu of the Observatoire de Paris, found that one of the six largest targets related to the Taurids could be a meteorite primitive enough to be related to Encke. “One is a positive result,” says Popescu, pointing out that his team’s observations support the possibility of a giant comet origin.

Scanning the Skies

Clark and his colleagues are currently working to find telescopes to target the Taurids. They currently plan to study the heart of the stream with the Canada-France-Hawaii Telescope in Hawaii and are hoping to get observations from the Dark Energy Camera in Chile.

The largest objects in the Taurid swarm may show up on surveys already dedicated to hunting down NEOs. “We’ll use as many resources as people volunteer to take us up on,” Clark says.

The cluster will likely be too dim for amateur astronomers to spot.

Although scientists aren’t anticipating a significant threat to Earth this summer, they are looking forward to seeing fireballs.

“There’s nothing to be worried about, but keep an eye on the sky and you might…see something pretty impressive,” Fitzsimmons says. “The chance of [an] impact by a large Taurid is still low, but the chance of seeing a really impressive fireball burning up in the upper atmosphere is not so bad.”

—Nola Taylor Redd (@nolatredd), Freelance Journalist

Mmm, Salt—Europa’s Hidden Ocean May Contain the Table Variety

Tue, 07/09/2019 - 11:47

If you’re wagering where life might exist beyond Earth, Europa is always a safe bet. That’s because this moon of Jupiter has a liquid-water ocean beneath its icy surface, making it a likely incubator for marine life. Researchers now have shown that Europa’s ocean probably contains sodium chloride (NaCl), the same stuff we sprinkle on our french fries and also the dominant form of salt in our own planet’s ocean.

A Weird, Watery World

Europa, Jupiter’s fourth-largest moon, is an enigmatic world: Ultraviolet auroras dance over its poles, its surface has hot and cool spots, and enormous blades of ice may be clustered around its equator. Since the 1970s, researchers have hypothesized that Europa might harbor a liquid ocean under its icy surface. Spacecraft-based observations have since confirmed Europa’s hidden ocean and shown that it’s salty, and water vapor plumes spotted emanating from the moon have provided additional evidence of its watery interior.

Samantha Trumbo, a planetary scientist at the California Institute of Technology in Pasadena, and her colleagues now have studied the chemistry of Europa’s ocean. They did so by investigating the moon’s surface, specifically the geologically young regions called “chaos terrain,” where ocean water likely upwells. In these areas, the icy surface looks to have been wrenched apart, said Trumbo. “These regions are probably the most representative of the internal composition [of Europa].”

Hubble Space Telescope Looks for Salt

In 2017, Trumbo and her collaborators collected spectroscopic observations of Europa’s surface using the Hubble Space Telescope. The data, with a spatial resolution of roughly 150 kilometers, spanned from ultraviolet to infrared wavelengths.

The researchers were looking for two absorption features characteristic of sodium chloride that’s been bombarded by high-energy electrons. (These electrons, which originate mostly from volcanic eruptions on Jupiter’s moon Io, alter NaCl’s crystalline structure.) The two absorption features fall within the blue and red parts, respectively, of the visible spectrum.

Trumbo and her team found just the absorption feature in the blue part of the visible spectrum, which actually makes sense, said Trumbo. Laboratory experiments that re-create the absorption in the red part of the visible spectrum bombard sodium chloride with 10,000–100,000 times the true radiation flux at Europa’s surface, she said. “At the real flux levels of Europa, this feature would never form.”

Finding evidence of sodium chloride was a bit of a surprise, said Trumbo. “Sulfates on the surface of Europa have been the prevailing view since the Galileo mission in the 1990s.”

To confirm that sodium chloride was really the cause of the absorption they observed, the researchers took spectra of other irradiated salts like magnesium sulfate (MgSO4), calcium carbonate (CaCO3), and magnesium chloride (MgCl2) in the laboratory. None of the compounds they tested exhibited absorption in the blue part of the visible spectrum, and several had strong absorption features at other wavelengths that the researchers didn’t see in their Hubble data.

Concentrated in Chaos

Trumbo and her colleagues found that sodium chloride on Europa was mainly concentrated in the chaos terrain. Because that’s where subsurface water likely upwells, this finding is consistent with the salt deriving from the moon’s ocean, the researchers suggest.

“This marked correlation with geologically young chaos regions suggests an interior source,” they wrote in their paper, which was published in June in Science Advances.

“This new study took a novel approach that combines telescopic observations, laboratory experiments, and geochemical analysis,” Xianzhe Jia, a planetary scientist at the University of Michigan who was not involved in the research, told Eos. If NaCl exists in the moon’s subsurface ocean, it “suggests that Europa’s ocean might be more Earth-like compositionwise than previously thought.”

Scientists are looking forward to getting a closer look at Europa and its ocean with NASA’s Europa Clipper mission, which will put a spacecraft in orbit around Jupiter. The spacecraft, slated to launch in the 2020s, will fly as close as 25 kilometers to Europa’s surface—the closest flyby ever of this moon—and will analyze the celestial body using a suite of cameras, thermal imagers, spectrographs, and ice-penetrating radar.

—Katherine Kornei (@katherinekornei), Freelance Science Journalist

Solar Properties Rival for Control of Mars’s Bow Shock

Tue, 07/09/2019 - 11:30

Every planet with an atmosphere or magnetic field has a “bow shock,” a boundary at which the supersonic solar wind suddenly slows down so it can be diverted around the planet. At Earth, the main controlling factors for the location of this bow shock are properties of the solar wind, especially its speed. Mars has an atmosphere but does not have a strong internal magnetic field, so this boundary is quite close to the planet.

By examining 11 years of bow shock crossings by the Mars Express satellite, Hall et al. [2019] reveal that the solar flux of extreme ultraviolet (EUV) light, the photons responsible for ionizing the planet’s upper atmosphere, contributes as much as the solar wind flow parameters to controlling the location of the bow shock at Mars. Properties of the solar wind are important on short timescales, but this influence is superimposed on a longer-term variation governed by the expansion of the neutral atmosphere from EUV heating and ionization.

Control of the bow shock by the solar EUV flux is completely negligible at Earth, revealing a key difference between the space environments of these two neighboring planets. This finding should be taken into account when interpreting other observations around Mars, in particular of planetary atmospheric loss to deep space.

Citation: Hall, B. E. S., Sánchez‐Cano, B., Wild, J. A., Lester, M., & Holmstrom, M. [2019]. The Martian bow shock over solar cycle 23–24 as observed by the Mars Express mission. Journal of Geophysical Research: Space Physics, 124. https://doi.org/10.1029/2018JA026404

—Mike Liemohn, Editor in Chief, JGR: Space Physics

University of Alaska Faces Budget Crisis

Mon, 07/08/2019 - 19:04

Alaska Governor Mike Dunleavy’s decision to slash the state’s funding for the University of Alaska by nearly 41% has the academic and research community up in arms. They are waging a nail-biter campaign to convince the state legislature to override a veto that they say will inflict devastating and long-lasting damage on the university and the state, as well as on world-class research about the Arctic, climate change, and a host of other disciplines.

Dunleavy’s “draconian cut” of $130 million is “really stunning,” University of Alaska president James Johnsen told Eos.

Johnsen is spearheading an effort to restore the $322 million in state funding that the legislature approved for the current fiscal year that began on 1 July but that the governor vetoed on 28 June. That figure was already down from the $327 million the university received in the previous fiscal year.

The university system’s entire annual budget is about $900 million, with other streams of revenue including tuition, research grants, and contracts, according to Johnsen.
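
The “nearly 41%” figure reconciles with the dollar amounts reported here. A quick sanity check, using only the numbers quoted in this article:

```python
# Sanity check on the budget figures quoted in this article.
approved_state_funding = 322_000_000  # approved by the legislature for the fiscal year
vetoed_amount = 130_000_000           # the governor's line-item veto
total_annual_budget = 900_000_000     # approximate systemwide budget, per Johnsen

share_of_state_funds = vetoed_amount / approved_state_funding
share_of_total_budget = vetoed_amount / total_annual_budget

print(f"{share_of_state_funds:.1%} of state funding")    # about 40%, the "nearly 41%" cut
print(f"{share_of_total_budget:.1%} of the total budget")
```

The veto thus removes roughly two fifths of the state appropriation, but about a seventh of the system’s total revenue, which is why tuition, grants, and contracts cannot simply absorb it.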

The legislature could override Dunleavy’s veto decision during a special legislative session that begins today, 8 July, in Wasilla. An override requires 45 of the 60 members of the legislature. Johnsen said that threshold is “an extremely heavy lift” but that his team “is working very, very hard to make it happen.”

If the budget cut is enacted, it “will strike an institutional and reputational blow [to the university] from which we may likely never recover,” Johnsen wrote to the University of Alaska (UA) community on 28 June. The entirety of the reduction, he wrote, is targeted on the appropriation for the University of Alaska Fairbanks (UAF), University of Alaska Anchorage (UAA), and the university’s administrative unit that provides central services for the entire system. Johnsen told Eos that the cuts likely would result in fewer faculty, lower student enrollment, and reduced research and tuition revenues.

The University of Alaska is facing a budget crisis. Hundreds gathered at the Anchorage Legislative Building on 2 July for a rally to protest the vetoes and show support for the University of Alaska. Credit: University of Alaska Anchorage

The budget crisis facing the university, social services, and other programs across the state is tied to Dunleavy’s determination to provide Alaskan residents with $3,000 through the state’s Permanent Fund Dividend (PFD). Since the PFD’s initial dividend year of 1982, the largest previous payment was about $2,100.

Dunleavy “would rather give every Alaskan 3,000 bucks and let them spend it as they wish than provide funding for public broadcasting, Medicaid, homeless shelters, K–12, pre-K, [and] university education,” Johnsen told Eos, noting that the governor thinks that individuals make better decisions about their money than the government does. “I don’t think it’s good for our state, but that’s his whole philosophical view.”

The governor has ties to the libertarian Koch brothers and their conservative advocacy group, Americans for Prosperity, with that group’s Alaska chapter having sponsored the governor’s field hearings about the budget earlier this year, according to the Anchorage Daily News and the Center for Media and Democracy, a watchdog group based in Madison, Wis.

Chance for an Override?

Some political analysts think that as many as 40 state legislators may already be in favor of overriding the governor’s decision, but getting the remaining five votes will be difficult. One state legislator favoring an override is Rep. Grier Hopkins (D-Fairbanks). Hopkins told Eos that the effects Dunleavy’s decision would have on the university system “are immeasurable” and would spread across the state.

“A broad bipartisan coalition of state lawmakers adamantly opposes the governor’s lack of vision for Alaska’s economy and university system, and we will work together to achieve the three quarter vote needed to override” the governor’s “reckless” cuts, said Hopkins, whose legislative district includes UAF. Hopkins, the chair of the House Energy Committee, is a member of the Alaska House Majority, a 24-member coalition of Democrats, Republicans, and Independents that controls the House of Representatives. “We will do everything in our power to protect the important research that occurs every day at UA,” he said.

A Sad Day for Alaska

Fran Ulmer, a former UAA chancellor and former Alaska lieutenant governor, told Eos that Dunleavy’s cuts “are disastrous.” She said that the damage to the university, students, faculty, and staff will have long-lasting impacts on the future of the state. “It takes a long time to build up the university’s competency and public confidence in the excellent programs that have been developed [at the university]. It only takes one very unfortunate moment for the governor to destroy that with his red pen. A sad day for Alaska.”

“If we have to absorb the governor’s budget cuts, it will eventually devastate the university,” Larry Hinzman, UAF’s vice chancellor for research, told Eos. “It will take decades of return funding to recover.”

The University of Alaska produces more journal articles, and receives more citations, on the Arctic than any other institution in the world. Pictured, UAF Geophysical Institute graduate student Joanna Young sets up a steam drill to install stakes for measuring glacier melt on the Jarvis Glacier, about 56 kilometers south of Delta Junction in east central Alaska. Credit: Todd Paris, University of Alaska Fairbanks

Hinzman pointed out that the university is a leader in environmental, climate change, and Arctic system research, as well as research on Arctic culture and social issues. He also noted that UAF produces more journal articles, and receives more citations, on the Arctic than any other institution in the world. Hinzman stressed that the university and its research are major contributors to Alaska’s economy and development.

One example of tangible economic benefits the university provides to the state is its research on permafrost, which is present in 80% of Alaskan land, according to Dmitry Streletskiy, president of the nonprofit U.S. Permafrost Association and associate professor of geography at George Washington University. In a 2 July letter to Alaska State Senate President Cathy Giessel, Streletskiy urged overturning Dunleavy’s veto.

“The extraordinary knowledge base and human resources pertaining to permafrost that currently exist in Alaska are in large part the results of past and ongoing teaching and research faculties and facilities that exist on the Fairbanks and Anchorage campuses,” Streletskiy wrote. “The proposed budget reductions, if implemented, would cause an irreversible impact on the training, engineering, and research capabilities that are required to sustain Alaska’s present and future economies. This would come at a time when warming and thawing of permafrost is accelerating and related mitigation techniques are required. Maintaining the University’s ongoing contributions in engineering and science is critical to future economic development and resource management in Alaska.”

Mark Myers, former UAF vice chancellor for research and former director of the U.S. Geological Survey, told Eos that the budget cuts are upsetting because research in the Arctic is so important and the university’s efforts—through its programs and its national and international collaborations—provide a great vehicle to understanding the Arctic.

Myers added that Alaska is a resource-based economy that needs to diversify. “Well, if it wants to diversify into a knowledge-based economy, what leads that in Alaska is the university’s research and development and training of the students and developing their critical thinking skills and their research capacity.”

An Inopportune Time for Cuts

Other scientists at the university also spoke out strongly against the cuts.

Hajo Eicken, director of the International Arctic Research Center at UAF, said that the cuts not only are disruptive but are coming at an inopportune time for the state. In terms of environmental upheaval and socioeconomic shifts in a broader geopolitical context, Alaska is facing “some of the most substantial changes you are seeing within the U.S. or possibly even throughout the Arctic at the moment,” he told Eos.

“Alaska is at a major crossroads. Fifty years from now, 20 years from now, probably 10 years from now, the state likely will look very different than what it looks like today. The university is the only tool the state has to look into the future in a broad and educated informed way and help the state figure out what are our options. By enforcing these cuts, the legislature and people of Alaska are depriving themselves of that opportunity to look at these different options.”

Robert McCoy, director of the Geophysical Institute at UAF, said that the governor’s “devastating” cuts caught everybody by surprise. With interest in the Arctic increasing, “we are in a great place to assist federal agencies and other universities and internationally in Arctic and global change research,” he said. “This budget cut is really going to hurt.”

Alex Webster, a postdoctoral scholar at the Institute of Arctic Biology at UAF, added that the cuts would leave the university unable to attract or retain bright young minds in the future.

“I fear for what Alaska will look like following the massive brain drain that is to ensue,” she told Eos. “The University of Alaska is a beacon of education, research, and innovation in what would otherwise be a mostly extraction-driven economy. It is a critical hub for global climate change research in particular. Everyone, from everyday Alaskans, to the international research community, will suffer if it is sent into the death spiral that these cuts are designed to create.”

Devastating Cuts for Research and for Students

United Academics, which represents about 1,200 full-time faculty at the university, has come out forcefully in opposition to the cuts.

Critics warn that proposed cuts to the University of Alaska will result in lower enrollment and a brain drain from the state. Pictured, undergraduate students participate in field studies at the University of Alaska Anchorage’s Kachemak Bay Campus of Kenai Peninsula College located in Homer, Alaska. Credit: Kenai Peninsula College

United Academics president Abel Bult-Ito told Eos that the cuts “will decimate the ability for our faculty to continue their highly competitive research on climate change, Arctic processes, environmental, Earth, and space science, biomedicine, and many other fields. We are currently the world leader in Arctic research, but these cuts will most certainly not allow us to maintain this position. These cuts also would be devastating for our students. They will not know whether their degree program will be left standing, their prospects of finishing their degree will be uncertain, and enrollment will plummet. Why would students want to come to UA if their educational opportunities are severely cut and they do not know whether their program will still exist?”

Bult-Ito, who also is a professor of neurobiology and anatomy at UAF, said that university faculty “will fight tooth to nail to make sure that the harm done to our students will be as little as possible.”

The budget cuts, Bult-Ito added, are terrible for the university and represent “a dismantling of the state of Alaska” by harming residents across the state, including students, the young, and the elderly. “The irony is that this is an entirely manufactured budget crisis to dismantle public services and public education and kick Alaska back into the Dark Ages in which Alaska will be reduced to a resource extraction colony for multinational corporations that leaves workers and the people of Alaska begging for the scraps,” he said.

Next Steps

If the veto override efforts fail, the university’s board of regents has directed UA president Johnsen to prepare a plan for financial exigency by 15 July, which would permit the university to rapidly discontinue programs and academic units and start the process of removing tenured faculty, according to Johnsen’s 28 June letter to the university community.

If the override fails, Johnsen told Eos that the university is “going to do our best to fence off our super strong research institutes so that they are not impacted negatively.” He said that one criterion the university will use to evaluate programs will be a consideration of the return on state investment. Currently, $1 of state money put into research at the university generates $6 of nonstate funds, according to Johnsen.

The size of the budget cuts means tough choices, Johnsen said. “Does one nickel and dime every one of our universities and close some community campuses? Well, you can close all the community campuses and you get to about $30 million. You’ve got [about] another $100 million to go.”
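
Johnsen’s figures can be combined into a rough sketch of the shortfall. The dollar amounts below come from this article; the research share is a hypothetical value chosen purely for illustration:

```python
# Rough arithmetic on the figures Johnsen cites in this article.
total_cut = 130_000_000
campus_closure_savings = 30_000_000   # closing all community campuses, per Johnsen
leverage = 6                          # $6 of nonstate funds per $1 of state research money

remaining_gap = total_cut - campus_closure_savings
print(f"${remaining_gap / 1e6:.0f} million left to find")  # the "[about] another $100 million"

# Hypothetical: if a quarter of the remaining gap fell on research programs,
# the leveraged loss in nonstate funds would be several times larger.
research_share = 0.25  # assumed for illustration; not a figure from the article
leveraged_loss = remaining_gap * research_share * leverage
print(f"${leveraged_loss / 1e6:.0f} million in nonstate research funds at risk")
```

The leverage factor is why Johnsen wants to “fence off” the research institutes: every state research dollar cut would take several nonstate dollars with it.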

Johnsen and others, however, hope the legislature overrides the governor’s budget cuts. “You’re not going to have a great state without a great university,” Johnsen said.

—Randy Showstack (@RandyShowstack), Staff Writer

9 July 2019: This story has been updated to correct the name of Alaska State Senate President Cathy Giessel.

Forgotten Legacies: Understanding Human Influences on Rivers

Mon, 07/08/2019 - 19:01

This is an excerpt from a submission to the AGU Grand Challenges Centennial Collection that has been edited for length and clarity. Read the paper in its entirety here.

Rivers are fundamental landscape components that provide vital ecosystem services, including drinking water supplies, habitat, biodiversity, and attenuation of downstream fluxes of water, sediment, organic carbon, and nutrients. Extensive research has been devoted to quantifying and predicting river characteristics such as stream flow, sediment transport, and channel morphology and stability. However, scientists and society more broadly are often unaware of the long-standing effects of human activities on contemporary river ecosystems, particularly when those activities ceased long ago, and thus, the legacies of humans on rivers have been inadequately acknowledged and addressed.

Legacies, in this context, are defined as persistent changes in natural systems resulting from human activities. Legacies that affect river ecosystems result from human alterations both outside river corridors, such as timber harvesting and urbanization, and within river corridors, including flow regulation, river engineering, and removal of large-wood debris and beaver dams.

Failure to recognize the legacies of historical activities can skew perceptions of river process (the interactions among and movement of materials in a river system) and form (the physical configuration and characteristics of the land and vegetation in a river system) as well as the natural range of variability in river ecosystems, which in turn hampers informed decision-making with respect to river restoration and management efforts. This scenario has played out prominently, for example, with rivers in the Mid-Atlantic Piedmont and Pacific Northwest of the United States.

The long history and ubiquity of human alterations on river ecosystems have resulted in several grand challenges for scientists and society today: (1) recognizing the existence of legacies that continue to affect river ecosystems; (2) understanding the timing, type, and spatial extent of legacy sources and the intensity of human activities that caused them; (3) understanding the implications of legacies on river process, form, and ecosystem services; and (4) designing river management and restoration strategies that enhance ecosystem services.

Characteristics of River Ecosystems

A river corridor can be described with respect to process and form. Process describes the fluxes of materials within a river corridor and the interactions between these materials, as well as the physical configuration, biogeochemical characteristics, and biotic communities of the river corridor. Process thus includes interactions as diverse as channel bank erosion, nitrate uptake, or germination of riparian vegetation on newly deposited sediment. Form includes the geomorphic configuration of the land surface, the vegetation occupying that surface, and the stratigraphy underlying the surface.

Water and sediment are typically considered the primary inputs to river corridors that impact river process and form. In places where large wood is naturally abundant, however, it can also play a major role, significantly influencing fluxes of water and sediment as well as the form and geomorphic and ecological function of river corridors. Here U.S. Forest Service employees inspect a structure of large logs emplaced on a creek on Prince of Wales Island, Alaska, to restore salmon spawning habitat. Credit: U.S. Forest Service photo by Ron Medel

The three primary physical inputs to river corridors are water, sediment, and large wood, which interact to sustain river ecosystems. Changes in the fluxes, behavior, and interactions of these materials produce changes in river process and form, and these changes differ depending on whether a river system remains in its natural state, or regime, free of human alterations or whether human activity has altered the system. Water and sediment are commonly considered more significant inputs to river corridors than large wood. Where large wood is naturally abundant, however, the wood significantly influences fluxes of water and sediment within, as well as the form and geomorphic and ecological function of, river corridors.

Fundamentally, river corridors are dynamic systems that change continually in time and space, with adjustments occurring in response to changing boundary conditions, such as inputs of water, sediment, and large wood or shifts in base level, or in response to internal thresholds and feedbacks.

Human Alterations of River Corridors

Even in areas where the history of human alteration to rivers is relatively short, like the U.S. West, human influences have been sufficiently intense to fundamentally alter river corridor process and form. Prior to the mid-19th century, the Sacramento–San Joaquin River Delta, seen here in this false-color image captured by NASA’s Terra satellite in 2007, was an expansive marsh with continually shifting channels. But large-scale draining and reclamation of marshland for agriculture starting after the California gold rush, along with subsequent engineering projects and heavy water withdrawals for irrigation and consumption, converted the delta into a highly channelized and controlled river system. The transformation has contributed to large declines in native wildlife while numerous invasive species have taken hold. Credit: NASA image created by Jesse Allen, using data provided courtesy of NASA/GSFC/METI/ERSDAC/JAROS and U.S./Japan ASTER Science Team

Human alterations often begin with changes in land cover outside river corridors, which alter water, sediment, and large-wood yields to the corridors. A typical scenario is that increased sediment yield following clearing of native land cover causes river planforms to change. Conversely, reforestation following cessation of agriculture can reduce sediment yields and result in channel incision [Keesstra et al., 2005]. Other common human activities outside river corridors that alter inputs include urbanization [e.g., Meyer et al., 2005], altered topography [e.g., Hooke and Martin-Duque, 2012], and, indirectly, climate change [Goode et al., 2012].

At least four salient points arise from considering human alterations of process and form in river corridors. First, these alterations are ubiquitous: Human-induced climate change affects every watershed on Earth [e.g., Gosling and Arnell, 2016]. And more direct alterations, such as flow regulation, affect nearly every river basin in temperate latitudes [Nilsson et al., 2005a].

Second, human alterations of watersheds and river corridors have a long history in some regions. Land clearing that affected water and sediment yields is recorded in river valley stratigraphic records from 7,000 years ago in China [Rosen, 2008] and southeastern Europe [Lang and Bork, 2006]. The earliest known dam was built in Egypt circa 2800 BCE [Smith, 1971]. And construction of artificial levees dates back 3,500 years in China [Clark, 1982]. Even in regions with relatively short histories of intensive human alterations of river corridors, like parts of the United States and Australia, human activities have had sufficient duration and intensity to fundamentally alter river corridor process and form [e.g., Brooks et al., 2003].

Third, multiple forms of human activities often overlap in time and space. Upland deforestation and mining during the latter half of the 19th century in the Southern Rockies of Colorado, for example, occurred synchronously with flow regulation, removal of large wood from river corridors, and alteration of floodplains via construction of roads and railroads [Wohl, 2011].

Finally, the effects of historical human alterations have, in some cases, been forgotten by society when the activity that triggered the alteration is no longer occurring [e.g., Phillips, 1997]. A striking example of this involves the legacy of thousands of mill dams in the U.S. Mid-Atlantic Piedmont [Walter and Merritts, 2008]. Mid-Atlantic Piedmont streams in which legacy sediment accumulated behind now abandoned mill dams experienced a complete transformation from wide, shallow, and marshy valleys to sinuous forms lined with tall cutbanks, but the existence and cause of this metamorphosis were not widely recognized until the 2000s. The eventual breaching of abandoned dams led to increased sediment and nutrient yields downstream that were subsequently recognized as detrimental to nearshore environments in Chesapeake Bay, for example. Increasing recognition of the origins of these legacy sediments [e.g., Merritts et al., 2013] was associated with changes in restoration practices, including efforts to remove millpond sediment to restore natural wetland valley bottoms [Merritts et al., 2011; Forshay and Mayer, 2012].

A similar scenario occurred in the U.S. Pacific Northwest starting in the late 1970s as river scientists gradually recognized how much more abundant large wood had been in river corridors prior to intensive deforestation and river engineering starting approximately 150 years ago [e.g., Collins et al., 2012]. As understanding has grown of the integral role of large wood in forested river corridors, river management and restoration efforts have increasingly emphasized protecting and restoring downed wood in channels and on floodplains [U.S. Bureau of Reclamation and U.S. Army Engineer Research and Development Center, 2016].

In each of these examples, scientific recognition of the continuing effects of historical human alterations came as something of a revelation, changing the way scientists conceptualize river process and form in a particular region or type of river corridor. There is no reason to assume that analogous revelations will not occur in the future.

The Implications of Human Influence

The legacies of mill dams built on waterways in the Mid-Atlantic Piedmont of the United States offer a striking example of how societies can fail to recognize impacts of past human activities. Widespread settlement of the region starting in the late 17th century led to the construction of thousands of mill dams, such as this one built in 1900 along the New River in Fries, Va. These dams impounded sediment for decades to centuries, building up thick stacks that buried original channel bottoms behind the dams and contributing to the transformation of river corridors in the Piedmont from wide, shallow, and marshy to sinuous forms lined with tall cutbanks. Subsequent breaches of some long-abandoned and forgotten dams sent increased loads of sediment and nutrients downstream, proving detrimental to affected ecosystems, but the mill dams were not widely recognized as the true sources of the higher sediment loads until early this century. Credit: IxieVerns, CC BY-SA 4.0

Human activities have significantly affected fluxes of water, sediment, and large wood to and within river corridors. Notably, temporal and spatial fluctuations in these fluxes have decreased [Poff et al., 2007; Wohl et al., 2015]. These alterations have had many other effects as well, such as reductions in lateral mobility of river channels, which reduce the spatial heterogeneity of river systems and lateral connectivity between channels and floodplains [Florsheim et al., 2008]. Connectivity within river corridors has also declined [Kondolf et al., 2006]. These changes have simplified and homogenized river corridors, with attendant losses of habitat abundance and diversity [Peipoch et al., 2015], decreases in water quality [Erisman et al., 2013], and elevated rates of extinction for freshwater organisms compared to terrestrial organisms.

Although several high-profile papers have promoted recognition of the cumulative global effects of human alterations [e.g., Steffen et al., 2015], individual investigators have arguably not fully integrated the implications of human alterations into their research for specific watersheds. The default assumption is still commonly that outside of urban areas or obviously altered river corridors, conditions are relatively natural or at least reflect predominantly natural processes.

Ignoring the continuing effects of human alterations of watersheds can lead to fundamental misinterpretations of river process and form. And if we start with a misperception of the dynamic character or natural form of a river, we may develop less effective management targets.

Grand Challenges

The grand challenges resulting from historical human alterations of river corridors are fourfold. The first challenge is to recognize the presence of legacies that continue to influence river corridors. The second is to understand the sources of legacies, including their timing and chronology, the types and intensity of human activity involved, and the extent of the alterations. The third challenge is to understand the implications of legacies: How have process and form changed within a given river corridor, and how has this affected river functionality or ecosystem services? Is the legacy continuing to alter form and process? The fourth grand challenge is to design management or restoration strategies that can improve, or at least mitigate the loss of, river functionality or ecosystem services.

Understanding the sources and implications of legacies can be extremely difficult in regions where watersheds have experienced centuries of human influence. Reference sites in watersheds or along river reaches that have experienced minimal human alteration are thus invaluable. Knowledge of reference conditions and of the natural range of variability from historical [Pastore et al., 2010] and geological [Rathburn et al., 2013] archives provides critical insights for forecasting responses of rivers to natural disturbances or human alterations and for making effective river management decisions [e.g., Brierley and Fryirs, 2005, 2016]. For example, insights into the habitat requirements of potentially at risk species under reference conditions can help river managers target critical factors, such as the minimum duration of overbank flooding needed for successful fish spawning and rearing [Galat et al., 1998].

The fourth grand challenge is determining how to incorporate knowledge of natural conditions and human alterations into river management. There are various approaches to accomplish this. One is to maintain or restore characteristics of a river corridor that create a desired process. This approach underlies, for example, the restoration of riparian vegetation as a buffer to retain upland inputs of nitrogen, phosphorus, and fine sediment. Another approach is to create a template of river corridor form that will facilitate desired processes. Examples include the emplacement of engineered logjams [Roni et al., 2014] or beaver dams [Bouwes et al., 2016] to mimic the function of natural features, setting back levees to restore channel-floodplain connectivity [Florsheim and Mount, 2002], and removing artificial barriers to allow high flows to return to abandoned channels [Nilsson et al., 2005b].

The different approaches have advantages and limitations, but each relies on insights into how human activities have modified river corridors. Undoubtedly, the greatest hurdle for effective river management occurs when we mistake human-altered for natural process and form and thus fail to consider the full range of management strategies.

Legacies and Perceptions

A massive effort to restore the meandering course of central Florida’s Kissimmee River and allow seasonal rains and natural flooding to once again inundate the surrounding floodplain began in 1999 and is ongoing. The floodplain provides habitat for a variety of native wildlife, but damaging floods in the mid-20th century led to calls to straighten, deepen, and channelize the river to mitigate flooding. Through the 1960s, the U.S. Army Corps of Engineers accomplished this by redirecting the river into the C-38 canal. Part of the canal, visible at right here, is now being backfilled as part of the restoration. The episode illustrates how the natural state of rivers as dynamic systems is frequently at odds with societal expectations for attractive, simple, and stable rivers. Credit: JaxStrong, CC BY 2.0

Perception governs our understanding of the natural world: What we experience is what we understand to be normal. In this context, fully natural river corridors strike some people as dangerous, unnatural, and in need of restoration and management. Similarly, river corridors that have been heavily modified by humans are sometimes perceived as natural and fully functional with respect to ecosystem services. These sorts of perceptions are the greatest obstacle to understanding forgotten legacies of human alterations in river corridors. They also hinder management and restoration efforts.

Even regulators who are enthusiastic about restoring river systems are sometimes constrained by policies created to protect existing habitat and that measure the quality and value of existing conditions with assessment tools that do not account for legacy effects. Because these policies were developed on the basis of modern streams, they limit the possibilities for restoring many lost ecosystem functions.

Scientific understanding of rivers as dynamic, heterogeneous, nonlinear ecosystems is commonly at odds with societal expectations for attractive, simple, stable rivers. As the global human population continues to grow and becomes ever more urbanized, questions emerge about how much space, water, sediment, and wood rivers need to provide vital ecosystem services. And how can the necessary space and inputs be reclaimed from humans? Phrased differently, at what point in the downward spiral of river engineering, damage from natural hazards like floods, and loss of ecosystem services do we reenvision future directions and move people and infrastructure out of river corridors [e.g., Perry and Lindell, 1997]? Effectively addressing these questions requires that we understand how past human activities have modified river corridor process and form, as well as how those past alterations constrain river science and management going forward.

A North Carolina Lake’s Long Legacy of Coal Ash Spills

Mon, 07/08/2019 - 12:02

Today Sutton Lake in North Carolina, on the Cape Fear River, is known for year-round largemouth bass fishing, but the lake’s history is not pristine: The 4.5-square-kilometer lake was originally created in the 1970s as an impoundment for a coal-fired power plant.

A new study looking at sediments from the lake bottom has found evidence of repeated spills from an adjacent pile of coal ash tailings. The implications for consuming fish caught in the lake and the ecological health of the Cape Fear River are still unknown.

When Hurricane Florence made landfall on the coast of North Carolina in September 2018, it brought record high storm surges and rainfall that triggered unprecedented flooding. In the aftermath, geochemist Avner Vengosh of Duke University and colleagues were interested to see whether the storm event had mobilized coal ash into Sutton Lake.

“We thought we might see traces of coal ash in the lake bottom sediments, but the levels we found were surprisingly high,” Vengosh says.

The team used multiple lines of evidence to identify the presence of coal ash solids in the sediments, including strontium isotope ratios, heavy-metal distributions, magnetic susceptibility analyses, and visual observation of coal ash particles under the microscope, they reported in Science of the Total Environment.

But then the team also tested sediment samples collected in 2015, before Florence, and found similar levels of contamination. “It’s clear that there is a long legacy of coal ash getting into the lake and not just during the recent storm event,” Vengosh says.

Coal ash is no longer produced by the Duke Energy power plant, which was converted to natural gas in 2013, but decades’ worth of coal combustion residuals are stored on site in several impoundments and an open landfill next to the lake. The setting is typical of outdated coal ash storage facilities found all over the United States, says James Hower, a geochemist at the University of Kentucky in Lexington who was not involved in the new study.

“These impoundments were constructed to the standards of the time, but we have since learned that these standards are not adequate to contain this material long term,” Hower says.

Some of these loose piles have since been deposited into modern landfills that are lined and covered with soil and vegetation. “If properly drained and monitored, they are quite stable,” Hower says.

“The topic of Sutton Lake’s sediment is not new. We’ve shared similar sediment results going back to the mid-90s with the North Carolina Wildlife Resources Commission, so these findings are not at all surprising. We strongly disagree with the study and how it characterizes Sutton Lake. Sutton Lake’s fish are healthy, thriving and safe from coal ash impacts,” says Bill Norton, director of corporate communications for Duke Energy.

Coal Ash Contamination

But the majority of coal ash impoundments and landfills in the United States are not lined, Vengosh says. A 2019 report found that groundwater and surface water adjacent to hundreds of coal ash disposal sites are contaminated by coal ash.

Coal ash contains high levels of toxic and carcinogenic compounds that can pose ecological and human health risks if not properly contained, Vengosh says. Coal-fired power plants in the United States continue to produce about 100 million tons of coal ash a year, about half of which is stored in landfills or impoundments. Because coal-fired power plants require copious amounts of water in their operations, these plants and landfills are often located near lakes and rivers.

The other 50 million tons are used by the cement industry in its products. Despite its known toxicity, the Environmental Protection Agency has historically been reluctant to classify coal ash as a hazardous waste because doing so may affect the cement industry’s willingness to utilize it, Vengosh says. “Then the environmental impact of coal ash could be even larger.”

Sutton Lake serves as a case study that represents a much bigger problem of improper coal ash storage.

“What’s happened at Sutton Lake highlights the risk of large-scale unmonitored spills occurring at coal ash storage sites nationwide,” Vengosh says. “The risks are especially high in the Southeast where we have a large number of coal ash impoundments in flood-prone areas and tropical storms and hurricanes are common occurrences.”

Whether the coal ash is making its way downstream into the Cape Fear River in significant quantities is a potential topic for further research, Vengosh says. Several studies have detected high levels of selenium in fish caught in Sutton Lake. “The health risk for people who eat these fish has not been studied yet.”

—Mary Caperton Morton (@theblondecoyote), Science Writer

9 July 2019: This article has been updated to include a response from Duke Energy.

Changes to the Eos Scientist-Authored Submission Process

Mon, 07/08/2019 - 12:01

Five years ago, AGU transformed its news publication from a weekly tabloid mailed to members into a free-to-all, digital-first publication at Eos.org. Our website has allowed us to bring the work of Earth and space scientists directly to the public by reporting on the latest research and publishing work by scientists who explain why their work is so important and give us a behind-the-scenes look at how it all happens.

Eos’s Gold EXCEL awards for General Website Excellence and Editorial Excellence in Digital Media. Credit: Melissa Tribur

In recognition of this work, on 24 June, Eos took home two 2019 Gold EXCEL awards for Digital Media—in Editorial Excellence and General Excellence—from Association Media & Publishing. We are so proud to present articles of this caliber to our readers and eager to take the next step in working with our science authors.

Today, we’re excited to announce several changes at Eos, which I’ll describe for you, our scientist-authors, below:

- transition from manuscript submission to article proposal submission
- guided process for authors of accepted proposals
- closing of the GEMS portal for Eos
- updated content types on Eos

New Proposal Submission

With the goal of making the publishing process better for scientists, Eos is transitioning to a proposal submission process. New guidelines for authors and FAQs, which offer a better explanation of our content types, are online now.

Our proposal form is streamlined and easy to fill out. Authors should be prepared to submit the following:

- the focus of the article (100 words)
- the key points the article will make (200 words)
- why this article is important for Eos readers (200 words)

Each proposal will be reviewed for both scientific content and interest to our readers by our science advisers, experts in their fields representing each AGU section.

Authors whose proposals are accepted will be put in touch with an Eos staff editor and receive guidance on how best to approach writing their manuscript. We hope that this new process assists scientists in communicating their important work to a broader audience and reduces the editing time on resulting manuscripts.

GEMS Closes in September

Until 3 September, we will continue to accept direct manuscript submissions through GEMS; after that date the portal will close.

Manuscripts can still be submitted through our proposal submission form, though we highly encourage scientists to take advantage of our new article proposal system immediately.

Note: Once GEMS for Eos closes, scientists will still have access to their archived submissions for Eos. This change has no effect on access to AGU journals.

Content Type Changes

We’ve also updated our content types to better align with what authors are submitting.

Project Updates have been renamed Science Updates.

The Meeting Report content type has been eliminated. These reports—which must consist of insight and contextual information about discussions or results from the meeting, workshop, or conference—can now be submitted under Science Updates.

Eos’s Ongoing Mission to Share Science

Our goal, as always, is to ensure your important science reaches the worldwide science community and the science-interested public.

By making these changes, not only will we better accomplish that goal, but we’ll make the process easier and quicker—an important consideration given how many responsibilities we know scientists are balancing. On behalf of the entire Eos team at AGU and our science advisers, we look forward to engaging our readers with your critical contributions to Earth and space science.

—Heather Goss (@heathermg), Editor in Chief

An Evolutionary Leap in How We Teach Geosciences

Mon, 07/08/2019 - 12:01

The content and skills that we teach our geoscience students benefit from developments in the knowledge and practices of the many research fields within the geosciences. In much the same way, what we teach and how we teach should be informed by research on geoscience teaching and learning itself—this has always been true. However, geoscience education research is now entering a new stage and is poised to take an evolutionary leap [Arthurs, 2019]. If we make this leap successfully, teaching practice and student learning will reap the benefits.

The idea that geoscience education research (GER) should inform the ways that geosciences are taught is not new. Eos has been a venue to share several GER advances, such as how geoscientists think and learn [Kastens et al., 2009], methods for teaching geoscience to large classes [Butler et al., 2011], and helping students develop spatial reasoning skills [Davatzes et al., 2018].

One next step in this process, the AGU Council’s formation last year of an Education section, reflects the value that AGU members place on teaching and learning about Earth. Education is at the core of nurturing the next generation of diverse Earth scientists, as well as growing public understanding of how Earth science is relevant to everyday lives and how it helps address pressing problems facing humanity.

Grand Challenges for Geoscience Education Research

Recently, more than 200 geoscience educators and researchers engaged in an important community-building and research-strengthening process, which spanned a year and a half. Through a series of National Science Foundation–funded activities, they defined a set of priority research questions, or “grand challenges,” on 10 themes relevant to undergraduate geoscience education.

Grand challenge exercises have been essential steps in other geoscience communities, including tectonics research [Huntington et al., 2018] and scientific ocean drilling (Integrated Ocean Drilling Program). When done well, they enable a community to identify research needs and opportunities going forward. This is what the GER community has accomplished. The result is A Community Framework for Geoscience Education Research [St. John, 2018], a prioritized research agenda and a catalyst for action.

Geoscience Education Researchers Address Stubborn Problems

The questions guiding future research on undergraduate geoscience education span a web of topics. These include research on how students learn geoscience content, their development of workforce-necessary skills, and our ability as instructors to design and implement curricular pathways for the success of all students [St. John, 2018]. Within these areas are stubborn problems that will require action at multiple scales [St. John and McNeal, 2017], from case studies to large, multi-institutional studies, conducted by diverse teams of researchers.

Here we give three examples of stubborn problems in the modern landscape of undergraduate geoscience teaching and learning. Each example highlights needs and opportunities to tackle challenging teaching and learning questions of our time by rethinking our assumptions, drawing from new empirical data, and applying social science theories and methods to topics that matter to geoscience educators.

Overcoming Obstacles to Learning High-Stakes Science

Climate change is one of the most widespread geoscience challenges facing society today. This is an example of “high-stakes science,” which requires a broad response by a scientifically informed populace to avoid potentially costly and disastrous outcomes.

Compounding the problem and stalling possible solutions is the disconnect between the public and scientific understandings of climate change. Psychologists Weber and Stern [2011] attribute this disconnect to two fundamental factors. First, climate change is intrinsically difficult to understand because of its multiple drivers, consequences, and feedbacks, as well as its operation across different spatial and temporal scales. Second, scientists and nonscientists develop their understandings of the natural world in different ways. Scientists base conclusions and recommendations on data and logic, but for nonscientists, feelings and values often override systematic observations and measurements of climate phenomena.

It is our job as geoscience educators to break down that complexity and help students experience how science works. Social science research tells us that success likely requires learners to overcome serious obstacles [Paasche and Åkesson, 2019], such as the tentacles of fake news, groupthink, and failure to think critically [Pennycook and Rand, 2019].

Currently, there is no consensus in the social science community for overcoming these obstacles. However, GER efforts to advance climate literacy are underway [e.g., McNeal et al., 2014; Buhr Sullivan and St. John, 2014]. In particular, work by Cook et al. [2017, 2018] shows promising results for teaching students how to critically analyze claims about climate change and inoculate them against climate misinformation.

These efforts are only the start to closing the gap between the public and scientific understanding of the high-stakes science of climate change. GER is uniquely qualified to address this challenge by integrating knowledge about how the Earth system (i.e., the climate system) works with knowledge about the theories and methods of social science research.

Complex Problem Solving Isn’t What It Used to Be

When many of us were undergraduates, a “problem” was often a textbook-style exercise with a single correct answer. This answer might have been printed in the back of the textbook, and we could solve the problem using a technique that had been explicitly taught in class.

In today’s undergraduate geoscience courses, students are increasingly being asked to grapple with complex, ill-structured problems at the intersection between Earth and human systems. Such problems have no single correct answer and no predetermined pathway toward solutions. Proposed solutions may involve values or ethics as well as science and technology. Such work has been called “convergent” science because solutions for problems must be converged on from different directions. This convergence is difficult to teach and learn.

However, we must teach convergent science because geoscientists have scientific expertise and valuable perspectives needed to address a range of economic, environmental, health, and safety challenges [Aster et al., 2016]. Research is needed on how societal problems, such as confronting climate variability, ensuring sufficient supplies of clean water, and building resiliency to natural hazards, can serve as effective context for teaching and learning in the geosciences.

Some help comes from a recent review paper by Holder et al. [2017], which provides a conceptual model to guide students in solving ill-structured problems. This model, which was calibrated against 11 empirical, classroom-based research studies, identifies elements of successful guidance. These elements include real-world relevance, collaboration among problem solvers, requiring students to analyze and interpret data, and the possibility to explore more than one pathway to a solution.

Understanding how we can help students find and solve Earth-related problems that they care about in an information-rich society is a high priority for GER. One way we can make progress is by studying the problem-finding process itself. Defining authentic problems is not easy, but it is the first critical step to solving them.

Investigating this process will involve studies on how skilled geoscience problem solvers do their work, how learning occurs during problem solving, and what pedagogical approaches nudge students toward tackling problems as experts do. In addition, people from different backgrounds may perceive and prioritize different problems; therefore, inclusion is especially important in GER around problem solving and in problem-based learning.

We are beginning to explore forms of coaching and scaffolding that can help individuals struggling with these types of problem-based curricula (e.g., synthesis work by Holder et al. [2017]). This research direction is critical because we anticipate that people who learn to identify and solve convergent science problems as students will carry that skill set and habit of mind into their personal, civic, and professional adult lives.

The changing landscape of information technology (e.g., big data, emerging technologies, access to a wide variety of tools, rich multimedia) also affects the kinds and quantities of resources that are available for problem solving. Students must learn to navigate this rapidly changing space, identifying and harnessing resources (e.g., tools, data, models, experts, collaborators [Ebert-Uphoff and Deng, 2017]) that can be brought to bear on the convergent problems.

Employers articulate the importance of using data to solve problems, of learning to work on problems with no clear answers, and of managing the uncertainty associated with addressing these types of problems. However, the most effective strategies for learning how to manage and extract solutions from large data sets are not clear; therefore, this too is among the priorities for geoscience education researchers.

Learning Success and Essential Engagement

Another set of obstacles to learning is often overlooked. Imagine a situation where a student who did poorly on the last exam comes to your office and asks, “How can I better study for the next one?” You ask, “What did you do to prepare for the last exam?” The student’s response is typical of many introductory college students: “I reviewed the PowerPoint slides and looked at my textbook.”

Research from the social sciences has shown that students’ ability to reflect on what they know, what they don’t know, and what they need to do to improve is vital to the learning process, as are their emotions, attitudes, and beliefs. Research addressing these factors in a geoscience context may be the key to strengthening the foundation of our undergraduate courses.

Results of GER are promising: We have a preliminary understanding of what drives some introductory geoscience students to learn new content [van der Hoeven Kraft, 2011] and to study for exams [Lukes and McConnell, 2014]. However, we need to learn how these factors affect students’ abilities to advance from geoscience novices to geoscientifically literate citizens or to practicing geoscience experts.

In addition, the pathways and identities of students may affect their emotions, motivations, and engagement in our courses. These in turn affect the likelihood of being attracted to and thriving in the geosciences. Given how the geosciences touch the lives of all people, it should also be a field that is representative of all people, but that is not yet the case.

Because the geosciences are the least diverse field in science, technology, engineering, and math [Sidder, 2017], with little improvement in diversity over the past 4 decades [Bernard and Cooperdock, 2018], expanding underrepresented minority participation is perhaps the stubbornest problem in geoscience education. Social science theories newly applied to the geosciences [e.g., Callahan et al., 2015, 2017; Wolfe and Riggs, 2017] are likely going to be key to developing recruitment and retention strategies for implementation at the individual and programmatic levels.

What Can I Do?

Geoscience education research on these and other grand challenges identified in the framework will strengthen the geosciences as a whole by feeding back into what and how we teach. This goal rests on the assumptions that research results are effectively shared with educators and used to reform teaching practice. These actions cannot be left to chance—they will require expanding and sustaining dialogue between educators and researchers and increasing support for GER across programs and departments.

AGU members must be part of this effort. At the individual level, talk with your colleagues who do GER about questions on teaching and learning that matter to you. Invite geoscience education researchers to your department as seminar speakers. At the society level, the dialogue can be scaled up by hosting forums where educators can pose questions and talk directly to GER experts and where GER experts can ask questions of educators.

One online model for this is the Research + Practice Collaboratory, an organization that experiments with ways to support mutual cultural exchange between researchers and practitioners. Another model, which can be embedded in conferences, is the Geoscience Education Research and Practice Forum.

In addition, AGU publications can invite researchers to submit short summaries of new research findings and their practical implications for teaching topics that align to that journal’s focus. Geoscience teaching excellence is a shared goal of geoscience educators and geoscience education researchers, and now is the perfect time to engage in a disciplinary movement forward.


A Community Framework for Geoscience Education Research was supported by National Science Foundation grant DUE-1708228. With 48 authors, the framework was truly a collaborative effort. We invite readers to explore the framework in more depth online from the National Association of Geoscience Teachers. Alternatively, the complete framework, as well as individual chapters, can be downloaded from the James Madison University Library Scholarly Commons.

Red and Green Aurora Stop and Go for Different Reasons

Fri, 07/05/2019 - 11:30

Few studies have explicitly connected observed auroral forms to satellite measurements of the precipitating particles and fields. In particular, Liang et al. [2019] have now distinguished the red-line and green-line aurora, which is new and noteworthy. This identification provides direct evidence that the lower-energy electrons associated with broadband (or Alfvenic) acceleration preferentially produce red-line emission.

With these joint optical and e-POP (Enhanced-Polar-Outflow-Probe) particle observations of Alfvenic auroras, the authors show that the 630 nm red-line auroras evolve distinctly from those seen on other optical instruments and are associated with low-energy precipitation bursts. These precipitation bursts are often the result of suprathermal electron bursts, which are characterized by a broad energy spectrum and a field-aligned pitch angle distribution.

Citation: Liang, J., Shen, Y., Knudsen, D., Spanswick, E., Burchill, J., & Donovan, E. [2019]. e‐POP and red line optical observations of Alfvénic auroras. Journal of Geophysical Research: Space Physics, 124. https://doi.org/10.1029/2019JA026679

—Viviane Pierrard, Editor, JGR: Space Physics

AGU Has a Story to Tell

Fri, 07/05/2019 - 11:27

Everyone has a story to tell. From books to movies to conversations around the dinner table, we are exposed to stories every day, even if we don’t consciously realize it. But more and more folks are realizing the value and power of storytelling, especially those in the scientific community.

Storytelling Is the New “It” Thing in Communication

Humans have been using storytelling since the dawn of communication. It’s a universal trait. Storytelling is special because it has a biological component: Stories connect people on a neurological level, linking us to memories of certain times, places, and people [Shree, 2015; Liu et al., 2017].

In contemporary culture, storytelling is having its moment in the spotlight. Publications like the Atlantic and the New York Times have devoted pieces to the history and structure behind telling stories. Personal storytelling organizations like The Moth are experiencing continued growth around the country. And recently, a lot of stock has been put into the value of storytelling as a communication tool not just for entertainment but also for news and commercialism.

As a result, the definition of storytelling is highly variable. In the corporate world, it can mean telling personal stories of business success to inspire employees and build trust with customers, partners, investors, or other stakeholders [Hutchinson, 2018]. In entertainment, companies like Pixar have developed principles that guide their storytelling in a way that elicits specific feelings from their audience. Even in the science community, where the goals can vary from translating complex scientific results to solicitations that implore the listener to act, as with climate change, storytelling can take many different forms.

What Are the Pieces of a Story?

Like most forms of communication, we can break storytelling down into its basic parts.

The basic structure of a story is an arc. Borrowing from Freytag’s pyramid [Freytag, 1896], which Gustav Freytag constructed to outline the plot structure of a drama, a story has the following basic parts: introduction of the setting, inciting incident (i.e., something happens!), rising action (obstacle, obstacle, obstacle), climax, falling action, and, finally, resolution (Figure 1). The rising and falling are your pyramid shape, or, as it’s commonly referred to, the three-act arc.

Fig. 1. The story arc. Note the different steps and the change from beginning to end. Credit: Olivia V. Ambrogio

A story is more than an arc alone. Stories should be interesting; they should captivate their audiences. Stories should have people, or a personal interest, to focus on. This focus allows the audience to have a vested interest in some component of the story. There should be suspense, tension, mystery, or intrigue and a protagonist that develops alongside the introduction of these elements. As the story progresses, these changes cause the audience to ask what’s next and keep them on the edges of their seats.

A good example of an effective storyteller in the scientific realm is Rachel Carson. Carson was an ecologist and is most well known for her book Silent Spring, which describes how pesticides such as DDT were harming people and wildlife. In the introduction of her book, she doesn’t start with the scientific details, outlining the chemical composition of pesticides. She starts with a story that she calls “A Fable for Tomorrow”:

There was once a town in the heart of America where all life seemed to live in harmony with its surroundings….Then a strange blight crept over the area and everything began to change. Some evil spell had settled on the community: mysterious maladies swept the flocks of chickens; the cattle and sheep sickened and died. Everywhere was a shadow of death. The farmers spoke of much illness among their families….Children…would be stricken suddenly while at play and die within a few hours.

By telling a story, Carson is able to reel in her audience, eliciting from them the question that all scientists ask: “Why?”

A story should contain elements that cause the audience to become emotionally invested in the story and make them able to imagine the sights, sounds, tastes, and smells of what’s going on, as if it’s happening to them.

Each of these techniques pulls the audience along the arc of the story, leading them to the conclusion the teller intends to impart.

How Is AGU Telling Stories?

AGU has taken a deep dive into storytelling. For years, AGU’s Sharing Science Program has taught scientists about the value of storytelling when communicating their science through workshops, webinars, online content, and more. Recently, more teams at AGU have recognized the value of talking science via stories.

In November 2017, AGU released the first episode of its podcast Third Pod from the Sun. The podcast features scientists discussing the methods behind their work. The tagline says it all: “These are stories you won’t read in a manuscript or hear in a lecture.”

Since then, Third Pod has released over 35 episodes featuring stories about pouring lava in a parking lot in Syracuse, N.Y., the secret (and sometimes scandalous) lives of tide gauge operators, and pollution in India that turns water buffalos pink. In celebration of AGU’s Centennial, our Third Pod team has covered historical topics such as the discovery of the ozone hole and an abandoned military base leaching toxic chemicals into the environment. But most of all, they’ve featured people—scientists and advocates who help bring a human face to science. By talking to people like photographer James Balog about his unlikely path to becoming a climate advocate and researcher Renata Netto about the challenges she’s faced as a woman in science, the podcast team has focused on humanizing science and scientists through stories.

AGU has also partnered with storytelling organizations to further its science storytelling reach. At AGU’s Fall Meetings 2017 and 2018, the Sharing Science Program was delighted to host the science storytelling organization the Story Collider, which brought scientists and journalists on stage to tell true, personal stories to a live audience. The Story Collider presents these shows around the world to demonstrate that everyone has a story about science no matter their background, profession, gender, ethnicity, or other characteristic.

We’ve also launched the AGU Narratives project as part of our Centennial. The goal of Narratives is to “connect the Earth and space science community, amplify our voice, and inspire those around us” through storytelling.  Narratives encourages AGU scientists to share personal stories about what drives and inspires them and to reflect on the value and impact of Earth and space science.

Storytelling at AGU happens via several media. In addition to audio (Third Pod, Story Collider, Narratives) and written (Eos) forms, AGU members can also tell stories via photos and videos through Tumblr and Instagram. On Tumblr, scientists are encouraged to submit Postcards from the Field to show off their field sites with a short accompanying note.

We also ask members to take over our AGU Instagram account for a few days at a time in a “rotating curator” format that’s becoming increasingly popular on social media. AGU encourages participants to share their lab, field, or outreach photos and videos to show what goes into doing research and being a scientist. Similar to Third Pod from the Sun, Instagram allows scientists to share the fun parts or behind the scenes tasks that don’t always make it into a manuscript or lecture.

What Next?

Storytelling in science will continue to get more popular, and those who choose to communicate science through storytelling should be familiar with the basic tenets of what makes a story and to whom they want to tell that story. If you’re looking for inspiration or advice, AGU is a great place to start.

Hotness and Coldness Indexes Based on the Fahrenheit Scale

Fri, 07/05/2019 - 11:24

As most people know, Phoenix is hot in summer. In fact, there are about 100 days per year where the temperature in Arizona’s capital city is 100°F or hotter.

This curious observation is reminiscent of the way scientists measure the impact of their publications, the so-called h-index. A scientist has an h-index of h if he or she has h publications that have been cited h or more times; for example, Michael O’Keeffe has h = 109.

This analogy suggests that cities could be assigned a hotness index, or H-index, defined as the number of days H that the daily maximum Fahrenheit temperature is H or above.
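The definition can be sketched in a few lines of Python. This is an illustrative reimplementation with hypothetical names, not the authors’ actual project code:

```python
def hotness_index(daily_max_f):
    """Return the H-index of a temperature record: the largest H such that
    at least H days had a maximum temperature of H degrees Fahrenheit or above."""
    temps = sorted(daily_max_f, reverse=True)
    h = 0
    for rank, t in enumerate(temps, start=1):
        if t >= rank:  # the rank-th hottest day still reaches rank degrees F
            h = rank
        else:
            break
    return h

# A Phoenix-like year: 102 days reaching at least 102 F
year = [102] * 102 + [75] * 263
print(hotness_index(year))  # -> 102
```

The same ranking logic underlies the scientists’ h-index: sort the values in descending order and find the largest rank that the value at that rank still meets.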

Undergraduate Student Project

Although we are not weather scientists, finding the H-indexes of cities made an interesting undergraduate student project for Christopher Ramirez, a biophysics major.

The project required learning some Python programming, locating a suitable database of daily maximum temperatures (the National Oceanic and Atmospheric Administration (NOAA) database was appropriate for U.S. cities and for participating locations around the world), deciphering the database structure, and then patiently sifting through the data.

By the end of the semester, with all of the debugging headaches behind us, we learned that in 2018, Phoenix had 102 days for which the maximum daily temperature was at least 102°F—an H-index of 102.

Of course, we had to compare Phoenix with other cities in Arizona: Tucson was 97, Flagstaff was 79, and Yuma was 103. Death Valley, Calif., was an impressive 109, although, mercifully, there is not a city of 4.5 million people trying to live there. When one lives in Phoenix, it is hard not to feel a little bit competitive about H-indexes, and there is an urge to make a comparison with other cities in the United States and around the world.

This graph displays the 10-year average H-indexes for major cities in the United States. Values are rounded to the nearest integer. Death Valley, not a major city, is included for comparison. Credit: Christopher Ramirez

It is an interesting coincidence that the Fahrenheit scale is close to optimal for this kind of hotness measurement and is intuitively easy to understand. The Celsius scale and, certainly, the Kelvin scale do not lend themselves favorably to this kind of measure.

For a meaningful H-index to be measured, it is important that there be a continuous recording of maximum daily temperatures. In the NOAA database, airports tended to provide the most complete data. For example, at Sky Harbor International Airport in Phoenix, a clear trend shows that the H-index has been increasing by 1 about every 20 years since 1950.

Phoenix has grown significantly since then, and the replacement of farmland with concrete- and asphalt-paved structures may be a significant contributor in the rise of the H-index there: the so-called urban heat island effect. Most other locations show a similar rise.

This graph displays the H-indexes since 1950 for select international airports. Credit: Christopher Ramirez

The ranked distribution of H-indexes of cities presents an S-shaped profile. The upper and lower bounds represent the limits that people are willing to endure. Riyadh, Saudi Arabia, has the highest H-index of the major cities we examined, with an average H-index of 105.

This graph displays the average H-indexes for major cities and locations of interest around the world. Credit: Christopher Ramirez

A similar coldness index, a C-index, can be constructed by referencing the number of days below the freezing point of water. The Celsius scale may be more intuitive for this measure, but the smaller degree step size of the Fahrenheit scale provides more sensitivity. Thus, the C-index is the number of days C at which the Fahrenheit temperature relative to the freezing point of water, Tmin − 32°F, is −C or less.

Sheremetyevo Alexander S. Pushkin International Airport, which serves Moscow, Russia, has a typical C-index of 28, meaning that there are typically 28 days in the year at which Tmin is 4°F or lower, that is, 28 days where the daily lowest Fahrenheit temperature is 28°F or more below the freezing point of water. Phoenix Sky Harbor Airport has a typical C-index of zero, meaning that it rarely freezes there. The suburbs of Phoenix can have a C-index of about 2 (i.e., 2 days where the lowest temperature was 30°F or below), providing more evidence of the urban heat island effect.
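The C-index definition can also be sketched in Python, applying the same ranking idea to degrees below the freezing point. This is an illustrative reimplementation with hypothetical names, not the authors’ actual code:

```python
def coldness_index(daily_min_f):
    """Return the C-index of a temperature record: the largest C such that
    at least C days had a minimum temperature of (32 - C) degrees F or below."""
    # Degrees below the freezing point of water for each day
    deficits = sorted((32 - t for t in daily_min_f), reverse=True)
    c = 0
    for rank, d in enumerate(deficits, start=1):
        if d >= rank:  # the rank-th coldest day is still rank degrees below freezing
            c = rank
        else:
            break
    return c

# A Moscow-like year: 28 days with minima of 4 F (28 degrees below freezing)
year = [4] * 28 + [40] * 337
print(coldness_index(year))  # -> 28
```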

Examining the H- and C-indexes can quickly reveal interesting details about locations around the world. For example, Edinburgh, Scotland, has a lower H-index than Moscow (65 versus 72), and Moscow also has a significantly higher average C-index, 28 versus Edinburgh’s 5. We can see quickly that Edinburgh experiences a smaller temperature range than Moscow.

Ramirez states that both the H-index and C-index are an interesting, and possibly more relevant, method of representing the temperature component of a location’s climate compared with more traditional techniques. Instead of asking how hot or cold, these indexes ask “How consistent?” When paired with the average temperature, these indexes can provide a more complete picture of the temperature of a location over time.

—Michael M. J. Treacy (treacy@asu.edu) and Christopher N. Ramirez, Department of Physics, Arizona State University, Tempe; and Michael O’Keeffe, School of Molecular Sciences, Arizona State University, Tempe

5 July 2019: This article has been updated to insert the correct graph showing the H-indexes of select world airports.

How Diverse Observations Improve Groundwater Models

Fri, 07/05/2019 - 11:23

Groundwater may be out of human sight, but it is a vital part of the hydrologic system. One way in which scientists seek to understand the hidden processes in the subsurface is to use computer-based models that apply the laws of physics to simulate groundwater flow. A review article published in Reviews of Geophysics explores the benefit of diverse types of observations that can be used to calibrate such models. Here the authors explain some of the field-based and theoretical developments in this field.

What is the purpose of groundwater flow models?

In most cases, the primary purpose of numerical groundwater flow models is to provide robust predictions of the future behavior of groundwater. Such predictions are vital for a multitude of groundwater management questions, including the provision of safe drinking water or limiting the spread of contaminants such as micropollutants, waterborne pathogens, and antimicrobial resistance genes (Berendonk et al. [2015]).

The protection of groundwater-dependent ecosystems can also be facilitated through groundwater flow models. Groundwater flow models are thus key tools for management, policy, and research.

What are some of the different types of groundwater flow models?

Groundwater flow models vary greatly in their complexity. A simple model might, for example, simulate the flow of groundwater towards a drinking water well, while a complex model can simulate coupled groundwater and surface water flow across vastly different spatial and temporal scales.

With the latest generation of integrated flow simulators, it is now possible to simulate fully coupled groundwater and surface water systems under consideration of flow, heat and mass transport. Integrated flow models are key for sustainable basin-wide and cross-boundary management of water resources and policy making.

Why is it necessary to calibrate groundwater flow models?

Natural groundwater-surface water systems are very complex and feature heterogeneous physical properties. Our ability to include this complexity in models is limited.

Even where field-based measurements of subsurface properties are available, the complex spatial distributions of these properties are never known. Groundwater flow models, whether simple or complex, therefore always need to be calibrated against different hydrological observations in order to produce robust predictions.
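In its simplest form, calibration means adjusting unknown parameters until simulated values match observations. The toy sketch below, with hypothetical names and synthetic data and far simpler than any real integrated flow simulator, estimates a single hydraulic conductivity K by minimizing the misfit between observed specific discharge and the Darcy's law prediction q = K·i:

```python
def calibrate_conductivity(gradients, discharges, k_candidates):
    """Toy calibration: choose the hydraulic conductivity K (from a list of
    candidates) that minimizes the squared misfit between observed specific
    discharge q and the Darcy's law prediction q = K * i, where i is the
    observed hydraulic gradient."""
    def misfit(k):
        return sum((q - k * i) ** 2 for i, q in zip(gradients, discharges))
    return min(k_candidates, key=misfit)

# Synthetic observations consistent with K = 5 (e.g., m/day)
gradients = [0.01, 0.02, 0.015]
discharges = [0.05, 0.10, 0.075]
print(calibrate_conductivity(gradients, discharges, [1, 2, 5, 10, 20]))  # -> 5
```

Real calibration problems involve thousands of spatially distributed parameters and nonlinear forward models, which is exactly why the choice of observation types matters so much.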

Which hydrological observations have traditionally been used for groundwater flow model calibration?

Observations of groundwater levels and surface water discharge are usually used to calibrate flow models – and for good reason: they can be obtained quickly, in large numbers, at low cost, and at high precision. While these observations do contain valuable information, three decades of research on groundwater flow modelling have shown that using only these two classical observation types does not sufficiently constrain the unknown hydraulic properties of groundwater-surface water systems and, consequently, produces model predictions with considerable uncertainty.

Why is the calibration of models through observations challenging in the case of groundwater flow?

Two aspects of groundwater flow systems create challenges for groundwater flow model calibration. On the one hand, groundwater flow systems are characterized by spatial and temporal heterogeneity in both structural properties as well as the forcings that drive groundwater flow. The hydraulic conductivity of the subsurface, for example, can vary by many orders of magnitude on the scale of just a few meters, while forcings and states such as groundwater recharge, groundwater levels, and groundwater discharge can vary significantly within short periods of time in response to rainfall or snowmelt events, for example.

On the other hand, many of the physical equations which govern groundwater and surface water flow are non-linear. Generally, the more processes a groundwater flow model simulates, the more complex the equations which need to be solved become. As a consequence, more parameters have to be calibrated. Depending on the observation types used for flow model calibration, additional processes need to be considered in the flow model: for example, if observations of tracer concentrations are to be used for calibration, the flow model needs to include the capability of simulating mass transport. This, in turn, increases the number of parameters which need to be calibrated.

While it is often beneficial to include additional observation types, too many additional processes and parameters may result in increased uncertainty because of the increased number of calibration parameters. There thus exists a trade-off between the increased complexity which a flow model needs to address and the benefit of additional observation types. Finding the sweet spot of model complexity is challenging and depends on the specific purpose of each model.

What developments in measurement techniques have enabled more accurate observations for flow model calibration?

Developments in hydrological tracer techniques are of particular note. Stable water isotope analysis is now a standard tool employed in many hydrogeological investigations (Jasechko [2019]). High-resolution measurement technologies such as atom trap trace analysis or low-level counting now allow concentrations of the ultra-rare radioactive isotopes of krypton (81Kr and 85Kr) or of argon (37Ar and 39Ar) in groundwater to be differentiated from atmospheric background concentrations, filling important gaps in the groundwater residence time analysis toolbox (Loosli and Purtschert [2005]; Schilling et al. [2017]).

Moreover, portable measurement technologies such as portable mass spectrometers now allow simultaneous measurements of inert and reactive gases (He, Ar, Ne, Kr, N2, H2, CO2, O2, CH4) dissolved in groundwater, on-site, in near real-time and for a fraction of the cost of laboratory-based technologies (Brennwald et al. [2016]). While these on-site technologies are lower in precision compared to the high-resolution laboratory-based technologies, the ability to measure dissolved gas time series on-site and in a spatially distributed manner provides insights into the dynamics of groundwater-surface systems beyond any existing technology.

Significant advances have likewise been achieved in airborne technologies such as remote sensing and using unmanned aerial vehicles (Brunner et al. [2017]; Tang et al. [2018]): These have enabled, for example, high-resolution digital terrain models and spatially distributed observations of water temperatures, evapotranspiration, soil moisture or groundwater storage variations to be obtained even in very remote regions of the world.

Meanwhile, geophysical techniques such as ground penetrating radar or electrical resistivity tomography allow structures and hydraulic properties of the subsurface to be inferred in a non-intrusive way, dramatically reducing the necessity for large numbers of expensive boreholes to be drilled.

Recent advances in measurement technologies have therefore increased both direct information on the underlying hydraulic properties of groundwater flow systems as well as information on hydrologic system states, both of which are key information for groundwater model construction. Our review assesses the benefit of using diverse observations for groundwater flow model calibration, focusing on non-classical observation types.

The development of portable tracer technologies such as the gas-equilibrium membrane-inlet portable mass spectrometer (GE-MIMS), which allows measuring dissolved gases on-site, has greatly increased the availability of diverse hydrogeological observations. Observations such as spring discharge and spring water origins derived from these novel tracer technologies can greatly improve the robustness of groundwater flow model calibration. The GE-MIMS is shown in action here at a large groundwater spring at the foot of Mount Fuji, Japan. Credit: Oliver Schilling

What are some of the unresolved questions where additional research, data or modeling is needed?

Important current challenges in groundwater flow modeling revolve around model complexity. While including diverse observation types is generally beneficial for flow model calibration, research is still needed into how to choose the right degree of model complexity: one that optimally balances the information gained by including additional observation types against the additional unknowns that simulating additional processes introduces.

Another challenge related to model complexity is coping with the high computational demand of fully integrated groundwater-surface water flow models. While integrated flow models have the potential to yield the most insight into the behavior of real-world flow systems, they are computationally so demanding that their application to large spatial and temporal scales, or to operational forecasting and control, is still limited. Research is needed to make fully integrated flow simulators more accessible and more efficient. One interesting current research direction is running flow simulators on now widely available computational cloud infrastructure (e.g., Kurtz et al. [2017]).

While our review focuses on ways to improve the robustness of models, it is the real-world applications that drive this work. Improved predictions of groundwater flow will serve as a basis for more robust and more sustainable water resources management, including optimizing the abstraction of groundwater for drinking water, maximizing yield while minimizing negative economic and ecological impacts and limiting the propagation of contaminants through hydrologic systems.

—Oliver S. Schilling (email: oliver.schilling@flinders.edu.au) and Peter Cook, Flinders University, Australia; and Philip Brunner, University of Neuchâtel, Switzerland

Tracking Earth’s Shape Reveals Greater Polar Ice Loss

Fri, 07/05/2019 - 11:22

Earth may be called the “Blue Marble,” but it is not a perfect sphere. The planet is slightly flattened at the poles because of its rotation, and this flattening has a large effect on Earth’s gravity field. The flattening, or oblateness, can change as Earth’s crust sinks or rises according to the weight of ice sheets resting on its surface or as water from melting polar ice sheets enters the ocean.

In 2002, NASA and the German Aerospace Center launched the Gravity Recovery and Climate Experiment, or GRACE (and later the follow-on mission GRACE-FO), to track anomalies in Earth’s gravitational field and monitor the mass of ice sheets and ocean waters. But one key issue with GRACE was quickly identified: its oblateness measurements were off, leading to errors in calculated mass changes.

Organizations around the world have proposed ways to correct GRACE’s measurements. In a new study, Loomis et al. analyze existing methods and propose a solution of their own, integrating GRACE’s gravity anomaly data with a technique called satellite laser ranging (SLR).

With SLR, scientists send a laser pulse to a satellite, which reflects it back to Earth. By measuring the time the light pulse takes to return, they can precisely calculate the distance it traveled and gain valuable information about how Earth’s gravity field affects the motion of orbiting satellites. Other solutions have also integrated SLR, but this study explored various SLR data processing techniques to obtain the most accurate oblateness measurements and therefore the most accurate mass calculations.
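The round-trip timing at the heart of SLR reduces to a simple relation: the one-way range is half the round-trip light travel time multiplied by the speed of light. A minimal sketch (the pulse timing here is an illustrative value, not data from the study):

```python
# Speed of light in vacuum, m/s
C = 299_792_458.0

def slr_range(round_trip_time_s: float) -> float:
    """One-way satellite range from a laser pulse's round-trip travel time."""
    return C * round_trip_time_s / 2.0

# Illustrative: a pulse returning after 40 ms corresponds to a satellite
# roughly 6,000 km away, typical of geodetic SLR targets like LAGEOS.
distance_m = slr_range(0.040)
print(f"{distance_m / 1000:.0f} km")  # → 5996 km
```

In practice the precision comes from timing that round trip to picoseconds, which is what makes millimeter-level orbit determination possible.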

More accurate estimates mean that ice melt at the poles lines up with observed sea level rise—every part of the global sea level budget is accounted for. The researchers discovered that there has been greater ice mass loss at both poles than was previously thought. Improving the accuracy of our mass change observations is critical for improving our models and advancing our understanding of our changing planet. (Geophysical Research Letters, https://doi.org/10.1029/2019GL082929, 2019)

—Elizabeth Thompson, Freelance Writer

Antibiotics Are Flooding Earth’s Rivers

Fri, 07/05/2019 - 11:22

In a photograph, steam rises from a pile of garbage that sits by a river. The river is the Odaw, in Ghana, near the country’s capital city of Accra, and in the photo the Odaw’s water is black.

“They refer to it as a dead river,” said Alistair Boxall, an environmental chemist at the University of York in the United Kingdom.

The person who took the photo is ecologist Robert Marchant, also of the University of York. Marchant was at the river to take a sample of the water as part of a global campaign, led by Boxall, to study concentrations of pharmaceuticals in the world’s rivers. Boxall, along with his colleague John Wilkinson, coordinated an international team that sampled rivers like the Odaw.

“We wanted to try and get a better understanding of what the levels of pharmaceuticals, including antibiotics, were around the globe,” Boxall said.

Boxall and Wilkinson sent their collaborators a box that contained sample vials, syringes, filters, and freezer packs. The collaborators sampled 711 river locations in 72 countries and then sent their samples back to Wilkinson and Boxall for analysis.

What Wilkinson and Boxall found is that 470 of those sites contained antibiotics, which come from sources including human excrement and drug manufacturing activity. Many of these antibiotics occur at concentrations above what the Antimicrobial Resistance (AMR) Industry Alliance—a group of private sector companies that aims to address the threat of antibiotic-resistant bacteria—says is safe. Here “safe” means below the threshold at which the alliance says bacteria can start to develop antibiotic resistance. According to Boxall, those thresholds range from 20 to 32,000 nanograms per liter of water, depending on the antibiotic.

Doctors use antibiotics to treat a raft of bacteria-caused ailments, from tuberculosis to staph infections. Some bacteria can become antibiotic resistant when exposed to the drugs, which can make treatment next to impossible for doctors. In the United States alone, according to the Centers for Disease Control and Prevention, 23,000 people die each year from antibiotic-resistant bacterial infections.

The Kirtonkhola River in the city of Barishal, Bangladesh, was one of the sites where samples were collected for the Global Monitoring of Pharmaceuticals Project. Credit: Tapos Kormoker

At Ghana’s Odaw River, concentrations of antibiotics like metronidazole, used to treat things like skin and mouth infections, exceeded safe levels by a factor of 68. The Odaw, though, is not the worst-off river. More than 110 of the 711 sampled sites have concentrations that exceed safe levels by factors of up to 300. Rivers in Bangladesh, where concentrations hover around 40,000 nanograms per liter, are among the worst of that group.
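The exceedance factors quoted here are simply the measured concentration divided by the safe threshold for that antibiotic. A quick sketch of the arithmetic (the threshold value below is hypothetical, chosen only to show the calculation; it is not the study's figure for any specific drug):

```python
def exceedance_factor(measured_ng_per_l: float, safe_limit_ng_per_l: float) -> float:
    """How many times a measured concentration exceeds a 'safe' threshold."""
    return measured_ng_per_l / safe_limit_ng_per_l

# Illustrative only: a site measuring 40,000 ng/L against a hypothetical
# 2,000 ng/L threshold would exceed it 20-fold.
print(exceedance_factor(40_000, 2_000))  # → 20.0
```

Because thresholds span 20 to 32,000 ng/L depending on the antibiotic, the same measured concentration can be harmless for one drug and hundreds of times over the limit for another.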

“People are using these rivers to clean in, clean their clothes in. They’re sourcing their water from those sites,” said Boxall, who presented results from the campaign earlier this month at the Canadian Chemistry Conference and Exhibition in Quebec City. This means that people who use the rivers stand a greater chance of exposing themselves to the resistant bacteria.

The campaign helps address the threat that such bacteria pose by revealing what the prevalence of pharmaceuticals is in rivers around the world—something that should help prioritize which drugs merit the most attention when it comes to cleanup efforts.

Acquiring such data, however, would not have been possible without the global, standardized campaign conducted by Boxall and Wilkinson, according to environmental engineer Viviane Yargeau of McGill University in Canada, who was not involved in the study. “You know that you’re comparing apples and apples,” she said.

—Lucas Joel, Freelance Journalist

Shining a Spotlight on LGBTQ+ Visibility in STEM

Wed, 07/03/2019 - 12:09

When assistant professor Lisa Graumlich told a university administrator in the 1980s about her partner, the administrator gave her a piece of advice: “She said, ‘It’s OK to be gay, Lisa, but don’t let anyone find out,’” Graumlich recalled. The administrator held a senior role at the university where Graumlich hoped to get tenure.

Three decades later, science undergraduate Rob Ulrich grew increasingly isolated working in a laboratory in a small town, where he struggled to find people in the sciences with whom he felt connected.

“I got to that point where I was so lonely and so isolated,” Ulrich said. He considered leaving science and recalled thinking, “I need to get out of here.”

Lesbian, gay, bisexual, transgender, queer, intersex, and asexual (LGBTQ+) individuals continue to cope with inhospitable work environments, despite progress in public support for LGBTQ+ people over the past several decades. Scientists face workplace discrimination, lack of health services and infrastructure, harassment and assault, and other challenges, all of which can be compounded if they are part of other marginalized groups.

Yet according to a report released by the Institute of Physics, Royal Astronomical Society, and Royal Society of Chemistry in June 2019, one of the most basic needs of LGBTQ+ scientists is to be seen and respected for who they are. Increasing visibility is a critical area of action listed in the report.

In honor of the second annual LGBTSTEM Day on 5 July 2019, Eos dives into the issues of visibility for people of sexual and gender minorities in the sciences. How does being open at work affect scientists’ ability to perform? How does visibility affect young scientists’ growth? And how are LGBTQ+ people still missing from our data sets?

To Be Seen in the Workplace

Video: https://eos.org/wp-content/uploads/2019/07/lgbt_stemday_final.mp4 Credit: Karla Martinez/EyeEm/Getty Images

A person’s identity includes both visible and invisible aspects. Individuals with invisible and stigmatized identities, like people belonging to sexual and gender minorities, must repeatedly choose whether to reveal themselves to others in their daily life.

Choosing to be openly LGBTQ+ in the workplace can be particularly difficult in the sciences, where personal details are often thought to take a backseat to the pursuit of objective scientific truths.

Graumlich, who now serves as the dean of the College of the Environment at the University of Washington, said that building a welcoming workplace for LGBTQ+ scientists is not an abstract construct: She calls it central to science.

When Graumlich heeded the advice of her superior and hid her life from her colleagues as an assistant professor, she grew exhausted. As she withdrew into herself, dodging colleagues’ questions and staying silent during offensive jokes, she watched her scientific creativity deteriorate and her publishing rates slow.

Frustrated, Graumlich told her colleagues the truth. Soon after, her creativity rushed back to her, breathing new life into her work and helping her find her stride again while writing papers and applying for grants.

“In retrospect, I realized what a toll this had taken on me,” she said. “To me, the whole thing that I learned was that if we don’t bring our whole selves into our life and work as a scientist, something is missing.”

Although data for science, technology, engineering, and mathematics (STEM) professionals are sparse, a 2013 online survey of 1,427 scientists found that 43% kept their LGBTQ+ identity hidden from the majority of their colleagues, even though research suggests that concealing one’s identity can lead to isolation, difficulties maintaining close relationships, and social avoidance. People in the disciplines of Earth sciences, engineering, mathematics, and psychology reported that they were less open to their colleagues than those in the life sciences, social sciences, and physical sciences. People working in fields with a higher percentage of female scientists reported higher levels of openness at work.

There is ample reason for LGBTQ+ people to keep themselves hidden: In the United States, individuals can be fired on the basis of their sexual orientation in 28 states, and gender minorities lack protection in most states as well. Even in states with workplace protections, sexual and gender minorities face prejudice. Worldwide, support for LGBTQ+ people varies, and people from marginalized communities may face increased risks talking openly with their colleagues.

For some, sharing their personal lives at work may seem irrelevant, which Graumlich can understand. But for increasingly cross-disciplinary research on complex systems in the Earth and space sciences, Graumlich said, scientists must be able not only to perform at their best but to work comfortably in teams and trust one another.

“We need science and scientists to be as productive and creative and connected to each other as possible,” she said. “If we don’t feel that we can be out, then science is suffering.”

To See Ourselves in Others

For scientists at the dawn of their career, having role models can transform their picture of what’s possible. The recent initiative Queer Science at the University of Minnesota brings queer and transgender high school students into laboratories for hands-on activities. But scientists at all career levels can benefit from mentorship.

While applying to doctoral programs, Shayle Matsuda found solace in a 1-hour meeting with a senior scientist in a different field who was also transgender.

“That one conversation was extremely pivotal for me,” said Matsuda, now a Ph.D. candidate at the Hawaii Institute of Marine Biology.

The pink and blue transgender flag flies above an office building. Credit: Flickr.com/Foreign and Commonwealth Office, CC BY 2.0

“Being in an office of someone who is so successful who had gone through the same things that I had gone through” was meaningful, said Matsuda. “That moment was really powerful and really important for me in finding myself and that I belong [in STEM].”

Despite the power of mentorship, not all students are able to find support.

As Rob Ulrich worked as an undergraduate in a laboratory at his institution, he struggled to feel less alone in science. “It was sad because I loved the research there,” he said, but he couldn’t wait to finish his degree and “get out.”

Ulrich wasn’t sure if he wanted to attend graduate school but decided to apply to programs just in case, using the locations of cities that openly welcome LGBTQ+ communities as his number one priority. When he received an offer from the University of California, Los Angeles, he decided to stay in the sciences.

“I honestly don’t know if I would have gone to graduate school had I not gotten into a school that was in such a great location for queer people,” he said.

A 2018 analysis of undergraduate students at 78 U.S. institutions found that lesbian, gay, bisexual, and queer students were 7% less likely to stay in science than their heterosexual peers, and men were particularly at risk of leaving. Nearly half of all transgender scientists surveyed in the physical sciences have considered leaving their workplace, according to a separate analysis.

Bryce Hughes, the author of the 2018 study, told Eos that the most surprising finding was that students of sexual minorities still quit science even after participating in undergraduate research, one of the key retention tools for STEM.

“Undergraduate research is almost upheld as the panacea for keeping people in the science field,” Hughes said. Yet sexual minority students, who were 10% more likely to participate in undergraduate research, still left STEM at higher rates. The findings suggest that motivated and trained STEM students are opting not to continue.

Invisible Data Points

While LGBTQ+ scientists push for recognition and belonging in the workplace, education researchers labor for more available data on the experience of LGBTQ+ scientists.

Without proof that LGBTQ+ scientists are underserved or underrepresented, education researchers struggle to win grants from funding agencies to study LGBTQ+ retention and support. Researchers are stuck in a catch-22: needing more data but also needing more funding to provide that data, said Hughes.

To rally more data, 17 scientific associations and organizations wrote a letter to the National Science Foundation (NSF) in August 2018 requesting that the organization include sexual orientation and gender identity on their annual surveys of undergraduate and graduate students. The NSF obliged their request and agreed to add pilot questions in the future, most likely to their 2021 doctoral survey.

Another path to obtaining more data, Hughes said, is defining gender identity and sexual orientation as federally protected classes. Federal legislation would safeguard workers and drive organizations to collect LGBTQ+ data, just as they do for other protected classes like age, sex, race, disability, and veteran status. The Equality Act, passed by the U.S. House of Representatives in May 2019 and awaiting a Senate vote, is the most recent effort to ensure LGBTQ+ protection in law.

“If the Equality Act went through,” said Hughes, “that would impel a lot of organizations to then collect that data.”

Credit: davidf/E+/Getty Images

Some researchers are taking matters into their own hands, like the grassroots initiative Queer in STEM. The group of researchers in education and STEM fields conducted two online surveys, the most recent of which included 3,200 respondents and is still being analyzed. One of the researchers, Allison Mattheis, said that their data set stood out from others in that it included a wealth of data on gender minorities.

“From my knowledge, we have the largest percentage of respondents who identified themselves as trans or gender nonconforming or gender queer,” Mattheis said of the second survey. “We are trying to include those voices in our research.”

Online Communities Leading the Way

While researchers still push for visibility in the sciences, Graumlich reflects that progress has been made, particularly as young scientists present openly and proudly at work and online. On the first LGBTSTEM Day last year, the event’s hashtag was tweeted more than 16,000 times and spanned many countries.

“It’s a pretty stark contrast with what my experience was in my early career,” Graumlich said.

Ulrich credits the online database 500 Queer Scientists as one way he’s found community in graduate school. The site collects stories from scientists and shares them alongside pictures and personal vignettes. Started 1 year ago, it has over 900 profiles.

There are actually more than 900 participants in the online 500 Queer Scientists database. Credit: 500 Queer Scientists

Ulrich called the database instrumental in building a network of LGBTQ+ scientists all over the world.

“You’re instantly plugged into this online network of people,” he said. For other scientists looking to find community, he recommends looking online in addition to local options. “Once you do find us,” Ulrich said, “it’s so inclusive and welcoming.”

In a broader sense, Ulrich hopes that striving for equal opportunities for LGBTQ+ scientists will help STEM remake the mold of whom a scientist can be.

“We also need to break that stereotype that STEM is only objective and only your work matters,” Ulrich said. “Scientists and engineers are people too.”

—Jenessa Duncombe (@jrdscience), News Writing and Production Fellow

Limiting Factor Was a Science Opportunity for a Deep-Sea Geologist

Wed, 07/03/2019 - 12:07

When Dallas, Texas, businessman and extreme explorer Victor Vescovo set a new depth record in May, diving to 10,928 meters in the Challenger Deep area of the Mariana Trench in his submersible Limiting Factor, news outlets around the world covered the feat (along with his discovery of what appeared to be a plastic bag on the ocean floor).

But behind the headlines was a scientist: Patricia Fryer, a geologist with the Hawai‘i Institute of Geophysics and Planetology at the University of Hawai‘i at Mānoa. A deep-ocean researcher with more than 40 years of experience working around the Mariana Trench, Fryer was asked to serve as Limiting Factor’s science adviser because of her expertise in the region. She hoped the expedition would be an opportunity to collect samples from one of Earth’s most extreme environments.

“We want to know what the limits of life are,” Fryer says. “Here we have a region in which we have some of the most challenging environments on the planet.”

Fryer studies the environments around plate boundaries on the ocean floor. On previous expeditions, she’s worked with remotely operated vehicles to sample sediments and fluids released by a downgoing plate in a subduction zone, subjected to increasing pressure and temperatures as it descends into the mantle.

Fryer has been able to collect samples from seeps on the sides of seamounts, but, she says, “we didn’t have any tools to go to the deeper parts of the trench, at the greatest depths.”

Trouble Retrieving Samples

With four dives planned to the Challenger Deep and one to the Sirena Deep, the second-deepest part of the Mariana Trench, Fryer hoped Vescovo would be able to bring back samples from seeps much farther down, below 6,500 meters.

“Unfortunately, the manipulator on the sub had some difficulty on the deepest dives, so Victor was unable to get samples with the manipulator,” Fryer says. “Fortunately, however, on one very deep dive, around 10,700 meters or so, he bumped into the seafloor by accident.”

When engineers from Triton Submarines, the company that designed and built Limiting Factor, took the sub apart for maintenance, they found sediment in a battery compartment. Fryer is now studying the material. In addition to being among the deepest samples ever collected, the find is exciting because it comes from an overriding plate; previous samples came from downgoing plates.

Fryer hopes to continue working with Limiting Factor, which is the only submersible certified to travel to any depth within the ocean. “What I would like to see is an appropriate, scientifically eager group purchase the sub and continue exploring with it,” she says.

Fryer herself has experience with the deep, including a dive to 6,498 meters aboard the Japanese Shinkai 6500 submersible in 2008, and says she would love to dive in Limiting Factor.

“I feel very strongly that the human presence brings an important dimension to observation of the seafloor,” Fryer told Eos. “I love remotely operated vehicles and the fact that you can take them almost anywhere. But you can’t sit there and look to the left, and look to the right, and see an animal community, for example, reacting to the approach of a predator. You can look out the portholes of a sub and see the context of what you’re looking at.”

—Ilima Loomis (@iloomis), Freelance Writer

How Cassini Ran Rings Around Saturn and What It Helped Us Learn

Wed, 07/03/2019 - 12:06

For more than a decade, the Cassini spacecraft had perhaps the most spectacular view in the solar system.

Time after time it looped around Saturn, viewing the magnificent rings from many angles. It plotted individual ringlets, spotted waves and ripples, and discovered “propellers” and other odd features embedded in the ring system.

And then the view got better. During its final 22 orbits, Cassini dipped into the space between Saturn’s cloud tops and the inner edge of the rings, passing so close to them that it had to use its radio dish as a shield against ring particles.

This artist’s concept shows Cassini crossing inside Saturn’s ghostly D ring. Credit: NASA/JPL

That perspective produced some impressive science as well as impressive views. Cassini’s observations revealed the mass of the rings and provided better estimates of when they formed and how long they might continue.

“Prior to Cassini’s finale, there were two big unknowns, and those have now been addressed: What is the mass of the rings, and what is the mass-loss rate of the rings,” said Luke Moore, a senior research scientist at Boston University.

From those measurements, researchers concluded that we happen to be seeing Saturn’s rings in the middle of a relatively short life span, which began perhaps 100 million years ago and might last 100 million years longer.

Cassini’s ring discoveries didn’t end there, however. The craft flew through “ring rain”—a shower of particles from the rings into Saturn’s atmosphere—which allowed it to directly measure the rings’ composition. Its instruments also measured the effects of ring rain on Saturn’s atmosphere. And scientists even used the appearance of the rings to deduce the length of Saturn’s day, adding one more accomplishment to Cassini’s reconnaissance of Saturn’s rings.

Becoming “Lord of the Rings” The four main bands of Saturn’s rings are visible in one of Cassini’s final looks at the rings, taken 2 days before its demise. The A ring is the relatively dark outer band, the B ring is the brightest band, the C ring forms a darker region on the inside of the B ring, and the D ring consists of a few tenuous bands closest to Saturn. Credit: NASA/JPL-Caltech/Space Science Institute

Saturn’s main rings span about 275,000 kilometers—roughly two thirds of the distance between Earth and the Moon—although their average thickness is no more than a few tens of meters. From outside to inside, the rings are labeled A, B, C, and D, with the first three consisting of many smaller individual rings. (A few faint, low-mass rings lurk outside the A ring, but they are insignificant compared to the main bands.) A and B are wide and dense, and they contain most of the ring system’s mass.

The C ring helped astronomers solve a perplexing mystery about Saturn: the length of its day, which had been difficult to pin down.

Chris Mankovich, a graduate student at the University of California, Santa Cruz, studied waves in the C ring for his Ph.D. thesis. Those waves are generated by motions of layers several thousand kilometers below Saturn’s cloud tops.

“Mass inside the planet sloshes back and forth, and the rings ‘feel’ that through gravity,” he said.

Cassini detected the waves by measuring stellar occultations by particles in the C ring, which is much less dense than the A and B rings. The exact pattern of the waves reveals the planet’s motions, which Mankovich then used to calculate Saturn’s rotation rate: 10 hours, 33 minutes, 38 seconds.

Most of the ring research, though, was focused on the rings themselves and on their interactions with Saturn’s atmosphere. Cassini passed inside the ghostly D ring, which comes within about 6,500 kilometers of Saturn’s clouds—so close that it’s immersed in the planet’s tenuous outer atmosphere, known as the exosphere.

Cassini’s dips inside the D ring were part of the craft’s grand finale, a 5-month mission phase that ended with Cassini’s demise on 15 September 2017.

This graphic plots Cassini’s final orbits around Saturn. Credit: NASA/JPL-Caltech

As Cassini depleted its propellants, mission planners decided to end its journey by slamming it into Saturn’s atmosphere, where it would burn up. That would eliminate the risk of it hitting the moons Titan and Enceladus, which show evidence of habitability, and possibly contaminating them with terrestrial microbes that might have survived the rigors of space.

Engineers devised a flight path that would take advantage of Cassini’s end to provide new insights into Saturn and the rings. As Cassini spiraled closer to Saturn, it was set to probe the planet’s interior with greater fidelity, get a more detailed look at its cloud tops, and study the rings from a new angle.

Among the grand finale’s most important discoveries, that “inside-out” angle allowed scientists to make the best measurement yet of the mass of Saturn’s rings.

When it remained outside the rings, Cassini was pulled by the combined gravity of Saturn and the ring system, so it was difficult to isolate the tug of just the rings. When Cassini passed inside the rings, however, Saturn pulled the craft in one direction while the rings pulled in another.

Precise radio tracking revealed the rings’ gravitational effect on Cassini’s path, which allowed scientists to calculate their mass: 41% the mass of Saturn’s small moon Mimas (plus or minus 13%), or 0.05% the mass of Earth’s moon—roughly half the value of many pre–grand finale estimates.

When combined with other parameters, the mass allowed scientists to estimate the age of the rings as well.

In a paper in Science, published in January, radio science team members assumed that the rings began as almost pure water ice and have been darkened by meteoroidal material from outside the Saturn system, which falls onto the rings at a well-known rate. The darkness of the rings reveals the ratio of ice to rock, which in turn reveals their age: between 10 million and 100 million years.

“They can’t be older than about a hundred million years because they would be darker,” said Burkhard Militzer, an associate professor of Earth and planetary science at the University of California, Berkeley, and an author on the paper. “That tells us conclusively that the rings are very young. My favorite comment about this research came from a Russian website, which basically said that we found when Saturn became ‘Lord of the Rings.’”

“The age isn’t a solved problem, but it does narrow down the error bars,” said Moore. “It’s consistent with the rings of the other giant planets. They are very different environments, but the fact that we don’t see similarly large systems at Jupiter or the other planets would be consistent with a young age for Saturn’s rings, which, over time, would just sort of evaporate away.”

Ringing in the Rain

Cassini’s observations may also help scientists determine how the rings formed. One idea says they were born when Saturn’s gravity pulled apart a passing comet and the remains encircled the planet, whereas another says they formed from one or more collisions between small moons or between a moon and a comet.

Clues to the origin may come from Cassini’s direct measurements of ring material. As it plunged through the ring plane, the spacecraft detected a surprisingly heavy “rain” of organic-rich neutral particles onto the equator—enough to drain the rings away in a hurry.

Scientists had already detected infall from the rings at the planet’s midlatitudes, which Moore describes as “classic ring rain” because it was the first ring rainfall to be described. It was predicted in the 1980s on the basis of Voyager measurements of the ionosphere: scientists noted an unexpected dip in ionospheric electron density at certain latitudes, as well as dark zones in the clouds.

The composition of the ring rain detected by Cassini. Credit: NASA/JPL/SwRI

Micrometeorites may be striking the inner portion of the B ring, creating a plasma. Some of the plasma particles follow Saturn’s magnetic field lines toward the planet, falling into the atmosphere along the observed latitude bands. The particles combine with electrons in the ionosphere, reducing the electron density at those latitudes. The rain also clears out high-altitude haze, allowing us to see deeper into the atmosphere, producing the dark zones.

A study published earlier this year in Icarus, based on a reanalysis of Keck Telescope observations from 2011, confirmed the infall in bands around 45°N and 39°S latitude. The study says that this process is delivering roughly 432 to 2,870 kilograms of water to Saturn’s midlatitudes every second, enough to drain the rings in about 300 million years.
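The ~300-million-year figure follows directly from dividing the total ring mass by the infall rate. A minimal sketch of that arithmetic, assuming the Cassini-derived total ring mass of about 1.54 × 10¹⁹ kilograms (0.41 Mimas masses, from the radio science result above):

```python
# Back-of-the-envelope ring lifetime from the midlatitude "ring rain" rates
# quoted in the Icarus study. The total ring mass is an assumed input taken
# from the Cassini radio-science measurement (0.41 Mimas masses).

SECONDS_PER_YEAR = 3.156e7
RING_MASS_KG = 1.54e19  # assumption: Cassini-derived total ring mass

for rate_kg_s in (432.0, 2870.0):
    lifetime_myr = RING_MASS_KG / rate_kg_s / SECONDS_PER_YEAR / 1e6
    print(f"at {rate_kg_s:6.0f} kg/s, the rings drain in ~{lifetime_myr:5.0f} Myr")
```

At the midpoint of the quoted range (~1,650 kilograms per second), this gives roughly 300 million years, consistent with the study's estimate; the low and high rates bracket it between a few hundred million and about a billion years.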

Cassini confirmed the ground-based findings. But it also found that this rainfall may be a mere shower compared to the influx at Saturn’s equator.

The Cosmic Dust Analyzer, for example, found that the region between the D ring and the outer atmosphere is filled with a fairly constant population of grains of water ice and silicates that are a few tens of nanometers across.

The Magnetospheric Imaging Instrument (MIMI), which was designed to measure energetic neutral atoms, ions, and electrons, detected even smaller grains (comparable to the size of smoke particles) at altitudes of about 1,700 to 3,000 kilometers above the planet’s equator.

“There was no expectation of dust right at the equator, so the fact that Cassini found a population concentrated there was quite a surprise,” said Don Mitchell, a staff physicist at the Johns Hopkins University Applied Physics Laboratory in Laurel, Md., and MIMI principal investigator. “We talked to the dust guys, and they didn’t believe us the first three or four times we told them. They eventually came around and realized it is happening.”

The grains probably come from D68, the innermost ringlet in the D ring. Collisions with hydrogen atoms in Saturn’s exosphere strike the dust grains, which then spiral into the planet’s atmosphere in just a few hours. The observed population suggests that about 5 kilograms of these particles enter Saturn’s atmosphere per second.

Breaking Up Isn’t Hard to Do

Perhaps the most intriguing results, though, came from the Ion and Neutral Mass Spectrometer (INMS), an instrument that determined the composition and structure of ions and neutral particles. It was designed to study Titan and Saturn’s magnetosphere, where particles would impact at speeds of about 6 kilometers per second. For the grand finale, however, Cassini reached peak velocities of more than 30 kilometers per second, pushing the instruments in ways scientists had never anticipated.

Particles from Saturn’s upper atmosphere entered an antechamber, then traveled to detectors where they were filtered and counted. At such high velocities, however, larger particles slammed into the instrument hard enough to be dashed to bits, so complex molecules likely shattered into smaller fragments before being counted.

“We were worried about the fragmentation, but it seemed to work out okay in ways we don’t yet completely understand,” said J. Hunter Waite, a program director at the Southwest Research Institute in San Antonio, Texas, and principal investigator for INMS. “It complicated things in some ways, but in others it was our best friend, because it allowed us to see some large molecules we couldn’t have seen if they hadn’t broken apart.”

INMS detected a mélange of elements and compounds, many of which were unexpected, including methane (16% of the sample by mass), ammonia, carbon monoxide, carbon dioxide, molecular nitrogen, and the fragments of heavier organic compounds (37%). Almost all of them were falling from the inner D ring to an 8° strip centered on the equator.

“Water is present, but it doesn’t seem to dominate,” said Moore. “That was a big surprise because when you look at the rings spectroscopically, they’re 90-something percent water ice.”

The infalling material may alter Saturn’s atmospheric chemistry. One study, for example, found that an odd overabundance of methane observed in Saturn’s atmosphere could be explained if the current rate of infall had been sustained over the lifetime of the rings. The methane would enter at the equator and then be distributed around the rest of the planet over millions of years.

A Hundred Million Years to Go

Instrument scientists estimated that a total of 4,500 to 48,000 kilograms of material are raining from the D ring into the atmosphere every second. “The D ring would go away in 10,000 to 70,000 years at the rate of infall right now,” said Waite, with the C ring persisting no more than a few million years.
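The quoted equatorial infall rates and D ring lifetimes are mutually consistent: multiplying a rate by its corresponding lifetime implies a D ring mass of order 10¹⁶ kilograms. A quick consistency check using only the numbers quoted in the article (pairing the slow rate with the long lifetime and the fast rate with the short one):

```python
# Implied D ring mass from the quoted infall rates (4,500-48,000 kg/s)
# and lifetimes (10,000-70,000 years): mass = rate x lifetime.

SECONDS_PER_YEAR = 3.156e7

implied_low = 4.5e3 * 7.0e4 * SECONDS_PER_YEAR   # slow rate, long lifetime
implied_high = 4.8e4 * 1.0e4 * SECONDS_PER_YEAR  # fast rate, short lifetime

print(f"implied D ring mass: {implied_low:.1e} to {implied_high:.1e} kg")
```

Both pairings land near 10¹⁶ kilograms, roughly a thousandth of the total ring mass, which is why the D ring's lifetime is quoted in tens of thousands of years while the system as a whole survives for tens of millions.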

But Cassini detected variations in the rate by both longitude and time, suggesting that the rate isn’t steady, so Cassini might have been studying the rings at a time of unusually heavy infall. In recent decades, in fact, large meteoroids may have fractured upon impact with the rings, supplying fresh material.

“Part of the D ring brightened considerably in 2015,” said Mitchell. “The thinking is that a couple of fairly good sized chunks of material collided or else a meteoroid came in and collided with a chunk of material in the D ring.”

Cassini allowed astronomers to see hundreds of individual ringlets. Credit: NASA/JPL-Caltech/Space Science Institute

In addition, “there are clear indications of episodic transfer of material from the C ring to the D ring,” Waite said. But he said that there are no known ways to transfer material from the A and B rings, which contain the vast majority of the ring system’s mass, to sustain the lighter inner rings.

That balance of infalling ring material versus additions to the ring system from comets and meteoroids leaves the fate of the rings a bit uncertain, although Mitchell said that they probably won’t live that much longer—by astronomical standards.

“Originally, it was estimated that the rings were born four and a half billion years ago and stayed pretty stable,” Mitchell said. “But it looks like they’re fairly recent. And if you calculate all the losses from the rings, they probably won’t be around for more than another hundred million years or so, which is much shorter than the lifetime of the solar system. So that argument is pretty much settled.”

“Perhaps we should appreciate the rings a little bit more the next time we look at them through a telescope,” said Militzer with a chuckle. “This is a special moment—the rings won’t be around forever.”

—Damond Benningfield (damonddb@aol.com), Science Writer

Laboratory Study Probes Triggering Mechanisms of Earthquakes

Wed, 07/03/2019 - 11:30

The normal stress on a fault surface can change during an earthquake, and such changes may influence the frictional strength of the activated fault. However, how surface friction responds to rapid increases or decreases in normal stress is poorly understood. Shreedharan et al. [2019] investigated the effect of normal stress changes on shear friction in laboratory experiments to better understand this relationship.

Experiments were conducted by sliding blocks of granite against each other while monitoring the shear stress response and the degree of contact between the surfaces during sliding. The contact between the blocks was monitored in situ using ultrasonic waves. The authors found a strong correlation between the amplitude of the ultrasonic waves and the applied normal stress, which they used to propose a model that accounts for the effect of normal stress perturbations on the contact behavior between sliding rock surfaces. The ultrasonic amplitude also tracked the evolution of surface friction during shear after a change in normal stress.

The authors use a micromechanical multicontact model to qualitatively explain the evolution of frictional strength as a function of normal stress changes, improving our understanding of the triggering mechanisms of earthquakes.

Citation: Shreedharan, S., Rivière, J., Bhattacharya, P., & Marone, C. [2019]. Frictional state evolution during normal stress perturbations probed with ultrasonic waves. Journal of Geophysical Research: Solid Earth, 124. https://doi.org/10.1029/2018JB016885

—Bjarne S. G. Almqvist, Associate Editor, JGR: Solid Earth
