EOS

Science News by AGU

How Hospitals Can Cope with Wildfires

Wed, 06/23/2021 - 13:48

This is an authorized translation of an Eos article.

Wildfires are becoming more severe and more frequent. Plumes of smoke more dangerous than urban pollution spread across entire continents, carrying inhalable particulate matter, causing smoke-related deaths, and worsening a range of medical conditions far from where the fires actually burn.

In a new study, Sorensen et al. compared concentrations of inhalable smoke particles with admissions to local hospital intensive care units (ICUs) by ZIP code. They found a small but measurable rise in ICU admissions 5 days after smoke particle levels increased in an area.
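
A minimal sketch of the kind of lagged comparison described above, assuming hypothetical daily data for a single ZIP code; it is illustrative only and is not the authors' statistical model.

```python
# Illustrative only: look for a 5-day lagged association between smoke
# particulate levels and ICU admissions. Values and column names are made up.
import pandas as pd

df = pd.DataFrame({
    "pm_smoke": [8, 9, 40, 85, 60, 30, 12, 10, 9, 8, 7, 9],      # µg/m³
    "icu_admissions": [3, 2, 3, 3, 2, 4, 3, 5, 6, 4, 3, 2],
})

# Align each day's admissions with the smoke concentration 5 days earlier
df["pm_smoke_lag5"] = df["pm_smoke"].shift(5)
print(df[["pm_smoke_lag5", "icu_admissions"]].dropna().corr().iloc[0, 1])
```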

The researchers then modeled a severe, weeklong smoke scenario. Under those conditions, ICU admissions were projected to rise by 131%, enough to overwhelm ICU capacity, especially at smaller hospitals with fewer resources.

Because they care for critically ill patients whose lives are at risk, ICUs are especially resource intensive and must keep the equipment needed to monitor and support each patient's fragile organ systems available at all times. ICUs typically maintain a one-to-one nurse-to-patient ratio, for example, and the urgency of care means hospitals must have rapid transportation between facilities. A surge in ICU admissions driven by smoke particles could therefore pull resources away from other hospitalized patients. When resources are spread too thin, hospitals cannot meet patients' needs, and care may suffer.

From hospital records, the authors found that younger asthma patients, who respond quickly to smoke pollution, were admitted to the ICU immediately after exposure, whereas older patients with cardiovascular disease tended to be admitted after a delay. Prolonged, severe smoke is most dangerous for children, whose lungs are affected soon after exposure, meaning hospitals may not have enough time to secure additional resources. In addition, because pediatric ICUs are scarcer, children from across a large region are often sent to the same medical center.

Scientists predict that climate change will bring more frequent and more intense wildfires. Fortunately, current systems can forecast inhalable smoke emissions 2 days in advance at fine spatial resolution. With the right systems in place, hospitals could use that information to allocate resources more effectively and to warn areas far from a fire that may not realize they are at risk. (GeoHealth, https://doi.org/10.1029/2021GH000385, 2021)

—Elizabeth Thompson, Science Writer

This translation was made by Wiley.


The Possible Evolution of an Exoplanet’s Atmosphere

Wed, 06/23/2021 - 13:47

Researchers have long been curious about how atmospheres on rocky exoplanets might evolve. The evolution of our own atmosphere is one model: Earth's primordial atmosphere was rich in hydrogen and helium, but our planet's gravitational grip was too weak to prevent these lightest of elements from escaping into space. Researchers want to know whether the atmospheres on Earth-like exoplanets experience a similar evolution.

By analyzing spectroscopic data taken by the Hubble Space Telescope, Mark Swain and his team were able to describe one scenario for atmospheric evolution on Gliese 1132 b (GJ 1132 b), a rocky exoplanet similar in size and density to Earth. In a new study published in the Astronomical Journal, Swain and his colleagues suggest that GJ 1132 b has restored its hydrogen-rich atmosphere after having lost it early in the exoplanet’s history.

“Small terrestrial planets, where we might find life outside of our solar system, are profoundly impacted by atmosphere loss,” said Swain, a research scientist at the NASA Jet Propulsion Laboratory (JPL) in Pasadena, Calif. “We have no idea how common atmospheric restoration is, but it is going to be important in the long-term study of potential habitable worlds.”

The Atmosphere Conundrum

GJ 1132 b closely orbits the red dwarf Gliese 1132, about 40 light-years away from Earth in the constellation Vela. Using Hubble’s Wide Field Camera 3, Swain and his team gathered transmission spectrum data as the planet transited in front of the star four times. They checked for the presence of an atmosphere with a tool called Exoplanet Calibration Bayesian Unified Retrieval Pipeline (EXCALIBUR). To their surprise, they detected an atmosphere on GJ 1132 b—one with a remarkable composition.

“Atmosphere can come back, but we were not expecting to find the second atmosphere rich in hydrogen,” said Raissa Estrela, a postdoctoral fellow at JPL and a contributing author on the paper. “We expected a heavier atmosphere, like the nitrogen-rich one on Earth.”

To explain the presence of hydrogen in the atmosphere, researchers considered the evolution of the exoplanet’s surface, including possible volcanic activity. Like early Earth, GJ 1132 b was likely initially covered by magma. As such planets age and cool, denser substances sink down to the core and mantle and lighter substances solidify as crust and create a rocky surface.

Swain and his team proposed that a portion of GJ 1132 b’s primordial atmosphere, rather than being lost to space, was absorbed by its magmatic sea before the exoplanet’s interior differentiated. As the planet aged, its thin crust would have acted as a cap on the hydrogen-infused mantle below. If tidal heating prevented the mantle from crystallizing, the trapped hydrogen would escape slowly through the crust and continually resupply the emerging atmosphere.

“This may be the first paper that explores an observational connection between the atmosphere of a rocky exoplanet and some of the [contributing] geologic processes,” said Swain. “We were able to make a statement that there is outgassing [that has been] more or less ongoing because the atmosphere is not sustainable. It requires replenishment.”

The Hydrogen Controversy

Not everyone agrees.

“I find the idea of a hydrogen-dominated atmosphere to be a really implausible story,” said Raymond Pierrehumbert, Halley Professor of Physics at the University of Oxford in the United Kingdom, who did not contribute to the study.

Pierrehumbert pointed to a preprint article from a team of scientists led by Lorenzo V. Mugnai, a Ph.D. student in astrophysics at Sapienza University of Rome. Mugnai’s team examined the same data from GJ 1132 b as Swain’s did, but did not identify a hydrogen-rich atmosphere.

According to Pierrehumbert, the devil is in the details of how the data were analyzed. Most notably, Mugnai’s team used different software (Iraclis) to analyze the Hubble transit data. Later, Mugnai and his group repeated their analysis using another set of tools (Calibration of Transit Spectroscopy Using Causal Data, or CASCADe) when they saw how profoundly different their findings were.

“We used two different software programs to analyze the space telescope data,” said Mugnai. “Both of them lead us to the same answer; it’s different from the one found in [Swain’s] work.”

Another preprint article, by a team led by University of Colorado graduate student Jessica Libby-Roberts, supported Mugnai’s findings. That study, which also used the Iraclis pipeline, ruled out the presence of a cloud-free, hydrogen- or helium-dominated atmosphere on GJ 1132 b. The analysis did not negate an atmosphere on the planet, just one detectable by Hubble (i.e., hydrogen-rich). This group proposed a secondary atmosphere with a high metallicity (similar to Venus), an oxygen-dominated atmosphere, or perhaps no atmosphere at all.

Constructive Conflict

The research groups led by Swain and Mugnai have engaged in constructive conversations to identify the reason for the differences, specifically why the EXCALIBUR, Iraclis, and CASCADe software pipelines are producing such different results.

“We are very proud and happy of this collaboration,” said Mugnai. “It’s proof of how different results can be used to learn more from each other and help the growth of [the entire] scientific community.”

“I think both [of our] teams are really motivated by a desire to understand what’s going on,” said Swain.

The Telescope of the Future

According to Pierrehumbert, the James Webb Space Telescope (JWST) may offer a solution to this quandary. JWST will allow for the detection of atmospheres with higher molecular weights, like the nitrogen-dominated atmosphere on Earth. If GJ 1132 b lacks an atmosphere, JWST's infrared capabilities may even allow scientists to observe the planet's surface. "If there are magma pools or volcanism going on, those areas will be hotter," Swain explained in a statement. "That will generate more emission, and so they'll be looking potentially at the actual geologic activity—which is exciting!"

GJ 1132 b is slated for two observational passes when JWST comes online. Kevin Stevenson, a staff astronomer at Johns Hopkins Applied Physics Laboratory, and Jacob Lustig-Yaeger, a postdoctoral fellow there, will lead the teams.

“Every rocky exoplanet is a world of possibilities,” said Lustig-Yaeger. “JWST is expected to provide the first opportunity to search for signs of habitability and biosignatures in the atmospheres of potentially habitable exoplanets. We are on the brink of beginning to answer [many of] these questions.”

—Stacy Kish (@StacyWKish), Science Writer

Better Subseasonal-to-Seasonal Forecasts for Water Management

Wed, 06/23/2021 - 13:45

California experiences the largest year-to-year swings in wintertime precipitation (relative to its average conditions) in the United States, along with considerable swings within a given water year (1 October to 30 September). For example, 1977 was one of the driest years on record, whereas 1978 was one of the wettest. In December 2012, California was on pace for its wettest year on record, but starting in January 2013, the next 14 months were drier than any period of the entire 100-year observational record.

The considerable variability of precipitation within given water years and from year to year poses a major challenge to providing skillful long-range precipitation forecasts. This challenge, coupled with precipitation extremes at both ends of the spectrum—extremes that are projected to increase across the state through the 21st century as a result of climate change—greatly complicates smart management of water resources, upon which tens of millions of residents rely.

The predictive skill of long-range precipitation forecasts in this region has historically been weak, meaning scientists have not been able to aid state and local water managers with reliable forecasts of precipitation and drought for lead times longer than a week or two. The marginal success that forecasters have had to date in predicting winter season rainfall deviations, or anomalies, in California has been tied to the state of the El Niño–Southern Oscillation (ENSO). Yet ENSO explains only a fraction of the historical year-to-year variation in precipitation over California [e.g., DeFlorio et al., 2013], and many predictability studies have been limited by observational records that capture only a handful of large ENSO events. Further limitations have been imposed by climate models that did not have accurate enough representations of ENSO and its associated impacts on California's weather and climate.

These weaknesses have hindered long-range planning and sometimes resulted in reactive or less-than-optimal management decisions. Now, however, California and other states stand to benefit in many ways from emerging research methods that have the potential to improve the skill of subseasonal (2- to 6-week) to seasonal (2- to 6-month) precipitation forecasts. Such forecasts could help, for example, in managing state water supplies during winters with periods of prolonged drought. Long-lasting drought conditions present unique challenges, such as the necessity for drought response activation at the state level.

A cow stands near a dry watering hole on a California ranch during drought conditions in 2014. Improved subseasonal-to-seasonal weather forecasts could benefit agriculture and ranching, among other sectors. Credit: U.S. Department of Agriculture photo by Cynthia Mendoza, CC BY 2.0

Responding to the substantial demand from end users, including water managers, the international research community has been increasingly focused in recent years on improving forecast skill and quantifying forecast uncertainty on subseasonal-to-seasonal (S2S) timescales [National Academies of Sciences, Engineering, and Medicine, 2016; Vitart et al., 2017]. Several collaborative efforts within the applied research community have detailed the potential value of S2S forecasts to a variety of end users, including (but not limited to) water resource management. Additional end user sectors that stand to benefit from improved S2S forecasts include agriculture, insurance and reinsurance, and commodities trading [Mariotti et al., 2020; Merryfield et al., 2020].

Stakeholder Needs Drive Investments in S2S Forecasting

Worldwide, the focus on S2S forecasting is steadily increasing. This impetus is represented in the World Meteorological Organization’s World Weather Research Programme (WWRP) and the S2S Prediction Project under the World Climate Research Programme (WCRP). Nationally, the U.S. Weather Research and Forecasting Innovation Act of 2017 (Public Law 115-25) mandated that NOAA improve S2S forecasts to benefit society.

Accordingly, NOAA’s Modeling, Analysis, Predictions and Projections (MAPP) program has led the development of the Subseasonal Experiment (SubX) over the past several years. This effort aims to improve subseasonal prediction of precipitation and other climate variables and to provide a public data set for the research community to explore in predictability studies [Pegion et al., 2019].

Separately, since 2017, the California Department of Water Resources (CDWR) has funded a partnership to improve S2S prediction of precipitation over the western United States, with a particular focus on California. This partnership includes the Center for Western Weather and Water Extremes (CW3E), the NASA Jet Propulsion Laboratory (JPL), and other institutional collaborators. CDWR’s motivation is largely to support drought preparedness—as long ago as California’s 1976–1977 drought, state water managers recognized that the skill of available operational seasonal precipitation forecasts was insufficient for decisionmaking.

The objective of the CW3E-JPL partnership is to provide water resource managers in the western United States with new experimental tools for S2S precipitation forecasting. One such tool, for example, addresses atmospheric rivers [Ralph et al., 2018], or ARs (e.g., the Pineapple Express, one "flavor" of AR), and ridging events (elongated areas of high atmospheric pressure) [Gibson et al., 2020a], both of which strongly affect wintertime precipitation over the western United States [e.g., Guan et al., 2013; Ralph et al., 2019].

The efforts of the CW3E-JPL team are also a part of the S2S Prediction Project’s Real-Time Pilot Initiative. This initiative includes 16 international research groups, each of which is using real-time forecast data from particular modeling centers, along with the S2S Prediction Project’s hindcast database for applied research efforts with a specific end user. Examples of end users participating in this project include the Kenya National Drought Management Authority, the Italian Civil Protection Department, and the Agriculture and Food Stakeholder Network of Australia’s Commonwealth Scientific and Industrial Research Organisation.

The S2S research and development effort described here is the only project in the pilot initiative that is focused on water in the western United States, and it is helping raise the visibility of the needs of western U.S. water resource managers among the international applied science community.

Different Decisions Require Different Lead Times

Water management in California and across the western United States is a challenging and dynamic operation. In addition to the fundamental influence of rainfall and snowfall in determining water supply, water management is affected by many political and socioeconomic considerations. Such considerations in water management include public health and safety minimum supply requirements for the population, which are particularly relevant during extreme drought conditions. Another consideration relevant during less extreme drought times is the prioritization of water use when there is an insufficient amount of resources to meet all objectives (balancing use for fisheries, agriculture, municipalities, etc.).

Effective management of water supply across the region requires different information at different lead times, in part because a variety of atmospheric and oceanic phenomena influence precipitation over these different timescales (Figure 1).

Fig. 1. Lead times for water management decision support needs vary over daily to decadal/century timescales, as do physical processes that affect the predictability of precipitation over the western United States.

Weather information provided over shorter lead times provides intelligence for operational decisions regarding flood risk management, emergency response, and situational awareness of potential hazards. Precipitation anomalies on the timescales of weather across the western United States are dominated by the presence or absence of ARs and ridging events. ARs are associated with bringing precipitation to the western United States. They can be beneficial or hazardous from a water management perspective, depending on AR intensity, duration, and antecedent drought conditions [Ralph et al., 2019]. Ridging events are areas of extensive high atmospheric pressure anomalies in the midtroposphere. Several different ridge types have been historically linked to drought over California [Gibson et al., 2020a].

Forecasts with lead times of weeks or months are more useful for decisions about asset positioning or about operational plans that can be adapted to weather outcomes as they happen. For example, state regulations associated with the California State Water Project limit water transfer amounts across the Sacramento–San Joaquin Delta. These water transfers occur because most of California's water supply originates north of the delta, while most of the demand is south of the delta. Development of water resources infrastructure over the past century has made use of natural waterways to move water from the supply-rich region to the demand centers. Regulatory limits on water transfer could be better supported if we had improved precipitation forecasts with a lead time of weeks to months.

In addition, hydropower systems that have a chain of reservoirs could leverage better S2S forecasts to maximize the value gained from knowing which reservoirs are at capacity and which are running low at any given time. Precipitation anomalies on these timescales are influenced by both ARs and ridging, as well as by variations in the magnitude and phase of ENSO and of the Madden–Julian Oscillation, a tropical atmospheric disturbance that travels around the planet every 1–2 months.

On seasonal to annual scales, forecasts aid decisionmaking with respect to resourcing and budgeting that allow water managers to be prepared to respond to weather extremes, or to adopt more costly response packages that may involve legal review components such as environmental review or concurrence with regulatory mandates. Precipitation anomalies at these lead times can be influenced by ENSO and the quasi-biennial oscillation, a quasiperiodic oscillation of equatorial zonal wind anomalies in the stratosphere.

Beyond those scales, longer-term projections of climate change are used for planning adaptation and mitigation strategies. Identifying change thresholds in average precipitation or precipitation extremes can be used as triggers for implementing these strategies, which may require negotiated legislation or longer-term investment strategies.

A key goal of CDWR’s investment in near-term experimental forecasting products is to catalyze improvements in precipitation forecasting to fully implement the S2S requirement of Public Law 115-25. The need for such improvements was highlighted in the National Weather Service’s first-ever service assessment for drought, which summarized California’s drought in 2014 and stated, “A majority of the stakeholders interviewed for this assessment noted one of the best services NOAA could provide is improved seasonal predictions with increased confidence and better interpretation.”

Emerging Technologies Provide New Capabilities

In response to the substantial need in the western U.S. water management community for better S2S precipitation forecasts, CW3E and JPL have developed a suite of research projects using emerging technical methods (Figure 2). For example, deep neural network techniques in combination with large ensemble climate model simulations will support the creation of experimental S2S forecast products for the region.

These products combine both dynamical model output from the S2S database and novel statistical techniques, including machine learning methods applied to large ensemble data sets and mathematical methods for discovering patterns and associations, such as extended empirical orthogonal function analysis and canonical correlation analysis. The experimental forecast tools are supported by peer-reviewed hindcast assessments, which test the skill of a model by having it “predict” known events in the past. There is a particular focus on applying these emerging methods to longer lead times, ranging from 1 to 6 months, over the broad western U.S. region.
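
As a rough illustration of the statistical side of such a pipeline, the sketch below applies an EOF decomposition and canonical correlation analysis to synthetic predictor and precipitation anomalies. The array shapes, variable names, and random data are placeholders, not the CW3E-JPL products.

```python
# Illustrative sketch: reduce a large set of predictor fields (e.g., geopotential
# height anomalies) to a few EOF patterns, then relate them to precipitation
# anomalies with canonical correlation analysis (CCA).
import numpy as np
from sklearn.cross_decomposition import CCA

rng = np.random.default_rng(0)
n_samples, n_gridpoints = 400, 2000     # forecasts x predictor grid cells
n_targets = 50                          # precipitation grid cells (hypothetical)

z500 = rng.standard_normal((n_samples, n_gridpoints))    # predictor anomalies
precip = rng.standard_normal((n_samples, n_targets))     # target anomalies

# EOF analysis: leading spatial patterns of the predictor via SVD
z_centered = z500 - z500.mean(axis=0)
U, s, Vt = np.linalg.svd(z_centered, full_matrices=False)
n_modes = 10
pcs = U[:, :n_modes] * s[:n_modes]      # principal-component time series

# Canonical correlation between leading PCs and precipitation anomalies
cca = CCA(n_components=3)
pc_canon, precip_canon = cca.fit_transform(pcs, precip - precip.mean(axis=0))
print("Canonical correlations:",
      [np.corrcoef(pc_canon[:, i], precip_canon[:, i])[0, 1] for i in range(3)])
```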

Fig. 2. Quantities of interest, methods, and lead times investigated by the Center for Western Weather and Water Extremes/Jet Propulsion Laboratory S2S team to benefit water management in the western United States.

Critically, stakeholders at CDWR involved in water supply forecasting, reservoir operations, and interactions with governance for drought response provide not only funding but also direct input on the design of both research methodologies and the accompanying experimental forecast products. This research and operations partnership exemplifies an efficient applied research pipeline: End users of the forecast products ensure that the research supporting the products is designed and implemented in ways that will be useful to meet their needs, while at the same time, scientific peer review assures these end users of the forecasts’ scientific rigor.

Recently, this partnership has yielded two primary new products that are now available online and are focused on forecasting the odds of wet or dry conditions in coming weeks across the western United States. Each of these methods has been described in detail in formal publications that include quantification of their skill and reliability [DeFlorio et al., 2019a, 2019b; Gibson et al., 2020a, 2020b].

As weather across California and the U.S. West becomes increasingly variable and more difficult to prepare for, new science-based research and operations partnerships like these and others (e.g., Forecast Informed Reservoir Operations, which has supported better water supply management through skillful short-range forecasts of ARs and precipitation [Jasperse et al., 2020]) are offering enhanced abilities to see weeks and months into the future, a vital benefit for water management across the region.

The Wildfire One-Two: First the Burn, Then the Landslides

Tue, 06/22/2021 - 12:26

After the record-breaking 2020 wildfire season in California, the charred landscapes throughout the state faced elevated risks of landslides and other postfire hazards. Wildfires burn away the plant canopy and leaf litter on the ground, leaving behind soil stripped of much of its capacity to absorb moisture. As a result, even unassuming rains pose a risk for substantial surface runoff in the state’s mountainous terrain.

California has a history of fatal landslides, and the steep, burned hillsides are susceptible to flash flooding and debris flows. Fire-prone regions in the state rely on rainfall thresholds to anticipate the conditions under which postfire debris flows are more likely.

In a new study, Thomas et al. combined satellite data and hydrologic modeling to develop a predictive framework for landslides. The framework uses inputs such as vegetation reflectance and soil texture, together with physics-based simulation of water infiltration into the soil, to model the hydrologic conditions that trigger landslides. The output offers thresholds to monitor the probability of landslides in the years after a burn.
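
For readers unfamiliar with rainfall thresholds, the snippet below shows the general shape of an intensity-duration threshold check; the coefficients are placeholders and are not those derived by Thomas et al.

```python
# Minimal illustrative sketch of a rainfall intensity-duration threshold check
# for postfire debris flow warning. Coefficients a and b are placeholders.
def exceeds_threshold(intensity_mm_per_hr, duration_hr, a=10.0, b=-0.5):
    """Return True if rainfall intensity exceeds the threshold I = a * D**b."""
    return intensity_mm_per_hr > a * duration_hr ** b

# Example: a 30-minute burst at 24 mm/hr over recently burned terrain
print(exceeds_threshold(24.0, 0.5))   # True with these placeholder coefficients
```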

The researchers tested their model against postwildfire soil moisture and debris flow observations from the San Gabriel Mountains in Southern California. The authors found that their results were consistent with recent debris flow events and previously established warning criteria. Additionally, they suggest that rainfall patterns, soil grain size, and root reinforcement could be critical factors in determining the probability of debris flows as burned landscapes recover.

The results suggest that the model could track soil hydraulic conditions following a fire using widely available rainfall, vegetation, and soil data. Such simulations could eventually support warning criteria for debris flows. The simulation framework, the authors note, could be beneficial for regions that have not historically experienced frequent fires and lack monitoring infrastructure. (Journal of Geophysical Research: Earth Surface, https://doi.org/10.1029/2021JF006091, 2021)

Learning from a Disastrous Megathrust Earthquake

Tue, 06/22/2021 - 12:24

On 11 March 2011, a magnitude 9.0–9.1 earthquake struck off the shore of Tohoku, Japan. It was the largest earthquake ever recorded in Japan and one of the five largest in the world since the beginning of instrumental observations. It occurred in one of the best monitored areas in the world and has been extensively studied in the past decade. Research results have provided several surprises to the earthquake research community, including the earthquake's unexpectedly large slip near the trench, the recognition of significant precursory seismic and geodetic anomalies, and the widespread and enduring changes in deformation rates and seismicity across Japan since the event. A recent article published in Reviews of Geophysics gives an overview of a decade of research on the Tohoku-oki earthquake. We asked the authors to explain the significance of this earthquake and lessons learned from it.

What are megathrust earthquakes?

Megathrust earthquakes are plate boundary ruptures that occur on the contact area of two converging tectonic plates in subduction zones. Megathrust ruptures involve thrusting of subducting oceanic plates (here the Pacific plate) under the overlying plates (here Japan as part of the North America or Okhotsk plate). Due to the unstoppable relative motion of the plates, stress accumulates in the areas where the interface of the two plates is locked and is eventually released in megathrust earthquakes.

The world's greatest earthquakes occur on megathrusts. Megathrust earthquake sources are usually located beneath the sea, which makes it difficult to make detailed observations based on seismic, geodetic, and geologic measurements.

Megathrusts also have the potential to produce devastating tsunamis because of the large ocean bottom vertical movement occurring during the earthquake.

Two days of aftershock recordings about a month (28 April) after the mainshock at Tono earthquake observatory (Tohoku University) in Iwate prefecture. Credit: Naoki Uchida

Prior to the Tohoku-oki earthquake, what were the gaps in our understanding of megathrust earthquakes?

Despite many studies of the Japan Trench, there was no consensus on the possibility of magnitude 9 earthquakes before the Tohoku-oki earthquake.

The instrumental records indicated a heterogeneous distribution of up to magnitude 8 earthquakes and repeated slips in the subduction zone. However, the records of the past 100 years did not directly address events with much longer recurrence intervals.

Land-based geodetic observations collected in the decades prior to the mainshock showed strong interplate locking offshore Tohoku. However, the resolution of these measurements was poor in the offshore area, and various ways to compensate for the apparent slip deficit, including slow earthquakes, were considered to explain the discrepancies between seismologic and geodetic estimates of megathrust coupling and earthquake potential.

Since the 1980s, geological investigations of coastal tsunami sand deposits have provided clear evidence of large tsunamigenic earthquakes that appeared to be substantially larger than instrumentally recorded events. However, characterization of the ancient tsunami sources and the use of these results in evaluating earthquake hazard were slow.

What exactly happened during the Tohoku-oki earthquake?

The earthquake was a megathrust event, which occurred along the Japan Trench where the Pacific plate thrusts below Japan. The mainshock rupture initiated close to a zone of slow fault slip with foreshocks on the plate interface in the previous months and a magnitude 7.3 foreshock two days prior.

Over the course of about three minutes, the fault slip propagated to fill out a rupture area of roughly 300 by 200 kilometers, making up a slip deficit that had accumulated since perhaps as long ago as the 869 A.D. Jyogan earthquake. A maximum slip of about 60 meters occurred near the trench, and the resulting tsunami and shaking caused almost 20,000 deaths in Japan.
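
A back-of-the-envelope check shows how a rupture of that size yields a magnitude 9 event; the rigidity and average slip values below are assumptions for illustration, not figures from the review.

```python
# Seismic moment M0 = rigidity x rupture area x average slip, and
# moment magnitude Mw = (2/3) * (log10(M0) - 9.1). Assumed values only.
import math

rigidity = 3.0e10                 # Pa, a typical crustal value (assumed)
area = 300e3 * 200e3              # m^2, rupture area quoted above
average_slip = 20.0               # m, assumed average (peak slip was ~60 m)

M0 = rigidity * area * average_slip        # N·m
Mw = (2.0 / 3.0) * (math.log10(M0) - 9.1)
print(f"Mw ≈ {Mw:.1f}")                    # ≈ 9.0, consistent with observations
```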

The office in which the first author was sitting at his desk at the time of the earthquake (about 180 kilometers west of the epicenter, at Tohoku University in Sendai). Credit: Naoki Uchida

How has this event improved our understanding of the earthquake cycle and rupture processes?

Thanks to lessons learned from a decade of research, our understanding of the megathrust earthquake cycle and rupture process has improved in many aspects.

Detailed models of the earthquake slip suggest rupture occurred in an area with a large interplate slip deficit indicated by the pre-earthquake geodetic data. Knowledge of the coupling state and complex seismicity near the trench was improved by ocean bottom observations.

Additional geological surveys of tsunami deposits along the coast and observations of landslide deposits (turbidites) on the ocean bottom revealed the recurrence history of great tsunamis and earthquakes. They suggest quite frequent recurrence of tsunamigenic earthquakes that affected the Tohoku area.

The geophysical observations also identified various kinds of possible precursors before the mainshock. Understanding the uniqueness of such phenomena is important to understand the earthquake cycle and may eventually allow for issuing shorter-term earthquake forecasts.

What are ocean bottom observations and how can they improve our earthquake monitoring efforts?

Typical observations of ground shaking and deformation at the land surface, made with seismometers, tiltmeters, GPS, InSAR, and other geodetic techniques that require transmitting electromagnetic waves or light, are difficult or impossible to make at the ocean bottom.

To complement land-based observations, seafloor systems have been developed to monitor the offshore portion of subduction zones. These include (cabled) ocean-bottom seismometers and pressure gauges, GPS-Acoustic measurements (which use sea-surface GPS and sound measurements between the surface and ocean-bottom for estimating seafloor displacements), and ranging using sound waves from ships or between ocean bottom stations.
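
The sketch below illustrates the geometric idea behind GPS-Acoustic positioning, estimating a seafloor transponder location from acoustic travel times measured at GPS-tracked surface points. The constant sound speed, station geometry, and travel times are hypothetical, and real processing is considerably more involved.

```python
# Illustrative sketch (not an actual GPS-Acoustic processing chain): estimate a
# seafloor transponder's position from two-way acoustic travel times measured
# at several GPS-positioned sea-surface points, assuming a constant sound speed.
import numpy as np
from scipy.optimize import least_squares

SOUND_SPEED = 1500.0  # m/s, assumed constant for this sketch

# Surface positions (x, y) in meters from GPS, and two-way travel times (s)
surface_xy = np.array([[-500.0, 0.0], [500.0, 0.0], [0.0, -500.0], [0.0, 500.0]])
travel_times = np.array([4.25, 4.21, 4.30, 4.18])  # hypothetical values

def residuals(params):
    """Difference between modeled and observed one-way ranges."""
    x, y, depth = params
    dx = surface_xy[:, 0] - x
    dy = surface_xy[:, 1] - y
    modeled_range = np.sqrt(dx**2 + dy**2 + depth**2)
    observed_range = SOUND_SPEED * travel_times / 2.0  # one-way range
    return modeled_range - observed_range

# Initial guess: transponder directly below the array center at 3 km depth
solution = least_squares(residuals, x0=[0.0, 0.0, 3000.0])
print("Estimated transponder position (x, y, depth):", solution.x)
```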

The ocean bottom measurements better characterize coseismic and postseismic slip, help more accurately monitor the interplate coupling status, locate smaller earthquakes, and observe seismic and tsunami waves much earlier than the instruments on land.

In addition, observations of seafloor sediments provide evidence of ancient and historical great megathrust earthquakes, and boreholes drilled into the megathrust fault zone far offshore allow for examining the fault-zone materials and properties to improve the characterization of structure and fault behavior.

New offshore seismic and geodetic observation systems. (Top) The time advancement of (left) seismic and (right) tsunami wave detection thanks to the Seafloor Observation Network for Earthquakes and Tsunamis along the Japan Trench (S-net, small red circles off NE Japan) and the Dense Oceanfloor Network System for Earthquakes and Tsunamis (DONET, small red circles off SW Japan). Credit: Aoi et al. [2020], Figure 13. (Bottom) The S-net (left) and GPS-Acoustic (right) sensors awaiting deployment on ship (July 2014 and July 2012). Credit: National Research Institute for Earth Science and Disaster Resilience (left) and Motoyuki Kido (right).

What additional research, data, or modeling is needed to predict future megathrust events more confidently?

Although post-Tohoku-oki studies have better characterized the hazard and a number of possible precursors have been identified, the confident prediction of such events appears impossible in the near future. More detailed investigations of earthquake cycle behavior and interplate locking from the perspective of multiple research fields will further improve the characterization of the conditions of earthquake occurrence and the associated hazard.

A comprehensive compilation of verifiable observations of long-term and short-term precursory processes, including rigorous statistical evaluation of their validity and physical understanding of the processes underlying such phenomena, is important.

While the prospects for reliable short-term prediction of destructive earthquakes may be low, probabilistic operational earthquake forecasting informed by detailed observations of earthquakes and slow-slip activity in the Japan Trench should be possible in the near future.

Why is it essential for earthquake research to be interdisciplinary?

The ability to characterize the nature and hazard of off-Tohoku earthquakes from each disciplinary perspective was limited before the Tohoku-oki earthquake. It appears that it would have been possible to recognize the potential for megathrust events comparable in size to the 2011 Tohoku-oki earthquake if the results from seismic, geodetic, and geological studies had been considered together.

Thanks to a decade of data gathering and research, our understanding of the Japan Trench is much improved compared to what was known before the Tohoku-oki earthquake. However, there are still challenges ahead for each discipline to more fully understand the various facets of megathrust earthquakes and to integrate these findings into a complete picture of the system.

—Naoki Uchida (naoki.uchida.b6@tohoku.ac.jp; 0000-0002-4220-9625), Graduate School of Science and International Research Institute of Disaster Science, Tohoku University, Japan; and Roland Bürgmann ( 0000-0002-3560-044X), Department of Earth and Planetary Science, University of California, Berkeley, USA

Gap in Exoplanet Size Shifts with Age

Mon, 06/21/2021 - 13:24

Twenty-six years ago, astronomers discovered the first planet orbiting a distant Sun-like star. Today thousands of exoplanets are known to inhabit our local swath of the Milky Way, and that deluge of data has inadvertently revealed a cosmic mystery: Planets just a bit larger than Earth appear to be relatively rare in the exoplanet canon.

A team has now used observations of hundreds of exoplanets to show that this planetary gap isn’t static but instead evolves with planet age—younger planetary systems are more likely to be missing slightly smaller planets, and older systems are more apt to be without slightly larger planets. This evolution is consistent with the hypothesis that atmospheric loss—literally, a planet’s atmosphere blowing away over time—is responsible for this so-called “radius valley,” the researchers suggested.

Changes with Age

In 2017, scientists reported the first confident detection of the radius valley. (Four years earlier, a different team had published a tentative detection.) Defined by a relative paucity of exoplanets roughly 50%–100% larger than Earth, the radius valley is readily apparent when looking at histograms of planet size, said Julia Venturini, an astrophysicist at the International Space Science Institute in Bern, Switzerland, not involved in the new research. "There's a depletion of planets at about 1.7 Earth radii."

Trevor David, an astrophysicist at the Flatiron Institute in New York, and his colleagues were curious to know whether the location of the radius valley—that is, the planetary size range it encompasses—evolves with planet age. That’s an important question, said David, because finding evolution in the radius valley can shed light on its cause or causes. It’s been proposed that some planets lose their atmospheres over time, which causes them to change size. If the timescale over which the radius valley evolves matches the timescale of atmospheric loss, it might be possible to pin down that process as the explanation, said David.

In a new study published in the Astronomical Journal, the researchers analyzed planets originally discovered using the Kepler Space Telescope. They focused on a sample of roughly 1,400 planets whose host stars had been observed spectroscopically. Their first task was to determine the planets' ages, which they assessed indirectly by estimating the ages of their host stars. (Because it takes just a few million years for planets to form around a star, these objects, astronomically speaking, have very nearly the same ages.)

The team calculated planet ages ranging from about 500 million years to 12 billion years, but “age is one of those parameters that’s very difficult to determine for most stars,” David said. That’s because estimates of stars’ ages rely on theoretical models of how stars evolve, and those models aren’t perfect when it comes to individual stars, he said. For that reason, the researchers decided to base most of their analyses on a coarse division of their sample into two age groups, one corresponding to stars younger than a few billion years and one encompassing stars older than about 2–3 billion years.
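
Schematically, the coarse two-group comparison works like the snippet below, which bins synthetic planet radii by host-star age and locates the least-populated radius bin in each group. The numbers are random placeholders, not the Kepler sample; with real data, the minimum falls near 1.7 Earth radii and shifts slightly between the age groups.

```python
# Schematic of the binning approach described above (synthetic numbers only).
import numpy as np

rng = np.random.default_rng(1)
ages_gyr = rng.uniform(0.5, 12.0, size=1400)               # host-star ages (Gyr)
radii_re = rng.lognormal(mean=0.8, sigma=0.4, size=1400)   # planet radii (Earth radii)

young = radii_re[ages_gyr < 2.5]
old = radii_re[ages_gyr >= 2.5]

bins = np.linspace(1.0, 4.0, 31)
young_hist, _ = np.histogram(young, bins=bins)
old_hist, _ = np.histogram(old, bins=bins)

# Locate the least-populated radius bin (the "valley") in each age group
centers = 0.5 * (bins[:-1] + bins[1:])
print("Valley location (young group):", centers[np.argmin(young_hist)])
print("Valley location (old group):  ", centers[np.argmin(old_hist)])
```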

A Moving Valley

When David and his collaborators looked at the distribution of planet sizes in each group, they indeed found a shift in the radius valley: Planets within it tended to be about 5% smaller, on average, in younger planetary systems compared with older planetary systems. It wasn't wholly surprising to find this evolution, but it was unexpected that it persisted over such long timescales [billions of years], said David. "What was surprising was how long this evolution seems to be."

These findings are consistent with planets losing their atmospheres over time, David and his colleagues proposed. The idea is that most planets develop atmospheres early on but then lose them, effectively shrinking in size from just below Neptune’s (roughly 4 times Earth’s radius) to just above Earth’s. “We’re inferring that some sub-Neptunes are being converted to super-Earths through atmospheric loss,” David told Eos. As time goes on, larger planets lose their atmospheres, which explains the evolution of the radius valley, the researchers suggested.

Kicking Away Atmospheres

Atmospheric loss can occur via several mechanisms, scientists believe, but two in particular are thought to be relatively common. Both involve energy being transferred into a planet's atmosphere to the point that it can reach thousands of kelvins. That input of energy gives the atoms and molecules within an atmosphere a literal kick, and some of them, particularly lighter species like hydrogen, can escape.
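
The scale of that kick can be seen by comparing hydrogen's thermal speed in a hot upper atmosphere with a planet's escape velocity. The snippet uses Earth's mass and radius and an assumed temperature purely for illustration, not values for GJ 1132 b specifically.

```python
# Rough numbers behind the "thermal kick" argument (illustrative constants).
import math

k_B = 1.380649e-23      # J/K, Boltzmann constant
m_H = 1.674e-27         # kg, hydrogen atom
G = 6.674e-11           # m^3 kg^-1 s^-2
M, R = 5.97e24, 6.37e6  # Earth mass (kg) and radius (m), used as a stand-in

T = 3000.0                                   # K, assumed heated upper atmosphere
v_thermal = math.sqrt(2 * k_B * T / m_H)     # most probable hydrogen speed
v_escape = math.sqrt(2 * G * M / R)

print(f"thermal ~ {v_thermal / 1e3:.1f} km/s, escape ~ {v_escape / 1e3:.1f} km/s")
# Hydrogen's thermal speed is a large fraction of escape velocity, so the fast
# tail of the velocity distribution steadily leaks to space; heavier species stay bound.
```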

“You can boil the atmosphere of a planet,” said Akash Gupta, a planetary scientist at the University of California, Los Angeles not involved in the research.

In the first mechanism—photoevaporation—the energy is provided by X-ray and ultraviolet photons emitted by a planet’s host star. In the second mechanism—core cooling—the source of the energy is the planet itself. An assembling planet is formed from successive collisions of rocky objects, and all of those collisions deposit energy into the forming planet. Over time, planets reradiate that energy, some of which makes its way into their atmospheres.

Theoretical studies have predicted that photoevaporation functions over relatively short timescales—about 100 million years—while core cooling persists over billions of years. But concluding that core cooling is responsible for the evolution in the radius valley would be premature, said David, because some researchers have suggested that photoevaporation can also act over billions of years in some cases. It’s hard to pinpoint which is more likely at play, said David. “We can’t rule out either the photoevaporation or core-powered mass loss theories.”

It’s also a possibility that the radius valley might arise because of how planets form, not how they evolve. In the future, David and his colleagues plan to study extremely young planets, those only about 10 million years old. These youngsters of the universe should preserve more information about their formation, the researchers hope.

—Katherine Kornei (@KatherineKornei), Science Writer

Subduction Initiation May Depend on a Tectonic Plate’s History

Mon, 06/21/2021 - 13:24

Subduction zones are cornerstone components of plate tectonics, with one plate sliding beneath another back into Earth’s mantle. But the very beginning of this process—subduction initiation—remains somewhat mysterious to scientists because most of the geological record of subduction is buried and overwritten by the extreme forces at play. The only way to understand how subduction zones get started is to look at young examples on Earth today.

This schematic shows the tectonic setting of the Puysegur Margin approximately 16 million years ago. Strike-slip motion juxtaposed oceanic crust from the Australian plate with thinned continental crust from the Pacific plate. Collision between the plates near the South Island of New Zealand forced the oceanic Australian plate beneath the continental Pacific plate, giving rise to subduction at the Puysegur Trench. Credit: Brandon Shuck

In a new study, Shuck et al. used a combination of seismic imaging techniques to create a detailed picture of the Puysegur Trench off the southwestern coast of New Zealand. At the site, the Pacific plate to the east overrides the Australian plate to the west. The Puysegur Margin is extremely tectonically active and has shifted regimes several times in the past 45 million years, transitioning from rifting to strike-slip to incipient subduction. The margin’s well-preserved geological history makes it an ideal location to study how subduction starts. The team’s seismic structural analysis showed that subduction zone initiation begins along existing weaknesses in Earth’s crust and relies on differences in lithospheric density.

The conditions necessary for the subduction zone’s formation began about 45 million years ago, when the Australian and Pacific plates started to pull apart from each other. During that period, extensional forces led to seafloor spreading and the creation of new high-density oceanic lithosphere in the south. However, in the north, the thick and buoyant continental crust of Zealandia was merely stretched and slightly thinned. Over the next several million years, the plates rotated, and strike-slip deformation moved the high-density oceanic lithosphere from the south to the north, where it slammed into low-density continental lithosphere, allowing subduction to begin.

The researchers contend that the differences in lithospheric density combined with existing weaknesses along the strike-slip boundary from the previous tectonic phases facilitated subduction initiation. The team concludes that strike-slip might be a key driver of subduction zone initiation because of its ability to efficiently bring together sections of heterogeneous lithosphere along plate boundaries. (Tectonics, https://doi.org/10.1029/2020TC006436, 2021)

—David Shultz, Science Writer

Juno Detects Jupiter’s Highest-Energy Ions

Thu, 06/17/2021 - 12:16

Jupiter’s planetary radiation environment is the most intense in the solar system. NASA’s Juno spacecraft has been orbiting the planet closer than any previous mission since 2016, investigating its innermost radiation belts from a unique polar orbit. The spacecraft’s orbit has enabled the first complete latitudinal and longitudinal study of Jupiter’s radiation belts. Becker et al. leverage this capability to report the discovery of a new population of heavy, high-energy ions trapped at Jupiter’s midlatitudes.

The authors applied a novel technique for detecting this population; rather than using a particle detector or spectrometer to observe and quantify the ions, they used Juno’s star-tracking camera system. Star trackers, or stellar reference units (SRUs), are high-resolution navigational cameras whose primary mission is using observations of the sky to compute the spacecraft’s precise orientation. The SRU on board the Juno spacecraft is among the most heavily shielded components, afforded 6 times more radiation protection than the spacecraft’s other systems in its radiation vault.

https://photojournal.jpl.nasa.gov/archive/PIA24436.mp4

This animation shows the Juno spacecraft’s stellar reference unit (SRU) star camera (left) as it is hit by high-energy particles in Jupiter’s inner radiation belts. The signatures from these hits appear as dots, squiggles, and streaks (right) in the images collected by the SRU. Credit: NASA/JPL-Caltech

Despite its heavy protection, ions and electrons with very high energies still occasionally penetrate the shielding and strike the SRU sensor. This study focuses on 118 unusual events that struck with dramatically higher energy than typical penetrating electrons. Using computer modeling and laboratory experiments, the authors determined that these ions deposited 10 and 100 times more energy than deposited by penetrating protons and electrons, respectively.

To identify potentially responsible ion species, the authors examined the morphology of the sensor strikes. Although most strikes trigger only several pixels, a few events with a low incidence angle can create streaks in which energy is deposited as the particle penetrates successive pixels. Simulation software can predict the energy deposition of various particles moving through matter, providing candidates for the ions encountered by Juno. Ion species as light as helium or as heavy as sulfur could account for at least some of the observed strikes, the authors said. Species from helium through oxygen could account for all the strikes, provided they have energies in excess of 100 megaelectron volts per nucleon.

Finally, the study attributes these ions to the inner edge of the synchrotron emission region, located at radial distances of 1.12–1.41 Jupiter radii and magnetic latitudes ranging from 31° to 46°. This region has not been explored by prior missions, and this population of ions was previously unknown. With total energies measured in gigaelectron volts, they represent the highest-energy particles yet observed by Juno. (Journal of Geophysical Research: Planets, https://doi.org/10.1029/2020JE006772, 2021)

—Morgan Rehnberg, Science Writer

Siberian Heat Wave Nearly Impossible Without Human Influence

Thu, 06/17/2021 - 12:15

Last year was hot. NASA declared that it tied 2016 for the hottest year on record, and the Met Office of the United Kingdom said it was the final year in the warmest 10-year period ever recorded. Temperatures were particularly high in Siberia, with some areas experiencing monthly averages more than 10°C above the 1981–2010 average. Overall, Siberia had the warmest January to June since records began; on 20 June, the town of Verkhoyansk, Russia, hit 38°C, the highest temperature ever recorded in the Arctic Circle.

In a new article in Climatic Change, Andrew Ciavarella from the Met Office and an international team of climate scientists showed that the prolonged heat in Siberia would have been almost impossible without human-induced climate change. Global warming made the heat wave at least 600 times more likely than in 1900, they found.

Ciavarella said that without climate change, such an event would occur less than once in thousands of years, “whereas it has come all the way up in probability to being a one in a 130-year event in the current climate.” Ciavarella and his coauthors are part of the World Weather Attribution initiative, an effort to “analyze and communicate the possible influence of climate change on extreme weather events.”
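
The two statements fit together through simple return-period arithmetic, sketched below with the study's quoted figures treated as illustrative inputs.

```python
# Return-period arithmetic behind the attribution statement. The study reports
# "at least 600 times more likely" and roughly a 1-in-130-year event today.
current_return_period = 130.0          # years, present-day climate
probability_ratio = 600.0              # lower bound reported by the study

counterfactual_return_period = current_return_period * probability_ratio
print(f"Without human influence: less than once in "
      f"{counterfactual_return_period:,.0f} years")
# ≈ 78,000 years, i.e., effectively impossible in a preindustrial climate
```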

According to the Met Office, events leading to Siberia’s prolonged heat began the previous autumn. Late in 2019, the Indian Ocean Dipole—the difference in sea surface temperature between the western and eastern Indian Ocean—hit a record high, supercharging the jet stream and leading to low pressure and extreme late winter warmth over Eurasia. This unseasonably warm weather persisted into spring and reduced ice and snow cover, which exacerbated the warm conditions by increasing the amount of solar energy absorbed by land and sea.

Cataloging the Past, Forecasting the Future

The resulting high temperatures unleashed a range of disasters. Most obvious were wildfires that burned almost 255,000 square kilometers of Siberian forests, leading to the release of 56 megatons of carbon dioxide in June. The heat also drove plagues of tree-eating moths and caused permafrost thaws that were blamed for infrastructure collapses and fuel spills, including one leak of 150,000 barrels of diesel.

The researchers compared the climate with and without global warming using long series of observational data sets and climate simulations. At the beginning of the 20th century, similar extremely warm periods in Siberia would have been at least 2°C cooler, they found. Global warming also made the record-breaking June temperature in Verkhoyansk much more likely, with maximum temperatures at least 1°C warmer than they would have been in 1900.

The team also looked to the future. They found that by 2050 such warm spells could be 2.5°C to 7°C hotter than in 1900 and 0.5°C to 5°C warmer than in 2020. “Events of precisely the magnitude that we saw, they will increase in frequency, and it wouldn’t be unexpected that you would then see also events of an even higher magnitude as well,” Ciavarella said.

Dim Coumou, a climate scientist at Vrije Universiteit Amsterdam, agrees that such an event would not have happened in a preindustrial climate. “With global warming summer temperatures are getting warmer, and therefore, the probability of heat waves and prolonged warm periods are really strongly increasing,” he explained, adding that this pattern is particularly pronounced in Siberia, as the high latitudes are warming faster. Coumou was not involved in the new research.

In addition to local issues (like the health impact of heat exposure, wildfires, and the collapsing of structures built on thawing permafrost), we should also be concerned about the wider impact of heat events in Siberia, said Martin Stendel, a climate scientist at the Danish Meteorological Institute. Stendel was not involved in the new research but has worked on other studies for World Weather Attribution. Thawing permafrost, for example, releases greenhouse gases such as carbon dioxide and methane into the atmosphere.

“We should be aware that things may have global effects,” he said.

—Michael Allen (michael_h_allen@hotmail.com), Science Writer

Increasing Equity in City Green Spaces

Thu, 06/17/2021 - 12:13

This is an authorized translation of an Eos article.

As the COVID-19 pandemic stretched into the summer months of 2020, people around the world began flocking to outdoor green spaces in and around cities. For some, a safe, socially distanced escape from lockdown meant picnicking in nearby parks, strolling tree-lined neighborhoods, hiking trails through mountains and forests, or simply getting fresh air in their own backyards. Not all city residents, however, have equal access, geographically or historically, to nearby green space.

This tumultuous period has "made clear just how important it is to have safe green space in every neighborhood," said Sharon J. Hall, who studies the intersection of ecosystem management, environmental quality, and human well-being at Arizona State University (ASU) in Tempe. "We know that nature brings mental health benefits, physical benefits, spiritual and community connection, and all kinds of recreational and cultural benefits, but not everyone feels the same way about nature. There are populations that have really long histories, issues, and challenges with nature and what nature means to them."

Developing new urban green spaces (areas covered by grass, trees, shrubs, or other vegetation) and the infrastructure that works alongside them is a priority in many cities these days. Experts agree, however, that the solution is more complicated than simply planting more trees in certain spots. Done right, adding new green space in and around our cities can improve human health, revitalize ecosystems, and boost a region's economy. Done poorly, it can worsen existing socioeconomic and ecological problems or even create new ones.

Urban Forests Benefit City Residents

Green spaces in and around cities, known collectively as urban forests, can mitigate regional and local stormwater flooding, reduce water scarcity, improve air and water quality, regulate temperature, and aid soil nutrient cycling, all while sequestering carbon.

Every tree in that forest matters. With all their steel, asphalt, and concrete, cities are typically a few degrees warmer on average than the undeveloped land around them, a phenomenon known as the urban heat island effect. The same phenomenon occurs at a subcity scale, to a degree that depends on the space.

"Trees are a really big factor in reducing heat in neighborhoods," explained Fushcia-Ann Hoover, an urban hydrologist whose research is grounded in environmental justice. She is a postdoctoral researcher at the National Socio-Environmental Synthesis Center in Annapolis, Maryland. "If a tree shades part of your house or much of your neighborhood, it will be cooler than neighborhoods where there isn't a single tree on the block."

In addition, "there are cultural benefits of having green spaces in and around your community," said John-Rob Pool, "for leisure and recreation, which has been shown to improve people's health and well-being, and for creating streets that are more livable and accessible." Pool is the implementation manager of Cities4Forests, an international program that helps cities conserve, manage, and restore their forests.

Combined, these ecosystem services "are the broadest benefits of green space," said Ayushi Trivedi, a gender and social equity research analyst at the World Resources Institute, "but to have a socially equitable impact, they need to be distributed in a way that all communities gain benefits from them. This is especially important for vulnerable communities (marginalized communities, low-income populations, racial minority communities) that live in neighborhoods that are more exposed to warming, stormwater flooding, and pollution."

¿Dónde están los espacios verdes?

La justicia ambiental afirma que todas las personas tienen derecho a la tierra, el agua y el aire limpios y seguros; requiere una política ambiental que esté libre de discriminación y prejuicios y se base en el respeto mutuo y la justicia para todas las personas. Al evaluar si todos los residentes de una ciudad tienen un acceso equitativo a los bosques urbanos, la primera pregunta a responder es: ¿Dónde tiene la ciudad espacios verdes? Para abordar esto a escala de toda la ciudad, la mayoría de los investigadores recopilan imágenes satelitales o aéreas, que pueden medir hasta una escala determinada, o realizan laboriosos estudios sobre el terreno.

Debido a las limitaciones de los métodos de recopilación de datos, la mayoría de los estudios que analizan la distribución de los espacios verdes urbanos se centran en solo una o dos ciudades a la vez, lo que puede dificultar el análisis de las tendencias a nivel nacional. “La cantidad de trabajo que se necesita para generar un mapa de cobertura forestal urbana de una sola ciudad es tan increíble que hacer algo a mayor escala puede ser bastante difícil”, explicó Shannon Lea Watkins, investigadora de salud pública centrada en la equidad en salud de la Universidad de Iowa. “Sabemos que el bosque urbano es diferente en todo el país porque el ecosistema es diferente. Así que esperaríamos una cantidad diferente de cobertura de árboles en Filadelfia que en Tulsa”.

Watkins y sus colegas reunieron muchos estudios individuales en un metanálisis en el que combinaron datos de ciudades estadounidenses tanto verdes como escasamente boscosas. Trivedi dijo que tales métodos pueden ayudar a los investigadores y urbanistas a identificar qué grupos se benefician más de un espacio verde existente o planificado. “¿Cuál es su raza? ¿Dónde viven? ¿De qué [relaciones] está compuesto su hogar? Si desglosas por características sociodemográficas, puedes ver cuáles pueden ser las implicaciones sociales. Ya sea que se trate de un mapeo o de un estudio estadístico, el simple hecho de desagregar tus datos y luego ver los patrones que surgen… será muy útil para decirte cuáles son las brechas, quién se beneficia más, quién se ve más afectado por los costos y quién corre más riesgos”.

Por ejemplo, “en la mayoría de los estudios hay un patrón demostrado entre los ingresos y la cubierta forestal urbana; es decir, mayores ingresos se asocian con una mayor cobertura forestal urbana”, explicó Watkins. Es más, en todo el país, la desigualdad racial en la cubierta forestal urbana es mayor en terrenos públicos que en terrenos privados: las residencias privadas con patios y calles arboladas son más comunes en los vecindarios de mayores ingresos y predominantemente blancos, y lo mismo ocurre en un grado aún mayor para los parques de propiedad pública y las áreas boscosas.

El tipo de espacio verde importa

Una vez que sepas dónde están los bosques urbanos, es útil analizar qué forma adoptan, porque no todos los tipos de espacios verdes brindan los mismos beneficios a los residentes cercanos. Hoover, quien fue coautora de un artículo reciente que examina la raza y los privilegios en los espacios verdes, explicó que “[históricamente] los vecindarios marginalizados tienen menos espacios verdes, y el espacio verde que tienen tampoco es de tan alta calidad”.

Los parques, por ejemplo, se ven muy diferentes en áreas urbanas que están más vigiladas, que tienden a ser vecindarios con más personas de color, más personas con inseguridad habitacional o más personas con ingresos más bajos. “Si un árbol bloquea la línea de visión de una cámara de la policía, por ejemplo, el árbol se corta o se poda drásticamente, por lo que ya no brinda sombra de manera efectiva” ni refresca el área, dijo Hoover.

En estos vecindarios, “los parques no están necesariamente hechos para ser lugares donde la gente se sienta o se relaja”, explicó Hoover. “Son lugares de paso. Creo que eso también refleja la forma en que se criminaliza a las personas con inseguridad habitacional y la forma en que las ciudades a menudo responden a las personas con inseguridad habitacional al querer evitar que establezcan un campamento o puedan acostarse en un banco”.

Los lotes baldíos que han sido renaturalizados pueden aportar espacios verdes, dijo Theodore Lim, pero los beneficios de ese espacio para la comunidad circundante serán mucho menos estratégicos que los beneficios de un parque planificado. “Uno se desarrolla en condiciones de crecimiento y planificación proactiva, y el otro se desarrolla en condiciones de declive y planificación reactiva”, explicó. “A menudo eres oportunista acerca de dónde puedes obtener servicios de los ecosistemas”. Lim investiga las conexiones entre la tierra, el agua, la infraestructura y las personas en la planificación de la sostenibilidad en el Instituto Politécnico y Universidad Estatal de Virginia (Virginia Tech), en Blacksburg.

“En las ciudades, creo que debemos ser más integrales con nuestra forma de pensar sobre los espacios verdes”, dijo Hall. “Los espacios verdes pueden ocurrir en cualquier lugar…. Son estos espacios accidentales intermedios los que a veces son las formas más creativas de pensar en los espacios verdes”.

Ya sean proactivos o reactivos, para que beneficien a una comunidad, “los espacios verdes urbanos deben diseñarse caso por caso según el clima, la geografía, las condiciones del suelo y las necesidades de suministro de agua de esa área”, dijo Kimberly Duong, ingeniera de recursos hídricos y directora ejecutiva de Climatepedia. “En una región agrícola, por ejemplo, un espacio verde sostenible probablemente dependería de los ciclos estacionales de precipitación. En una región propensa a la sequía, un espacio verde también podría considerar estrategias de retención de agua”.

“Estaba diseñando una calle verde para [un área cerca de la Universidad de California, Los Ángeles] que incorpora conceptos de sostenibilidad, conceptos de captura de aguas pluviales y conceptos de espacios verdes”, dijo Duong. “Esa región tiene mucho suelo arcilloso”, lo que significaba que instalar pavimento permeable no era una opción porque el agua penetraría en la acera pero no en el suelo. “Pero para otras regiones con suelo más arenoso, donde el agua puede absorberse más fácilmente, un pavimento permeable podría ser una estrategia para un estacionamiento [para capturar aguas pluviales en el sitio]”.

“Hay estrategias en muchas escalas geográficas diferentes”, dijo Duong, desde barriles de lluvia hasta drenajes sostenibles y desde jardines de lluvia hasta cuencas hidrográficas.

La propiedad comunitaria es clave

Los espacios verdes deben diseñarse intencionalmente para satisfacer las necesidades que la comunidad ha identificado para que los residentes se sientan cómodos usándolos. Tal estrategia de diseño requiere el compromiso y el diálogo entre las comunidades y los administradores de proyectos.

“La gente, teóricamente, puede tener la misma cantidad de acceso a acres de espacio en el parque, pero aun así no sentirse bienvenida o segura en ese espacio del parque”, dijo Lim. “Se trata de reconocer que hay problemas sistémicos que dan forma a las experiencias de las personas y que tienen raíces realmente históricas”.

Por ejemplo, “un hombre blanco podría irse solo al bosque y obtener todo tipo de beneficios espirituales al estar solo allí”, dijo Hall. Pero para las personas a las que se les ha hecho sentir incómodas o inseguras al aire libre debido a su género, raza u otro aspecto de su identidad, continuó, esa experiencia histórica puede ser muy diferente.

También hay relaciones históricas positivas a considerar, agregó. “Podrías pensar en las poblaciones latinas que viven en el suroeste; el desierto podría tener un significado diferente para ellos si tienen una historia con el desierto a través de sus familias y de generaciones”.

“Cuando una ciudad, por ejemplo, planea una nueva estación de tren, se compromete con los residentes sobre dónde deben colocarla, quién la necesita, si la usarán los residentes si la colocan aquí o si la colocan allá”, dijo Pool. “Las soluciones basadas en la naturaleza deben tratarse como cualquier otra infraestructura y merecen el mismo enfoque participativo durante las etapas de planificación. Creo que la razón por la que esto aún no es tan común es que es un campo emergente”.

Muchos residentes de Detroit, por ejemplo, expresaron la creencia de que la ciudad había descuidado o mal administrado los espacios verdes y los árboles en sus vecindarios. Debido a ese precedente histórico, la gente desconfió cuando una organización local sin fines de lucro les ofreció árboles gratis para plantar frente a sus casas. A pesar de querer vecindarios más verdes, una cuarta parte de los residentes rechazó la plantación de nuevos árboles, anticipando que la ciudad también descuidaría ese espacio verde.

“No habrá un enfoque único para todos” para crear nuevos espacios verdes urbanos o para garantizar la equidad en esos espacios, dijo Hall. “Lo que va a ser bueno para los polinizadores o la gente en Washington, DC, puede ser muy diferente de lo que va a funcionar en el Desierto de Sonora en Phoenix. Y aún así, la historia de Phoenix es muy diferente a la historia de Albuquerque o Los Ángeles. Los enfoques deberán determinarse localmente, sobre qué tipos de plantas vas a plantar y qué va a ser realmente bueno para la historia de una comunidad”.

Cómo son las soluciones impulsadas por la comunidad

Digamos que eres un geocientífico con una idea de cómo mejorar un vecindario urbano agregando más espacios verdes y quieres que el proyecto sea un proceso participativo. ¿Cómo logras entonces que la comunidad se sume? “En realidad, nadie te capacita sobre cómo ser un investigador comunitario. Se aprende haciéndolo”, dijo Marta Berbés-Blázquez. “Escaneas las noticias, escaneas Facebook, comienzas a seguir a activistas en una región, comienzas a averiguar quién es quién. Eso lleva un poco de tiempo y mucho de ello es muy sutil”. Berbés-Blázquez investiga las dimensiones humanas de las transformaciones socioecológicas en ecosistemas rurales y urbanos como profesora asistente en ASU.

“Podría ir a un evento comunitario aleatorio”, continuó. “Podría ir a un seminario web o asistir a reuniones comunitarias. Y me sentaría en segundo plano y escucharía y no hablaría”. Al hacer esto, un investigador aprende qué temas están al frente de la agenda de una comunidad, quiénes son los líderes clave y qué problemas históricos o sistémicos enfrenta la comunidad.

Después de que tantos residentes rechazaron los árboles gratuitos, por ejemplo, esa organización sin fines de lucro de Detroit cambió su enfoque para incluir a las comunidades en el proceso de toma de decisiones con respecto a los tipos de árboles y dónde plantarlos. También amplió su programa de empleo juvenil para mantener los árboles y enseñar a los residentes sobre ellos.

“Creo que la tendencia es que los geocientíficos se centren en el análisis de datos”, dijo Duong, “y luego señalarlos y decir: ‘Esto tiene sentido para fines científicos. Tenemos tanto déficit de agua, por lo tanto, llevar a cabo esta estrategia [proporcionaría] el 200% de la cantidad de agua que necesitamos’”. Estos análisis son ingredientes necesarios en cualquier proyecto de infraestructura verde, pero hay otras consideraciones que van más allá del alcance de la experiencia de un geocientífico. “Eso no toma en cuenta las consideraciones políticas, el presupuesto requerido, el mantenimiento requerido o la interrupción de la comunidad durante la construcción. Esos son componentes no triviales de la implementación de proyectos de espacios verdes.”

Al dar un paso atrás y aprender sobre la comunidad antes de iniciar un proyecto, un geocientífico podrá evaluar los riesgos específicos del vecindario, como nuevos espacios verdes atractivos que elevan los alquileres, y establecer medidas para proteger a los residentes de daños. “Tener esos mecanismos en su lugar ha demostrado que se pueden reducir algunas de estas crisis de gentrificación verde que están ocurriendo”, dijo Trivedi.

Considera, por ejemplo, el proyecto 11th Street Bridge Park de Washington, DC, un parque tipo puente recreativo que cruzará el río Anacostia hacia los distritos (wards) 7 y 8, áreas que son mayoritariamente negras y tienen ingresos más bajos que el promedio de DC. Los proyectos de infraestructura verde en vecindarios con demografías similares han creado, en el pasado, crisis de gentrificación que, en última instancia, perjudicaron a los residentes. Los residentes de los distritos 7 y 8 inicialmente rechazaron el desarrollo de un parque tipo puente en sus vecindarios exactamente por esas razones. En respuesta, los gerentes del proyecto se asociaron con líderes comunitarios para crear estrategias de desarrollo centradas en la equidad: estableciendo fideicomisos de tierras comunitarias, salvaguardando inversiones en viviendas asequibles, brindando capacitación y empleos para los residentes locales e invirtiendo en pequeños negocios locales.

El proceso de desarrollo conjunto de soluciones no es fácil, dijo Berbés-Blázquez, y la estructura de la investigación académica, como los ciclos de subvenciones o los plazos de titularidad, a menudo puede interferir. “La velocidad a la que tienen que suceder los proyectos, ya sea académica o políticamente, no necesariamente da suficiente tiempo para fomentar relaciones verdaderas, genuinas y de confianza entre los diferentes actores involucrados”, dijo. “No traigas tu propia agenda, pero si la tienes, déjala muy clara. Y luego sé paciente”, y hay que estar dispuesto a admitir y reconocer los errores cuando se cometen.

Organizaciones lideradas por la comunidad que se enfocan en reverdecer las ciudades están trabajando en todo el país, dijo Hoover, y cada una sabe cómo los científicos pueden ayudarles mejor a lograr sus objetivos. “Realmente animaría a otros científicos, planificadores, profesionales e investigadores a que comiencen a escuchar y a comunicarse”, dijo, “para aprender y superar realmente los límites de sus propios campos y sus propias suposiciones dentro de su ciencia”.

“La justicia ambiental no es solo una distribución equitativa de los recursos, sino también un acceso equitativo a la toma de decisiones”, dijo Watkins.

—Kimberly M. S. Cartier (@AstroKimCartier), Escritora de ciencia

This translation by Mariana Mastache Maldonado (@deerenoir) was made possible by a partnership with Planeteando. Esta traducción fue posible gracias a una asociación con Planeteando.

Fingerprints of Jupiter Formation

Wed, 06/16/2021 - 17:16

Gas giants exert a major control on solar system architecture. Observations of protoplanetary disks, such as those made by the Atacama Large Millimeter/submillimeter Array (ALMA), reveal early stages of planet formation in faraway stellar systems. But the timing of giant planet formation processes can best be traced via the detailed investigations possible in our own solar system. A commentary by Weiss and Bottke [2021] closely examines Jupiter formation, using clues preserved in the meteorite record. They find current data are consistent with an initial “slow growth” phase for Jupiter that created separate isotopic reservoirs for meteorite parent bodies. Subsequently, paleomagnetic data suggest rapid dissipation of the nebular field, most easily explained by rapid (greater than 30-fold) growth of Jupiter, which supports a core accretion physical model for giant planets. The case is not closed, however. Weiss and Bottke propose further observations and physical modeling to establish the pacing of Jupiter formation and its effects on the architecture of our solar system.

Citation: Weiss, B. & Bottke, W. [2021]. What Do Meteorites Tell Us About the Formation of Jupiter? AGU Advances, 2, e2020AV000376. https://doi.org/10.1029/2020AV000376

—Bethany Ehlmann, Editor, AGU Advances

Why Contribute to a Scientific Book?

Wed, 06/16/2021 - 12:38

AGU believes that books still play an important role in the scientific literature and in professional development. As part of our publications program, we continue to publish traditional books but are also seeking to innovate in how we collate, present, and distribute material. However, we are aware that some scientists are skeptical about the value of being involved in book projects, either as a volume editor or as a chapter author. One common concern is that the process of preparing books for publication is much slower than that for journals. There is also a perception that book content is not as easily discoverable as journal articles. Some people may feel that the era of the book has passed now that technology has changed the ways in which we find and interact with written material. Here we respond to some of the questions and concerns that we frequently hear and explain the advantages of choosing AGU-Wiley for a book project.

Why should I choose to publish a book with AGU?

AGU’s publications program has a strong, authoritative reputation in the Earth and space sciences. This includes a six-decade history of publishing books, with the long-standing Geophysical Monograph Series being the best-known part of the collection. Publishing with AGU is a mark of quality. All individual book chapters undergo full peer review to ensure quality and rigor, and entire book manuscripts undergo a full assessment by a member of the Editorial Board before being approved for publication.

Does AGU only publish scientific monographs?

We offer a home for books on all topics in Earth, environmental, planetary and space sciences, as well as publications that support the geoscience community on topics such as careers, the workforce, and ethics. We publish scientific research, advanced level textbooks, reference works, field guides, technical manuals, and more. We want our books to be relevant and useful for the twenty-first century classroom, laboratory, and workplace so we are open to ideas for different types of books and exploring new ways of publishing material.

What are the advantages of edited books over journal special collections?

While there are some similarities between a special collection of articles in a journal on a particular theme and an edited book, we believe that the book format offers a few distinct advantages. First, books give more space and freedom. You can tell a more complete story in a book by organizing chapters into a deliberate order that presents a narrative arc through all aspects of the topic. Second, books are a great medium for interdisciplinary topics. You can pull together a mixture of material that may not have a comfortable home in a single journal. Participating in a book project is thus an opportunity to go to the borders of your discipline and collaborate with colleagues from other disciplines, including in the social sciences.

What kind of experience will I have as a book editor with AGU-Wiley?

There are staff in the AGU Publications Department and at Wiley dedicated to AGU’s books program. We are committed to offering a great experience to volume editors, chapter authors, and peer reviewers, and to producing books of the highest quality. In addition, the AGU Books Editorial Board, comprising members of the scientific community, is on hand to support editors throughout the publication process. The editors (or authors, if an authored volume) of each book are assigned a member of the Editorial Board to offer 1:1 interaction, feedback, and advice whether you are a first-timer or have prior experience.

How can people find and cite my book content?

Book chapters are much more discoverable these days. AGU’s books are hosted on Wiley Online Library, where whole books and book chapters come up in search results alongside journal articles. Not everyone needs or wants a whole book, so individual book chapters can be downloaded as PDF files. Each book chapter has its own unique DOI, making it more discoverable and citable. AGU’s books are also indexed by the major indexing services, such as Web of Science, SCOPUS, and the SAO/NASA Astrophysics Data System, enabling the tracking of citations.

How will my book get promoted?

AGU is a network of 130,000 Earth and space science enthusiasts worldwide. Once a book is published, it will be promoted to the whole network, as well as to targeted subject groups, via blogs, social media, newsletters, and more. At AGU Fall Meeting, your published book will be on display where as many as 30,000 people have the chance to see it. In addition, Wiley, as an international publisher, has a global network for marketing and sales.

How are AGU and Wiley adapting to changes in publishing?

Scholarly books tend to be slightly behind the curve in terms of new technologies, but AGU, in partnership with Wiley, is testing new publishing models in response to changes in the landscape of scholarly publishing, science funding, and user demand. For example, in 2020 we piloted the Open Access publishing model for two books (Carbon in Earth’s Interior and Large Igneous Provinces) and are exploring ways to make this a publishing option for anyone with funding for open access. We are also currently exploring a chapter-by-chapter publication model to make book content available faster.

What are the professional and personal benefits of doing a book?

There is a perception that books are written by people at the end of their careers and that they do not offer the same advantages as journal articles in terms of scholarly value. However, anyone can contribute to a book, and it counts as part of tenure or promotion applications in many places. Acting as an editor of a book is a chance to work with many other scientists, both those writing chapters and those acting as peer reviewers. This is an opportunity to widen your professional network; to work with new people, perhaps from different disciplines; and to make your name more recognizable, including by those who are not directly following your work. Editing a book can also be regarded as service to your scientific community. Your book may become the definitive book in the field and define the rest of your career.

We welcome ideas for new books at any time. Please contact me or a member of the Editorial Board, and we will be happy to discuss.

―Jenny Lunn (jlunn@agu.org; 0000-0002-4731-6876), Director of Publications, AGU

Book Publishing in the Space Sciences

Wed, 06/16/2021 - 12:38

AGU has been publishing books for over six decades on topics across the Earth and space sciences. The four members of the AGU Books Editorial Board from space science disciplines decided to look at our own backfile of books and at other scientific publishers to better understand the landscape of book publishing in these disciplines and use these insights to form a plan for expanding our range of space science books over the coming years.

Space science books in AGU’s portfolio

Our investigation started with our own backfile, focusing on AGU’s flagship and longest-running series, the Geophysical Monograph Series. A total of 256 volumes were published from its launch in 1956 to the end of 2020, of which 63 were related to space science topics, with six volumes (Vols. 1, 2, 7, 141, 196, and 214) combining both Earth- and space-related topics in the same volume.

For the next step of our analysis, we divided the space sciences into major topic areas according to four AGU Sections – Aeronomy, Magnetospheric Physics, Planetary Sciences, and Solar & Heliospheric Physics. We decided to combine Aeronomy and Magnetospheric Physics into a single category of Geospace.

Over the entire lifetime of the Geophysical Monograph Series, the proportions of space science books with topics exclusively focused on geospace, planetary sciences, and solar/heliospheric physics are 51%, 7%, and 3.5%, respectively. The remaining 38.5% are books with an interdisciplinary character, that is, they combine the major topic areas in the same volume. (The six books that combine Earth- and space-related topics are not included in these numbers.)

The Venn diagram focuses on the 12 books published in the past decade (2011–2020); not included are two volumes (vols. 196 and 214) that combined space and Earth science topics.

Books with a geospace focus are the largest segment (vols. 199, 201, 215, 220, 244, 248) and six additional geospace books have an interdisciplinary focus (vols. 197, 207, 216, 222, 230, 235).

No book has been published in the Geophysical Monograph Series solely focused on planetary or solar/heliospheric topics during the decade, although a five-volume Space Physics and Aeronomy collection was published in spring 2021 outside the bounds of this analysis.

How do we compare to other publishers?

Of course, AGU is not the only society or publisher producing books in the space sciences, so we decided to look at the distribution across major scholarly publishers using the same topic division as above. We made our best efforts to survey the enormous book market using various web tools, but we cannot guarantee the accuracy of these statistics. Also note that AGU entered a publishing partnership with John Wiley & Sons in 2013, thus we are treating Wiley and AGU as one entity in the charts below.

The charts suggest that certain publishers have developed a reputation in particular fields, with existing series or collections that draw return book editors/authors and attract new people. It is important that an editor or author finds the most appropriate publishing partner for their book project who can offer the production services, marketing, and distribution they are looking for. We hope that scientists across all these disciplines will consider AGU-Wiley, as we have a lot to offer.

Where are we heading?

AGU/Wiley have published on average 1 to 2 books per year in the space sciences over the last four decades, with a spread of 0 to 4 books per year. This is a modest publication rate, which we would like to increase.

The graph shows the current distribution (orange) shifted by two books per year to a proposed distribution (yellow). During this decade, we would like to see on average 3 to 4 books published per year, but occasionally even more if possible.

We would also like this growth to be more balanced across all fields in the space sciences to move away from the skew toward geospace books seen in the past. Adding one book per year on each of the planetary and solar/heliospheric topics would already achieve our goal. Hence, we want to grow in these areas, and we are putting enhanced efforts into our outreach to the planetary and solar communities, which might help shift the distribution as indicated in the graph.

How to achieve our goals?

Overall, we wish to engage the space science communities in publishing more books with AGU-Wiley. To achieve our goals, we are actively engaging in outreach by proposing book titles to prospective editors/authors. Nonetheless, unsolicited proposals from scientists with new book ideas remain our backbone. For example, we hope that the new planetary and solar spacecraft missions currently in operation (as well as those being planned) may yield results, analysis, and reviews that could be suitable for publication in book format.

Although we have numerical goals for growing the number of space science books in our portfolio, our focus remains on quality over quantity. After all, books should be useful and in demand by the science community.

We want to encourage scientists who have never considered publishing a book. The process of organizing and writing or editing a book is very rewarding, and it broadens one’s network of scientific collaborators and enhances one’s reputation. Please contact any member of the Editorial Board from the space sciences directly or email books@agu.org if you have ideas for new books.

―Andreas Keiling (keiling@berkeley.edu; 0000-0002-8710-5344), Bea Gallardo-Lacourt (0000-0003-3690-7547), Xianzhe Jia (0000-0002-8685-1484), and Valery Nakariakov (0000-0001-6423-8286), Editors, AGU Books

New Editorial Board for AGU Books Takes Inventory

Wed, 06/16/2021 - 12:37

AGU has been producing books as part of its publications program for over six decades. The goal has been to produce volumes on all topics across the Earth and space sciences that are a valuable resource for researchers, students, and professionals. It is quite a challenge to cover such an enormous scientific space and breadth of audiences. So how have we been doing?

A new Editorial Board for AGU Books was established in 2020. In our first year, we have taken time to look back on our historic backfile of books and evaluate how the program has grown and changed over time. We wanted such data and analysis to shape realistic goals for the books program going forward and focus our outreach to the scientific community for new book ideas. Here we give an overview of all books in our portfolio; a separate piece focuses on books in the space sciences.

Establishment and growth of AGU Books

AGU published its first book in 1956, Volume 1 of the Geophysical Monograph Series (GMS). The GMS remains AGU’s primary and flagship series for scientific work, and is the source of the data presented here, but there are two other active series – Special Publications and the Advanced Textbooks Series – as well as more than a dozen archive series.

From its launch in 1956 to the end of 2020, 256 volumes were published in the GMS.

As shown in the chart on the right, fewer than ten books were published per decade during the first three decades.

This rose to more than 60 books per decade during the three most recent decades, an average of 6 to 7 books per year.

The two main branches: Earth and Space

To gain further insight, we classified these books as “Earth sciences” or “Space sciences,” the two main branches represented in AGU. This simple division makes sense because there is a significant amount of interdisciplinary research within Earth-related and space-related topics, but far less so across the two.

Of the 256 monographs, 77% were related to Earth science topics and 23% were related to space science topics, with just six volumes combining both (and not included in this count). During the past decade (2011-2020), the proportions were 80% and 20%, respectively, as shown in the chart on the right.

For comparison, we looked at the proportion of AGU members according to their primary Section affiliation as at the end of 2020 (excluding cross-cutting sections such as Education, Science & Society, and Earth & Space Science Informatics) and found that the proportion was 88% (Earth) and 12% (Space).

The difference in member affiliation between the two branches largely accounts for the difference in book numbers.

Of course, this dataset is small and limited, just showing books published in one AGU series; it does not reflect all publications by AGU members, such as journal articles and books published with other publishers.

However, we focus this analysis on the past and future of the Geophysical Monograph Series as we want to ensure that it represents the AGU community and produces books that meet their professional needs.

A closer look at the decadal distribution of books reveals further trends. Earth science started with an average of about one book per year until the 1980s, after which the number steadily rose until the 2010s with an average of six books per year. Currently, we are at about five books per year. However, the actual number of books per year has varied between zero and eleven books. In comparison, the space sciences only started to pick up the publication rate in the 1980s, with its heyday being in the 1990s with an average of two books per year. In recent decades, most years we published at least one space science book per year, with the highest in any single year being four books. (The six volumes that contained both Earth and space topics were counted twice here, once for each branch.)

To go a little deeper, we analyzed books by topic within the “Earth” and “Space” categories. This was not a straightforward exercise, as many books straddle multiple topics (for example, should a book about earthquakes be categorized as seismology or natural hazards?) so we had to make certain choices about best fit for primary topic and ensure consistency in this decision-making process. While such simplistic categorization is a little problematic, the results still reveal some trends in the spread of topics covered. The chart shows the breakdown of topics within the Earth sciences; the accompanying piece delves a little deeper into the “space physics” category.

Looking to the future

We hope to maintain, or even grow, the rate of AGU’s book publications. We are looking for new book ideas and for scientists interested and willing to be volume editors or volume authors. We also want to encourage scientists who have never considered the book format as a way to communicate their science. Find out more about the professional advantages of producing a book and the experience of doing this with AGU and Wiley. Please contact us if you have ideas for new books. The AGU Books Editorial Board is here to help and encourage. Contact a member of the Editorial Board directly or email books@agu.org.

―Andreas Keiling (keiling@berkeley.edu; 0000-0002-8710-5344), Editor in Chief, AGU Books; and Jenny Lunn (0000-0002-4731-6876), Director of Publications, AGU

A Life at Sea: A Q&A with Robert Ballard

Wed, 06/16/2021 - 12:36

Robert Ballard—the man who found Titanic—has explored the ocean for more than 60 years with ships, submersibles, and remotely operated vehicles. Now, through the Ocean Exploration Trust, he continues to search the seas for archaeological wonders, geological oddities, and biological beasts.

His new memoir, Into the Deep, takes readers on a vivid tour of his adventures while diving into his struggles—and triumphs—as he navigated academia and the ocean without knowing he was dyslexic.

This conversation has been edited for length and clarity.

 

Eos: How and when did you find out you were dyslexic?

Ballard: I didn’t know I was dyslexic until I was around 62 years old. I’m 79. There’s a beautiful book called The Dyslexic Advantage. I have an audio version, which is much easier for dyslexics. When I listened to it, I cried because it explained me to me for the first time in my life. I knew I was different, and now I understand that difference.

Eos: What advice do you have for those who struggle with dyslexia?

Ballard: The educational experience in many ways for a dyslexic is, How do you survive it? How do you cross through this desert? And fortunately, I had oases along the way, which were teachers that bet on my horse. The real key is surviving the teachers who would rather put you on drugs or would rather not have you in their classroom.

Eos: What can parents do?

Ballard: Early detection is really critical, and also being an advocate.

Eos: What can the education systems do better for children?

Ballard: Make sure that they don’t lose their self-esteem [and] that they don’t believe that they’re stupid. They’re not. They have a gift. We’re such visual creatures.

Eos: How have you come to view being dyslexic as an asset?

Ballard: I stare at things until I figure them out. You want eyes on the bottom [of the ocean], and that’s what us dyslexics are all about. I can stand in the middle of my command center, close my eyes and go there…without physically being there.

Eos: Like with Titanic?

Ballard: I wasn’t supposed to find the Titanic. I was on a top-secret mission financed by naval intelligence to look at a nuclear submarine [that sank with] nuclear weapons. I discovered that [the submarine hadn’t] just [imploded and sank to] the bottom. When I went to map it, I found that I could map three corners of it, but then there was a bit like a comet on the bottom of the ocean because the current…created a long debris trail. I said, “Well, wait a minute. Don’t look for the Titanic. Look for this long trail.” This is visual hunting.

Eos: And so you found the Titanic in 1985, for which you became famous! But what about the 1977 discovery of hydrothermal vents—and life, like giant clams—near the Galapagos?

Ballard: We were looking for hot water coming out [of the oceanic ridge]. We were just bowled over when we came across those ecosystems. We didn’t even have biologists [on this expedition]! Biologists turned us down!

Eos: What did you do without biologists or the ability to “Zoom” them onto your ship?

Ballard: We called back to Woods Hole [Oceanographic Institution]. [Biologists there] said, “Take core samples every so many meters.” There’s no sediment. They said, “That’s impossible. You can’t have clams! You can’t have everything you’re talking about!”

Eos: What’s the significance of discovering life where it wasn’t supposed to be?

Ballard: Because of this discovery of chemosynthetic life systems that can live in extremely extreme environments, I’m confident there’s life throughout the universe. It’s now driving NASA’s program to look at the larger oceans in our solar system, [like] on Enceladus, that have more water than we have.

Eos: Let’s talk about your current exploits on Nautilus with the Corps of Exploration. In the book, you note that you hire at least 55% women for the corps. What about other groups?

Ballard: I am sympathetic to anyone who is being told that they’re different. I want everyone in the game. I’ve said to my team [that] I want every conceivable kind of person on our team, because I want children to find their face.

A sampling of individuals in the Corps of Exploration, the team that powers Ballard’s adventures on Nautilus. Ballard has instructed his team to ensure that at least 55% of people in the corps are women. Credit: Robert Ballard

Eos: How do you encourage people who aren’t necessarily interested in oceanography or are afraid of the ocean to work with you?

Ballard: Our technology. You don’t have to go out on the ocean if you don’t want to. A lot of what we’re able to do now [is] because of the telepresence technology we’ve pioneered.

Eos: And of course, that technology enhances your ability to communicate with the public, as you discuss in the book. You’ve been at the fore of science communication for most of your career. How has academia changed on that front?

Ballard: It’s less hostile. I was severely criticized [for communicating with the public]. Science is the luxury of a wealthy nation. It’s the taxpayers that pay for you. You need to thank them, and you need to say, “Let me tell you what I’m doing with your money and why it’s important!” That’s storytelling and communicating.

Eos: What advice do you have for scientists who communicate with the public?

Ballard: If you can’t tell a middle school kid what you’re doing, you don’t know what you’re doing. Don’t simplify science. Explain it in a way that people can absorb it. Don’t talk down. Don’t talk up. Talk straight across.

—Alka Tripathy-Lang (@DrAlkaTrip), Science Writer

Indian Cities Prepare for Floods with Predictive Technology

Tue, 06/15/2021 - 12:30

Urban floods in India have caused fatalities, injuries, displacement, and enormous economic losses. Cities across the country are now investing in high-tech tools to better model and forecast these natural hazards.

In 2015, the metropolis of Chennai faced devastating floods responsible for the deaths of more than 500 people and displacement of more than a million more. Financial losses reached around $3 billion. The extent of the damage prompted the Indian government to approach scientists to develop a flood forecasting system for the city.

Subimal Ghosh, a professor of civil engineering at the Indian Institute of Technology Bombay, led the efforts. Chennai’s topography makes it particularly vulnerable, Ghosh said. In addition to being a coastal city, Chennai has many rivers and an upstream catchment area from which water flows when there is heavy rainfall.

Forecasting in Chennai

The city’s topography determines where inundation occurs and made the development of a flood forecasting system complex. The system had to include the hydrology of the upstream region; river, tidal, and storm surge modeling; and a high-resolution digital elevation map of the city, Ghosh said.

A consortium of scientists from 13 research institutes and government organizations worked on these separate aspects and together developed India’s first fully automated real-time flood forecasting system, launched in 2019.

“We generated 800 scenarios of flood and tide conditions,” Ghosh said. “When the model receives a weather forecast from the National Centre for Medium Range Weather Forecasting, it will search and find the closest scenario. If there is a chance of flood, the model will predict the vulnerable sites for the next 3 days.”
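
Ghosh's description amounts to a nearest-scenario lookup: an incoming forecast is matched against a library of precomputed flood simulations, and the best-matching scenario supplies the predicted inundation. The snippet below is a minimal Python sketch of that general idea; the scenario values, site names, feature choices, and distance metric are illustrative assumptions, not details of the Chennai system.

```python
import math

# Hypothetical scenario library: each entry pairs forecast features
# (total rainfall in millimeters, tide level in meters) with a
# precomputed list of vulnerable sites. The real Chennai system draws
# on 800 such scenarios; these three entries are invented examples.
SCENARIOS = [
    {"rain_mm": 50.0, "tide_m": 0.4, "vulnerable_sites": []},
    {"rain_mm": 180.0, "tide_m": 0.9, "vulnerable_sites": ["Site A"]},
    {"rain_mm": 320.0, "tide_m": 1.3, "vulnerable_sites": ["Site A", "Site B"]},
]

def closest_scenario(rain_mm, tide_m):
    """Return the precomputed scenario nearest to the incoming forecast."""
    def distance(scenario):
        # Plain Euclidean distance in (rainfall, tide) space; a real
        # system would normalize and weight many more variables.
        return math.hypot(scenario["rain_mm"] - rain_mm,
                          scenario["tide_m"] - tide_m)
    return min(SCENARIOS, key=distance)

if __name__ == "__main__":
    # A forecast arrives from the weather agency (values invented).
    match = closest_scenario(rain_mm=300.0, tide_m=1.2)
    if match["vulnerable_sites"]:
        print("Flood alert for:", ", ".join(match["vulnerable_sites"]))
    else:
        print("No inundation expected.")
```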

Sisir Kumar Dash is a scientist at the National Centre for Coastal Research (NCCR) in Chennai, which is responsible for the operation of the model. “We analyze daily rainfall data, and if there is a probability of inundation, the model is run, and alerts are sent to the state disaster management department,” he said.

Since the tool was implemented, however, Chennai has not experienced heavy rainfall, so it has not been put to a strong test.

Forecasting in Bengaluru

Bengaluru, formerly known as Bangalore, has seen some success with its own flood forecasting system, according to scientists at the Indian Institute of Science (IISc) in Bengaluru and the Karnataka State Natural Disaster Monitoring Centre. The organizations developed the system together.

P. Mujumdar, a professor of civil engineering at IISc who led the work, said that “short-duration rainfall forecasts from various weather agencies were combined with our hydrology model (which has high-resolution digital elevation maps of the city) and information on drainage systems and lakes.”

Real-time rainfall data are obtained through a network of 100 automatic rain gauges and 25 water level sensors set up on storm water drains at various flood-vulnerable areas across Bengaluru. The model, however, is unable to make reliable predictions if the rainfall is sudden and didn’t appear in the forecast, Mujumdar added.

Scaling Up

Raj Bhagat Palanichamy is a senior manager at the Sustainable Cities initiative of the World Resources Institute who was not involved in flood forecasting in Chennai or Bengaluru. He had a sober view of the projects. “A good model is not about the tech or visualization that come with it,” he said. Instead, it’s “about the ability to help in the decisionmaking process, which hasn’t been successfully demonstrated in India.”

Shubha Avinash, scientific officer at the Karnataka State Natural Disaster Monitoring Centre, said the forecasting model was still an effective tool: “The forecast model has served as a better decision support system for administrative authorities in disaster preparedness, postflood recovery, and response actions in heavy rain events faced by the city in recent years.” Avinash oversees the operation of the Bengaluru flood model.

Avinash added that the alerts help city officials take timely, location-specific action. For instance, the city power company (Bangalore Electricity Supply Company Limited, or BESCOM) makes use of the wind speed and direction forecasts to ascertain which areas would have a probability of fallen electric lines and shuts down power supply to ensure safety.

The tool also has a mobile application, Bengaluru Megha Sandesha (BMS), which officials and residents can access for real-time information on rainfall and flooding.

Mujumdar added that “short-duration, high-intensity floods are increasing in Indian cities and happen very quickly (within 15–20 minutes) due to climate change and urbanization. Similar models should be developed for all cities.”

Last year, India’s Ministry of Earth Sciences developed a flood warning system, iFLOWS-Mumbai, for Mumbai, which is likely to be operational this year.

“Cities need to have a proper road map,” Bhagat said, “with not just the model as the target but an integrated response plan (both short term and long term). It should start with the creation and seamless sharing of related data in the public domain.”

—Deepa Padmanaban (@deepa_padma), Science Writer

Vestiges of a Volcanic Arc Hidden Within Chicxulub Crater

Tue, 06/15/2021 - 12:23

About 66 million years ago, an asteroid hurtled through Earth’s atmosphere at approximately 20 kilometers per second—roughly 60 times the speed of sound—and slammed into water and limestone off Mexico’s Yucatán Peninsula, catalyzing the demise of the dinosaurs. The solid rock hit by the asteroid momentarily behaved like a liquid, said University of Texas at Austin geophysicist Sean Gulick. Almost instantaneously, a massive transient crater extended to the mantle, and rocks from 10 kilometers deep rushed to the sides of the hole. They slid back toward the crater’s center and shot 20 kilometers into the air before collapsing outward again. As the rock flowed outward, it regained its strength and formed a peak ring, resulting in mountains encircling the center of the 200-kilometer-wide Chicxulub crater.

In 2016, at a cost of $10 million, scientists participating in International Ocean Discovery Program Expedition 364, in collaboration with the International Continental Scientific Drilling Program, extracted an 835-meter-long drill core from the Chicxulub crater. The drill core includes 600 meters of the peak ring, said Gulick, who serves as co–chief scientist of Expedition 364.

In a recent study published in the Geological Society of America Bulletin, Catherine Ross, a doctoral student at the University of Texas at Austin; Gulick; and their coauthors determined the age of the peak ring granites—334 million years old—and unraveled an unexpected history of arc magmatism and supercontinent reconstruction. The story of these rocks, said Gulick, “turned out to be completely separate from the story of the impact crater.” The tale is told by tiny crystals of zircon—small clocks within rocks—that record various chapters of Earth’s history.

Getting Past a Shocking Impact

As a melt solidifies, said Ross, zirconium, oxygen, and silicon atoms find each other to form zircon. Trace atoms of radioactive uranium easily swap places with zirconium while excluding lead (the product of uranium decay). By measuring both uranium and lead, geochronologists like Ross can calculate when lead began to accumulate in the crystal. In zircons of granitoids, this date typically records when the grain crystallized from the melt.
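
The dating principle Ross describes is standard uranium-lead geochronology: because essentially all lead in the crystal accumulates from uranium decay after crystallization, the measured ratio of radiogenic lead to remaining uranium gives the age. The short Python sketch below works through the uranium-238 to lead-206 version of that relationship; the decay constant is a standard published value, while the example ratio is invented only to illustrate an age near the 334-million-year result and is not taken from the study's data.

```python
import math

# Decay constant of uranium-238 in inverse years (Jaffey et al., 1971).
LAMBDA_238U = 1.55125e-10

def u_pb_age_years(pb206_u238):
    """Age from the measured radiogenic 206Pb/238U ratio:
    t = ln(1 + 206Pb/238U) / lambda_238U."""
    return math.log(1.0 + pb206_u238) / LAMBDA_238U

if __name__ == "__main__":
    # An illustrative ratio of about 0.0532 corresponds to roughly
    # 334 million years, comparable to the crystallization age reported
    # for the peak ring granites.
    age_myr = u_pb_age_years(0.0532) / 1e6
    print(f"{age_myr:.0f} million years")
```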

The drill core granites, however, harbor an incredible amount of damage caused by the impact’s shock wave. “The energy of Chicxulub is equivalent to 10 billion times the size of a World War II era nuclear bomb,” said Gulick. Highly damaged zircons from the peak ring yield the impact age, he said, but “once you go below those highest shocked grains, you more faithfully record the original age and not the impact age.”

The zircons that Ross and colleagues targeted lacked microstructures that indicate shock, said Maree McGregor, a planetary scientist at the University of New Brunswick who was not involved in this study. “A lot of people would overlook this material when they’re trying to understand impact cratering,” she said, because past studies focused heavily on the impact age and not the history of the target rocks.

Ross incrementally bored into 835 individual zircons with a laser, measuring age as a function of depth to differentiate age domains. “Being able to visualize the data and separate [them] in that way is…critical when you’re trying to establish different ages for different regional tectonic events,” said McGregor.

(a) The amalgamation of Pangea. Laurentia, in brown, lies to the north. Gondwana, shown in gray, lies to the south. Numerous terranes, shown in purple, are caught between the two continents. The Yucatán lies in the midst of these terranes, and a pink star indicates the Chicxulub impact site. CA = Colombian Andes; Coa = Coahuila; M = Merida terrane; Mx = Mixteca; Oax = Oaxaquia; SM = Southern Maya. (b) A simplified cross section through Laurentia, the Rheic Ocean, and subduction off the edge of the Yucatán crust. The Rheic Ocean must subduct below the Yucatán to create the arc magmatism responsible for the zircons Ross analyzed. Ga = giga-annum; Ma = mega-annum. Credit: Ross et al., 2021, https://doi.org/10.1130/B35831.1

Ancient Ocean, Volcanic Arc

In addition to the 334-million-year-old Carboniferous zircons, Ross found three older populations. Crystals with ages ranging from 1.3 billion to 1 billion years ago fingerprint the formation of the supercontinent Rodinia. After Rodinia fragmented, 550-million-year-old zircons place the Yucatán crust near the mountainous margins of the West African craton, which was part of the supercontinent Gondwana. Zircons between 500 million and 400 million years old document deformation as these crustal bits moved across the ancient Rheic Ocean toward Laurentia, which today corresponds to the North American continental core, Ross said.

As the Rheic oceanic slab subducted, fluids drove partial melting that powered a volcanic arc on the edge of the Yucatán crust, said Ross. Using trace element geochemistry from individual grains, she found that in spite of their tumultuous impact history, Carboniferous zircons preserve volcanic arc signatures.

This research, said coauthor and geochronologist Daniel Stockli, is very tedious micrometer-by-micrometer work. But ultimately, he said, these finely detailed data illuminate processes at the scale of plate tectonics.

—Alka Tripathy-Lang (@DrAlkaTrip), Science Writer

Magma Pockets Lie Stacked Beneath Juan de Fuca Ridge

Mon, 06/14/2021 - 12:48

Off the coast of the U.S. Pacific Northwest, at the Juan de Fuca Ridge, two tectonic plates are spreading apart at a speed of 56 kilometers per 1 million years. As they spread, periodic eruptions of molten rock give rise to new oceanic crust. Seismic images captured by Carbotte et al. now provide new insights into the dynamics of magma chambers that feed these eruptions.

The new research builds on earlier investigations into magma chambers that underlie the Juan de Fuca Ridge as well as other sites of seafloor spreading. Sites of fast and intermediate spreading are typically fed by a thin, narrow reservoir of molten magma—the axial melt lens—that extends along the ridge at an intermediate depth in the oceanic crust, but still well above the mantle.

Recent evidence suggests that some seafloor spreading sites around the world contain additional magma chambers beneath the axial melt lens. These additional chambers are stacked one above another in the “crystal mush zone,” an area of the actively forming oceanic crust that contains a low ratio of melted rock to crystallized rock.

Beneath the Axial Seamount portion of the Juan de Fuca Ridge (the site of an on-axis hot spot, which is a different tectonic setting compared with the rest of the ridge), a 2020 investigation showed evidence of stacked magma chambers in the crystal mush zone beneath the large magma reservoir that underlies this on-axis hot spot. Carbotte et al. applied multichannel seismic imaging data collected aboard the R/V Maurice Ewing and found geophysical evidence for these stacked chambers along normal portions of the ridge not influenced by the hot spot.

The new imaging data reveal several stacked magma chambers in the crystal mush zone at each of the surveyed sites. These chambers extend along the length of the ridge for about 1–8 kilometers, and the shallowest chambers lie about 100–1,200 meters below the axial melt lens.

These findings, combined with other geological and geophysical observations, suggest that these stacked chambers are short-lived and may arise during periods when the crystal mush zone undergoes compaction and magma is replenished from the mantle below. The chambers do not cool and crystallize in place, but instead are tapped and contribute magma to eruptions and other crust-building processes.

Further research could help confirm and clarify the role played by these stacked chambers in the dynamics of seafloor spreading. (Journal of Geophysical Research: Solid Earth, https://doi.org/10.1029/2020JB021434, 2021)

—Sarah Stanley, Science Writer

Deploying a Submarine Seismic Observatory in the Furious Fifties

Mon, 06/14/2021 - 12:48

On 23 May 1989, a violent earthquake rumbled through the remote underwater environs near Macquarie Island, violently shaking the Australian research station on the island and causing noticeable tremors as far away as Tasmania and the South Island of New Zealand. The seismic waves it generated rippled through and around the planet, circling the surface several times before dying away.

Seismographs everywhere in the world captured the motion of these waves, and geoscientists immediately analyzed the recorded waveforms. The magnitude 8.2 strike-slip earthquake had rocked the Macquarie Ridge Complex (MRC), a sinuous underwater mountain chain extending southwest from the southern tip of New Zealand’s South Island. The earthquake’s great magnitude—it was the largest intraoceanic event of the 20th century—and its slip mechanism baffled the global seismological community: Strike-slip events of such magnitude typically occur only within thick continental crust, not thin oceanic crust.

Fast forward a few decades: For 2 weeks in late September and early October 2020, nine of us sat in small, individual rooms in a Hobart, Tasmania, hotel quarantining amid the COVID-19 pandemic and ruminating about our long-anticipated research voyage to the MRC. It was hard to imagine a more challenging place than the MRC—in terms of extreme topographic relief, heavy seas, high winds, and strong currents—to deploy ocean bottom seismometers (OBSs). But the promise of unexplored territory and the possibility of witnessing the early stages of a major tectonic process had us determined to carry out our expedition.

Where Plates Collide

Why is this location in the Southern Ocean, halfway between Tasmania and Antarctica, so special? The Macquarie archipelago, a string of tiny islands, islets, and rocks, only hints at the MRC below, which constitutes the boundary between the Australian and Pacific plates. Rising to 410 meters above sea level, Macquarie Island is the only place on Earth where a section of oceanic crust and mantle rock known as an ophiolite is exposed above the ocean basin in which it originally formed. The island, listed as a United Nations Educational, Scientific and Cultural Organization World Heritage site primarily because of its unique geology, is home to colonies of seabirds, penguins, and elephant and fur seals.

Yet beneath the island’s natural beauty lies the source of the most powerful submarine earthquakes in the world not associated with ongoing subduction, which raises questions of scientific and societal importance. Are we witnessing a new subduction zone forming at the MRC? Could future large earthquakes cause tsunamis and threaten coastal populations of nearby Australia and New Zealand as well as others around the Indian and Pacific Oceans?

Getting Underway at Last

As we set out from Hobart on our expedition, the science that awaited us helped us set aside doubts about the obstacles in our way. The work had to be done. Aside from the fundamental scientific questions and concerns for human safety that motivated the trip, it had taken a lot of effort to reach this place. After numerous grant applications, petitions, and copious paperwork, the Marine National Facility (MNF) had granted us ship time on Australia’s premier research vessel, R/V Investigator, and seven different organizations were backing us with financial and other support.

After a 6-month delay, the expedition set out for its destination above the Macquarie Ridge Complex. Credit: Hrvoje Tkalčić

COVID-19 slowed us down, delaying the voyage by 6 months, so we were eager to embark on the 94-meter-long, 10-story-tall Investigator. The nine scientists, students, and technicians from Australian National University’s Research School of Earth Sciences were about to forget their long days in quarantine and join the voyage’s chief scientist and a student from the University of Tasmania’s Institute for Marine and Antarctic Studies (IMAS).

Together, the 11 of us formed the science party of this voyage, a team severely reduced in number by pandemic protocols that prohibited double berthing and kept all non-Australia-based scientists, students, and technicians, as well as two Australian artists, at home. The 30 other people on board with the science team were part of the regular seagoing MNF support team and the ship’s crew.

The expedition was going to be anything but smooth sailing, a fact we gathered from the expression on the captain’s face and the serious demeanor of the more experienced sailors gathered on Investigator’s deck on the morning of 8 October.

The Furious Fifties

An old sailor’s adage states, “Below 40 degrees south, there is no law, and below 50 degrees south, there is no God.”

Spending a rough first night at sea amid the “Roaring Forties,” many of us contemplated how our days would look when we reached the “Furious Fifties.” The long-feared seas at these latitudes were named centuries ago, during the Age of Sail, when the first long-distance shipping routes were established. In fact, these winds shaped those routes.

Hot air that rises high into the troposphere at the equator sinks back toward Earth’s surface at about 30°S and 30°N latitude (forming Hadley cells) and then continues traveling poleward along the surface (Ferrel cells). The air traveling between 30° and 60° latitude gradually bends into westerly winds (flowing west to east) because of Earth’s rotation. These westerly winds are mighty in the Southern Hemisphere because, unlike in the Northern Hemisphere, no large continental masses block their passage around the globe.

These unfettered westerlies help develop the largest oceanic current on the planet, the Antarctic Circumpolar Current (ACC), which circulates clockwise around Antarctica. The ACC transports a flow of roughly 141 million cubic meters of water per second at average velocities of about 1 meter per second, and it encompasses the entire water column from sea surface to seafloor.

Our destination on this expedition, where the OBSs were to be painstakingly and, we hoped, precisely deployed to the seafloor over about 25,000 square kilometers, would put us right in the thick of the ACC.

Mapping the World’s Steepest Mountain Range

Much as high-resolution maps are required to ensure the safe deployment of landers on the Moon, Mars, and elsewhere in the solar system, detailed bathymetry would be crucial for selecting instrument deployment sites on the rugged seafloor of the MRC. Because the seafloor in this part of the world had not been mapped at high resolution, we devoted considerable time to “mowing the lawn” with multibeam sonar and subbottom profiling before deploying each of our 29 carefully prepared OBSs—some also equipped with hydrophones—to the abyss.

Mapping was most efficient parallel to the north-northeast–south-southwest oriented MRC, so we experienced constant westerly winds and waves that struck Investigator on its beam. The ship rolled continuously, but thanks to its modern autostabilizing system, which transfers ballast water between giant tanks deep in the bilge to counteract wave action, we were mostly safe from extreme rolls.

Nevertheless, for nearly the entire voyage, everything had to be lashed down securely. Unsecured chairs—some of them occupied—often slid across entire rooms, offices, labs, and lounges. In the mess, it was rare that we could walk a straight path between the buffet and the tables while carrying our daily bowl of soup. Solid sleep was impossible, and the occasional extreme rolls hurtled some sailors out of their bunks onto the floor.

The seismologists among us were impatient to deploy the first OBS to the seafloor but quickly realized that mapping the seafloor was a crucial phase of the deployment. From lower-resolution bathymetry acquired in the 1990s, we knew that the MRC sloped steeply from Macquarie Island to depths of about 5,500 meters on its eastern flank.

Fig. 1. Locations of ocean bottom seismometers are indicated on this new multibeam bathymetry map from voyage IN2020-V06. Dashed red lines indicate the Tasmanian Macquarie Island Nature Reserve–Marine Area (3-nautical-mile zone), and solid pink lines indicate the Commonwealth of Australia’s Macquarie Island Marine Park. Pale blue-gray coloration along the central MRC indicates areas not mapped. The inset shows the large map area outlined in red. MBES = multibeam echo sounding.

We planned to search for rare sediment patches on the underwater slopes to ensure that the OBSs had a smooth, relatively flat surface on which to land. This approach differs from deploying seismometers on land, where one usually looks for solid bedrock to which instruments can be secured. We would rely on the new, near-real-time seafloor maps in selecting OBS deployment sites that were ideally not far from the locations we initially mapped out.
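As an aside on how near-real-time maps can be screened for such sites, the sketch below flags grid cells whose local slope is gentle enough to be a candidate landing spot. It is a minimal illustration assuming a simple gridded depth array, a uniform cell spacing, and an arbitrary slope cutoff; it is not the workflow used aboard Investigator.

```python
import numpy as np

def flag_flat_cells(depth_m, cell_size_m, max_slope_deg=5.0):
    """Return a boolean mask of cells whose local slope is below max_slope_deg.
    depth_m: 2D array of water depths (m, positive down); cell_size_m: grid spacing (m).
    Illustrative only; the slope cutoff is an assumption, not an expedition criterion."""
    dz_dy, dz_dx = np.gradient(depth_m, cell_size_m)  # depth change in m per m
    slope_deg = np.degrees(np.arctan(np.hypot(dz_dx, dz_dy)))
    return slope_deg <= max_slope_deg

# Toy grid: a steep flank (about 22 degrees) containing one small flat terrace.
depth = np.linspace(4000.0, 5500.0, 16)[None, :] * np.ones((16, 1))
depth[6:9, 6:9] = 4800.0
candidates = flag_flat_cells(depth, cell_size_m=250.0)
print(f"{candidates.sum()} of {candidates.size} cells pass the slope test")
```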

However, the highly detailed bathymetric maps we produced revealed extraordinarily steep and hazardous terrain (Figure 1). The MRC is nearly 6,000 meters tall but only about 40 kilometers wide—the steepest underwater topography of that vertical scale on Earth. Indeed, if the MRC were on land, it would be the most extreme terrestrial mountain range on Earth, rising like a giant wall. For comparison, Earth’s steepest mountain above sea level is Denali in the Alaska Range, which stands 5,500 meters tall from base to peak and is 150 kilometers wide, almost 4 times wider than the MRC near Macquarie Island.
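A back-of-the-envelope comparison makes that contrast concrete: treating each range as an idealized symmetric ridge, the average flank gradient is the relief divided by half the width. The short sketch below applies that simplification to the figures quoted above.

```python
import math

def mean_flank_slope_deg(relief_m, width_m):
    """Average flank gradient of an idealized symmetric ridge: rise over half the width."""
    return math.degrees(math.atan(relief_m / (width_m / 2.0)))

# Figures quoted in the text above (a deliberate simplification of real topography).
print(f"Macquarie Ridge Complex: {mean_flank_slope_deg(6000, 40_000):5.1f} degrees")
print(f"Denali (Alaska Range):   {mean_flank_slope_deg(5500, 150_000):5.1f} degrees")
```

Under this crude measure, the MRC's flanks average roughly 17 degrees, against roughly 4 degrees for Denali.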

A Carefully Configured Array

Seismologists can work with single instruments or with configurations of multiple devices (or elements) called arrays. Each array element can be used individually, but the elements can also act together to detect and amplify weak signals. Informed by our previous deployments of instrumentation on land, we designed the MRC array to take advantage of the known benefits of certain array configurations.

The northern part of the array is classically X shaped, which will allow us to produce depth profiles of the layered subsurface structure beneath each instrument across the ridge using state-of-the-art seismological techniques. The southern segment of the array has a spiral-arm shape, an arrangement that enables efficient amplification of weak and noisy signals, which we knew would be an issue given the high noise level of the ocean.
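To illustrate why acting together lets array elements amplify weak signals, the synthetic sketch below aligns and averages noisy traces that share a small coherent pulse: the pulse survives the average, while incoherent noise is suppressed by roughly the square root of the number of elements. All numbers and waveforms here are invented for illustration and are not drawn from the MRC deployment or its processing.

```python
import numpy as np

rng = np.random.default_rng(42)
n_elements, n_samples = 25, 2000
t = np.arange(n_samples)

# A weak coherent pulse (amplitude 0.2) buried in unit-variance noise on each
# element, arriving with a different (here, randomly chosen) moveout per element.
pulse = 0.2 * np.exp(-0.5 * ((t - 1000) / 20.0) ** 2)
moveouts = rng.integers(-50, 50, n_elements)
traces = np.array([np.roll(pulse, m) + rng.normal(0.0, 1.0, n_samples)
                   for m in moveouts])

# Delay-and-sum: undo each element's moveout, then average the aligned traces.
aligned = np.array([np.roll(tr, -m) for tr, m in zip(traces, moveouts)])
stack = aligned.mean(axis=0)

# Noise is measured in a window well before the pulse arrival.
snr_single = pulse.max() / traces[0][:900].std()
snr_stack = pulse.max() / stack[:900].std()
print(f"single element SNR ~ {snr_single:.2f}, stacked SNR ~ {snr_stack:.2f}")
```

With 25 elements, the stacked signal-to-noise ratio comes out roughly 5 times (the square root of 25) better than that of any single trace.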

Our array’s unique location and carefully designed shape will supplement the current volumetric sampling of Earth’s interior by existing seismic stations, which is patchy given that stations are concentrated mostly on land. It will also enable multidisciplinary research on several fronts.

For example, in the field of neotectonics, the study of geologically recent events, detailed bathymetry and backscatter maps of the MRC are critical to marine geophysicists looking to untangle tectonic, structural, and geohazard puzzles of this little explored terrain. The most significant puzzle concerns the origin of two large underwater earthquakes that occurred nearby in 1989 and 2004. Why did they occur in intraplate regions, tens or hundreds of kilometers away from the ridge? Do they indicate deformation due to a young plate boundary within the greater Australia plate? The ability of future earthquakes and potential submarine mass wasting to generate tsunamis poses other questions: Would these hazards present threats to Australia, New Zealand, and other countries? Data from the MRC observatory will help address these important questions.

The continuous recordings from our OBSs will also illuminate phenomena occurring deep below the MRC as well as in the ocean above it. The spiral-arm array will act like a giant telescope aimed at Earth’s center, adding to the currently sparse seismic coverage of the lowermost mantle and core. It will also add to our understanding of many “blue Earth” phenomena, from ambient marine noise and oceanic storms to glacial dynamics and whale migration.

Dealing with Difficulties

The weather was often merciless during our instrument deployments. We faced gale-strength winds and commensurate waves that forced us to heave to or shelter in the lee of Macquarie Island for roughly 40% of our time in the study area. (Heaving to is a ship’s primary heavy weather defense strategy at sea; it involves steaming slowly ahead directly into wind and waves.)

Macquarie Island presents a natural wall to the westerly winds and accompanying heavy seas, a relief for both voyagers and wildlife. Sheltering along the eastern side of the island, some of the crew spotted multiple species of whales, seals, and penguins.

As we proceeded, observations from our new seafloor maps necessitated that we modify our planned configuration of the spiral arms and other parts of the MRC array. We translated and rotated the array toward the east side of the ridge, where the maps revealed more favorable sites for deployment.

However, many sites still presented relatively small target areas in the form of small terraces less than a kilometer across. Aiming for these targets was a logistical feat, considering the water depths exceeding 5,500 meters, our position amid the strongest ocean current on Earth, and unpredictable effects of eddies and jets produced as the ACC collides head-on with the MRC.

Small target areas, deep water, strong currents and winds, and high swells made accurate placement of the seismometers difficult. Credit: Hrvoje Tkalčić

To place the OBSs accurately, we first attempted to slowly lower instruments on a wire before releasing them 50–100 meters above the seafloor. However, technical challenges with release mechanisms soon forced us to abandon this method, and we eventually deployed most instruments by letting them free-fall from the sea surface off the side of the ship. This approach presented its own logistical challenge, as we had accurate measurements of the currents in only the upper few hundred meters of the water column.
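A rough estimate shows why the drop point and the landing point can differ by kilometers: the horizontal offset is approximately the depth-averaged current speed multiplied by the descent time. The descent rate and current speeds in the sketch below are assumed, order-of-magnitude values for illustration, not measurements from the voyage.

```python
def estimated_drift_m(water_depth_m, descent_rate_m_s, mean_current_m_s):
    """Horizontal offset of a free-falling instrument, assuming a constant descent
    rate and a constant depth-averaged current. A crude, illustrative model only."""
    descent_time_s = water_depth_m / descent_rate_m_s
    return mean_current_m_s * descent_time_s

# Assumed values: ~1 m/s free-fall rate (a typical order of magnitude for an OBS)
# and a range of depth-averaged current speeds; none are measured voyage values.
for current_m_s in (0.1, 0.5, 0.9):
    drift = estimated_drift_m(5500.0, 1.0, current_m_s)
    print(f"current {current_m_s:.1f} m/s -> drift ~ {drift:,.0f} m")
```

Even modest currents acting over the hour and a half or so needed to reach 5,500 meters yield offsets of hundreds to thousands of meters, consistent with the drifts we observed.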

In the end, despite prevailing winds of 30–40 knots, gusts exceeding 60 knots, and current-driven drifts in all directions of 100–4,900 meters, we found sufficient windows of opportunity to successfully deploy 27 of 29 OBSs at depths from 520 to 5,517 meters. Although we ran out of time to complete mapping the shallow crest of the MRC north, west, and south of Macquarie Island, we departed the study area on 30 October 2020 with high hopes.

Earlier this year, we obtained additional support to install five seismographs on Macquarie Island itself that will complement the OBS array. Having both an onshore and offshore arrangement of instruments operating simultaneously is the best way of achieving our scientific goals. The land seismographs tend to record clearer signals, whereas the OBSs provide the spatial coverage necessary to image structure on a broader scale and more accurately locate earthquakes.

Bringing the Data Home

The OBSs are equipped with acoustic release mechanisms and buoyancy to enable their return to the surface in November 2021, when we’re scheduled to retrieve them and their year’s worth of data and to complete our mapping of the MRC crest from New Zealand’s R/V Tangaroa. In the meantime, the incommunicado OBSs will listen to and record ground motion from local, regional, and distant earthquakes and other phenomena.

Despite the difficulties, the OBS array is now in place and collecting data, and it has been augmented by a new land-based seismometer array. Credit: Millard Coffin

With the data in hand starting late this year, we’ll throw every seismological and marine geophysical method we can at this place. The recordings will be used to image crustal, mantle, and core structure beneath Macquarie Island and the MRC and will enable better understanding of seismic wave propagation through these layers.

Closer to the seafloor, new multibeam bathymetry/backscatter, subbottom profiler, gravity, and magnetics data will advance understanding of the neotectonics of the MRC. These data will offer vastly improved views of seafloor habitats, thus contributing to better environmental protection and biodiversity conservation in the Tasmanian Macquarie Island Nature Reserve–Marine Area that surrounds Macquarie Island and the Commonwealth of Australia’s Macquarie Island Marine Park east of Macquarie Island and the MRC.

Results from this instrument deployment will also offer insights into physical mechanisms that generate large submarine earthquakes, crustal deformation, and tectonic strain partitioning at convergent and obliquely convergent plate boundaries. We will compare observed seismic waveforms with those predicted from numerical simulations to construct a more accurate image of the subsurface structure. If we discover, for example, that local smaller- or medium-sized earthquakes recorded during the experiment have significant dip-slip components (i.e., displacement is mostly vertical), it’s possible that future large earthquakes could have similar mechanisms, which increases the risk that they might generate tsunamis. This knowledge should provide more accurate assessments of earthquake and tsunami potential in the region, which we hope will benefit at-risk communities along Pacific and Indian Ocean coastlines.
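As a concrete illustration of the strike-slip versus dip-slip distinction above, a focal mechanism's rake angle can serve as a crude classifier: rakes near 0° or ±180° correspond to mostly horizontal slip, while rakes near +90° or -90° correspond to mostly vertical (reverse or normal) slip. The threshold and example rakes below are illustrative assumptions, not criteria used in the study.

```python
def classify_by_rake(rake_deg, dip_slip_threshold_deg=45.0):
    """Crudely classify a focal mechanism by its rake angle.
    Rake near 0 or +/-180 degrees -> strike-slip; near +/-90 degrees -> dip-slip.
    The 45-degree threshold is an illustrative assumption, not a study criterion."""
    folded = abs(rake_deg) % 180.0
    distance_from_strike_slip = min(folded, 180.0 - folded)
    if distance_from_strike_slip >= dip_slip_threshold_deg:
        return "reverse (dip-slip)" if rake_deg > 0 else "normal (dip-slip)"
    return "strike-slip"

# Illustrative rake values only.
for rake in (5.0, 175.0, 85.0, -80.0):
    print(f"rake {rake:+6.1f} deg -> {classify_by_rake(rake)}")
```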

Scientifically, the most exciting payoff of this project may be that it could help us add missing pieces to one of the biggest puzzles in plate tectonics: how subduction begins. Researchers have grappled with this question for decades, probing active and extinct subduction zones around the world for hints, though the picture remains murky.

Some of the strongest evidence of early-stage, or incipient, subduction comes from the Puysegur Ridge and Trench at the northern end of the MRC, where the distribution of small earthquakes at depths less than 50 kilometers and the presence of a possible subduction-related volcano (Solander Island) suggest that the Australian plate is descending beneath the Pacific plate. Incipient subduction has also been proposed near the Hjort Ridge and Trench at the southern end of the MRC. Lower angles of oblique plate convergence and a lack of trenches characterize the MRC between Puysegur and Hjort, so it is unclear whether incipient subduction is occurring along the entire MRC.

Until now, testing this hypothesis has been impossible because of a lack of adequate earthquake data. The current study, involving a large array of stations capable of detecting even extremely small seismic events, will be crucial in helping to answer this fundamental question.

Acknowledgments

We thank the Australian Research Council, which awarded us a Discovery Project grant (DP2001018540). We have additional support from ANSIR Research Facilities for Earth Sounding and the U.K.’s Natural Environment Research Council (grant NE/T000082/1) and in-kind support from Australian National University, the University of Cambridge, the University of Tasmania, and the California Institute of Technology. Geoscience Australia; the Australian Antarctic Division of the Department of Agriculture, Water and the Environment; and the Tasmania Parks and Wildlife Service provided logistical support to install five seismographs on Macquarie Island commencing in April 2021. Unprocessed seismological data from this work will be accessible through the ANSIR/AuScope data management system AusPass 2 years after the planned late 2021 completion of the experimental component. Marine acoustics, gravity, and magnetics data, both raw and processed, will be deposited and stored in publicly accessible databases, including those of CSIRO MNF, the IMAS data portal, Geoscience Australia, and the NOAA National Centers for Environmental Information.

Author Information

Hrvoje Tkalčić (hrvoje.tkalcic@anu.edu.au) and Caroline Eakin, Australian National University, Canberra; Millard F. Coffin, University of Tasmania, Hobart, Australia; Nicholas Rawlinson, University of Cambridge, U.K.; and Joann Stock, California Institute of Technology, Pasadena

Observations from Space and Ground Reveal Clues About Lightning

Fri, 06/11/2021 - 12:47

Capturing the fleeting nature and order of lightning and energy pulses has been a goal of many studies over the past 3 decades. Although bolts of white lightning and colorful elves (short for emissions of light and very low frequency perturbations due to electromagnetic pulse sources) can be seen with the naked eye, the sheer speed and sequence of events can make differentiating flashes difficult.

In particular, researchers want to understand the timing of intracloud lightning, elves, terrestrial gamma ray flashes (TGFs), and energetic in-cloud pulses. Do all of these energy pulses occur at the same time, or are there leaders, or triggers, for lightning events?

This video is related to new research that uncovers the timing and triggering of high-energy lightning events in the sky, known as terrestrial gamma ray flashes and elves. Credit: Birkeland Centre for Space Science and MountVisual

In a new study, Østgaard et al. observed lightning east of Puerto Rico. They combined optical and gamma ray monitoring from space with radio frequency monitoring from the ground to determine the sequence of an elve produced by electromagnetic waves from an energetic in-cloud pulse, the optical pulse from the hot leader, and a terrestrial gamma ray flash.

The Atmosphere–Space Interactions Monitor (ASIM), mounted on the International Space Station, includes a gamma ray sensor along with a multispectral imaging array. Its optical measurements captured the lightning, its ultraviolet measurements captured the elves, and its gamma ray instruments measured the TGFs. In Puerto Rico, the researchers recorded low-frequency radio emissions from the lightning.

The team found that by using this combined monitoring technique, they could observe high-resolution details about the timing and optical signatures of TGFs, lightning, and elves. They found that the TGF and the first elve were produced by a positive intracloud lightning flash and an energetic in-cloud pulse, respectively. Just 456 milliseconds later, a second elve was produced by a negative cloud-to-ground lightning flash about 300 kilometers south of the first elve.

This combination of observations is unprecedented. It suggests that such coordinated monitoring will be the way forward for lightning and thunderstorm research. (Journal of Geophysical Research: Atmospheres, https://doi.org/10.1029/2020JD033921, 2021)

—Sarah Derouin, Science Writer
