Feed aggregator

Growing Equity in City Green Space

EOS - Thu, 06/17/2021 - 12:13

This is an authorized translation of an Eos article.

As the COVID-19 pandemic stretched into the summer months of 2020, people around the world began flocking to outdoor green spaces in and around cities. For some, safe, socially distanced relief from lockdown meant picnicking in nearby parks, walking through tree-lined neighborhoods, hiking trails through mountains and forests, or simply taking in fresh air in their own backyards. Not all city residents, however, have the same access, geographically and historically, to nearby green space.

This tumultuous period has “made very clear the importance of having safe green space in every neighborhood,” said Sharon J. Hall, who studies the intersection of ecosystem management, environmental quality, and human well-being at Arizona State University (ASU) in Tempe. “We know that nature brings mental health benefits, physical benefits, spiritual and community connection, and all kinds of recreational and cultural benefits, but not all people feel the same way about nature. There are populations that have really long histories, problems, and challenges with nature and what nature means to them.”

Developing new urban green spaces (areas covered by grass, trees, shrubs, or other vegetation) and the infrastructure that works alongside them is a priority in many cities these days. Experts agree, however, that the solution is more complicated than simply planting more trees in certain spots. Done right, adding new green space in and around our cities can improve human health, revitalize ecosystems, and boost a region’s economy. Done wrong, it can worsen existing socioeconomic and ecological problems or even create new ones.

Urban Forests Benefit City Residents

Green spaces in and around cities, collectively known as urban forests, can mitigate regional and local flooding from storms, reduce water scarcity, improve air and water quality, regulate temperature, and aid soil nutrient cycling, all while sequestering carbon.

Every tree in that forest matters. With all their steel, asphalt, and concrete, cities typically average a few degrees warmer than the undeveloped land around them, a phenomenon known as the urban heat island effect. The same phenomenon occurs at the neighborhood scale to a degree that depends on the space.

“Trees are a really major factor in reducing heat in neighborhoods,” explained Fushcia-Ann Hoover, an urban hydrologist whose research is grounded in environmental justice. She is a postdoctoral researcher at the National Socio-Environmental Synthesis Center in Annapolis, Maryland. “If a tree shades part of your house or much of your neighborhood, it will be cooler than neighborhoods where there isn’t a single tree on the block.”

Moreover, “there are cultural benefits to having green spaces in and around your community,” said John-Rob Pool, “for leisure and recreation, which has been shown to improve people’s health and well-being, and for creating streets that are more livable and accessible.” Pool is the implementation manager of Cities4Forests, an international program that helps cities conserve, manage, and restore their forests.

Combined, these ecosystem services “are the broadest benefits of green space,” said Ayushi Trivedi, gender and social equity research analyst at the World Resources Institute, “but to have a socially equitable impact, they need to be distributed in a way that all communities draw benefits from them. This is especially important for vulnerable communities (marginalized communities, low-income populations, racial minority communities) living in neighborhoods that are more exposed to warming, stormwater flooding, and pollution.”

Where Are the Green Spaces?

Environmental justice holds that all people have a right to clean, safe land, water, and air; it requires environmental policy that is free of discrimination and bias and grounded in mutual respect and justice for all people. When evaluating whether all of a city’s residents have equitable access to urban forests, the first question to answer is: Where does the city have green space? To address this at a citywide scale, most researchers collect satellite or aerial imagery, which can resolve features only down to a certain scale, or conduct laborious on-the-ground surveys.
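
One standard way to map green cover from such imagery is a vegetation index computed from red and near-infrared reflectance. The article does not name a specific method, so the index choice (NDVI), the 0.3 vegetation threshold, and the toy reflectance arrays in this Python sketch are illustrative assumptions, not details from any of the studies discussed:

    # Minimal sketch: estimate fractional green cover from two satellite bands
    # using the normalized difference vegetation index,
    # NDVI = (NIR - red) / (NIR + red).
    import numpy as np

    def ndvi(red: np.ndarray, nir: np.ndarray) -> np.ndarray:
        """Return per-pixel NDVI values in [-1, 1]."""
        return (nir - red) / (nir + red + 1e-9)  # epsilon avoids division by zero

    # Synthetic 3 x 3 reflectance "scenes" standing in for real imagery.
    red = np.array([[0.1, 0.4, 0.1], [0.3, 0.1, 0.1], [0.4, 0.4, 0.1]])
    nir = np.array([[0.6, 0.4, 0.7], [0.3, 0.5, 0.6], [0.4, 0.5, 0.8]])

    green = ndvi(red, nir) > 0.3  # pixels above the threshold count as vegetated
    print(f"Estimated green cover: {green.mean():.0%}")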

Because of the limitations of these data collection methods, most studies analyzing the distribution of urban green space focus on just one or two cities at a time, which can make it difficult to analyze nationwide trends. “The amount of work that it takes to generate an urban forest cover map of a single city is so incredible that doing anything at a larger scale can be pretty difficult,” explained Shannon Lea Watkins, a public health researcher focused on health equity at the University of Iowa. “We know that the urban forest is different across the country because the ecosystem is different. So we would expect a different amount of tree cover in Philadelphia than in Tulsa.”

Watkins and her colleagues gathered many individual studies into a meta-analysis that combined data from both leafy and sparsely forested U.S. cities. Trivedi said such methods can help researchers and urban planners identify which groups benefit most from an existing or planned green space. “What is their race? Where do they live? What [relationships] make up their household? If you disaggregate by social demographic characteristics, you can see what the social implications might be. Whether it’s a mapping or a statistical study, simply disaggregating your data and then looking at the patterns that emerge…will be really useful in telling you what the gaps are, who benefits most, who bears the most costs, and who is most at risk.”
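
As a purely hypothetical illustration of that disaggregation step, the sketch below groups invented census tract records by income band and compares two green space metrics across groups; every column name and value is made up for the example:

    import pandas as pd

    # Hypothetical data: each row is a census tract.
    tracts = pd.DataFrame({
        "income_band": ["low", "low", "high", "high"],
        "pct_tree_canopy": [8.0, 11.0, 27.0, 31.0],          # % of land under canopy
        "park_within_10min_walk": [0.35, 0.40, 0.80, 0.75],  # share of residents
    })

    # Disaggregating by income exposes the gap Watkins describes below:
    # higher income tracks with more urban forest cover and park access.
    print(tracts.groupby("income_band")[
        ["pct_tree_canopy", "park_within_10min_walk"]].mean())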

For example, “in most studies there is a demonstrated pattern between income and urban forest cover; that is, higher income is associated with more urban forest cover,” Watkins explained. What’s more, across the country, racial inequity in urban forest cover is greater on public land than on private land: Private residences with tree-lined yards and streets are more common in higher-income, predominantly white neighborhoods, and the same holds to an even greater degree for publicly owned parks and wooded areas.

The Type of Green Space Matters

Once you know where the urban forests are, it helps to analyze what form they take, because not all types of green space provide the same benefits to nearby residents. Hoover, who coauthored a recent paper examining race and privilege in green spaces, explained that “[historically] marginalized neighborhoods have less green space, and the green space they do have isn’t as high quality either.”

Parks, for example, look very different in urban areas that are more heavily policed, which tend to be neighborhoods with more people of color, more people experiencing housing insecurity, or more people with lower incomes. “If a tree blocks a police camera’s line of sight, for example, the tree is cut down or pruned back so drastically that it no longer provides effective shade” or cools the area, Hoover said.

In these neighborhoods, “parks aren’t necessarily made to be places where people sit or relax,” Hoover explained. “They’re pass-through places. I think that also reflects the way people experiencing housing insecurity are criminalized and the way cities often respond to people experiencing housing insecurity by wanting to keep them from setting up an encampment or being able to lie down on a bench.”

Vacant lots that have been renaturalized can contribute green space, said Theodore Lim, but the benefits of that space for the surrounding community will be far less strategic than those of a planned park. “One develops under conditions of growth and proactive planning, and the other develops under conditions of decline and reactive planning,” he explained. “You’re often opportunistic about where you can get ecosystem services.” Lim studies the connections among land, water, infrastructure, and people in sustainability planning at Virginia Polytechnic Institute and State University in Blacksburg.

“In cities, I think we need to be more holistic in how we think about green space,” Hall said. “Green spaces can happen anywhere…. It’s these accidental in-between spaces that are sometimes the most creative ways of thinking about green space.”

Whether proactive or reactive, for green spaces to benefit a community, “urban green spaces should be designed case by case according to the climate, geography, soil conditions, and water supply needs of that area,” said Kimberly Duong, a water resources engineer and executive director of Climatepedia. “In an agricultural region, for example, a sustainable green space would probably depend on seasonal precipitation cycles. In a drought-prone region, a green space might also consider water retention strategies.”

“I was designing a green street for [an area near the University of California, Los Angeles] that incorporates sustainability concepts, stormwater capture concepts, and green space concepts,” Duong said. “That region has a lot of clay soil,” which meant installing permeable pavement wasn’t an option because water would penetrate the pavement but not the ground beneath it. “But for other regions with sandier soil, where water can be absorbed more easily, permeable pavement could be a strategy for a parking lot [to capture stormwater on-site].”

“There are strategies at many different geographic scales,” Duong said, from rain barrels to sustainable drainage systems and from rain gardens to watersheds.

Community Ownership Is Key

Green spaces must be intentionally designed to meet needs the community itself has identified if residents are to feel comfortable using them. Such a design strategy requires engagement and dialogue between communities and project managers.

“People, theoretically, can have the same amount of access to acres of park space but still not feel welcome or safe in that park space,” Lim said. “It’s about recognizing that there are systemic issues that shape people’s experiences and that have really historical roots.”

For example, “a white man might head off into the woods alone and get all kinds of spiritual benefits from being out there by himself,” Hall said. But for people who have been made to feel uncomfortable or unsafe outdoors because of their gender, race, or another aspect of their identity, she continued, that historical experience can be very different.

There are also positive historical relationships to consider, she added. “You might think about Latino populations living in the Southwest; the desert might hold a different meaning for them if they have a history with the desert through their families and across generations.”

“When a city, for example, plans a new train station, it engages with residents about where to put it, who needs it, and whether residents will use it if it’s placed here or there,” Pool said. “Nature-based solutions should be treated like any other infrastructure and deserve the same participatory approach during the planning stages. I think the reason this isn’t yet as common is that it’s an emerging field.”

Many Detroit residents, for example, expressed the belief that the city had neglected or mismanaged the green spaces and trees in their neighborhoods. Because of that historical precedent, people were wary when a local nonprofit offered them free trees to plant in front of their homes. Despite wanting greener neighborhoods, a quarter of residents declined the new trees, anticipating that the city would neglect that green space as well.

“There’s not going to be a one-size-fits-all approach” to creating new urban green spaces or to ensuring equity in those spaces, Hall said. “What’s going to be good for pollinators or people in Washington, D.C., may be very different from what’s going to work in the Sonoran Desert in Phoenix. And even then, Phoenix’s history is very different from the history of Albuquerque or Los Angeles. The approaches will need to be locally determined, in terms of what kinds of plants you’re going to plant and what’s going to be really good for a community’s history.”

What Community-Driven Solutions Look Like

Say you’re a geoscientist with an idea for improving an urban neighborhood by adding more green space, and you want the project to be a participatory process. How do you get the community on board? “Nobody really trains you on how to be a community-based researcher. You learn by doing it,” said Marta Berbés-Blázquez. “You scan the news, you scan Facebook, you start following activists in a region, you start figuring out who’s who. That takes a bit of time, and a lot of it is very subtle.” Berbés-Blázquez studies the human dimensions of socioecological transformations in rural and urban ecosystems as an assistant professor at ASU.

“I might go to a random community event,” she continued. “I might go to a webinar or attend community meetings. And I would sit in the background and listen and not speak.” By doing this, a researcher learns which issues top a community’s agenda, who the key leaders are, and which historical or systemic problems the community faces.

After so many residents declined the free trees, for example, that Detroit nonprofit changed its approach to include communities in decisionmaking about which types of trees to plant and where to plant them. It also expanded its youth employment program to maintain the trees and teach residents about them.

“I think the tendency is for geoscientists to focus on the data analysis,” Duong said, “and then to point to it and say, ‘This makes sense for scientific purposes. We have this much of a water deficit; therefore, carrying out this strategy [would provide] 200% of the amount of water we need.’” Such analyses are necessary ingredients of any green infrastructure project, but other considerations fall beyond the scope of a geoscientist’s expertise. “That doesn’t take into account the political considerations, the budget required, the maintenance required, or the disruption to the community during construction. Those are nontrivial components of implementing green space projects.”

By stepping back and learning about the community before launching a project, a geoscientist can assess neighborhood-specific risks, such as attractive new green spaces driving up rents, and put measures in place to protect residents from harm. “Having those mechanisms in place has shown that you can reduce some of these green gentrification crises that are happening,” Trivedi said.

Consider, for example, Washington, D.C.’s 11th Street Bridge Park project, a recreational bridge park that will span the Anacostia River at Wards 7 and 8, areas that are majority Black and have lower incomes than the D.C. average. Green infrastructure projects in neighborhoods with similar demographics have in the past created gentrification crises that ultimately harmed residents. Residents of Wards 7 and 8 initially rejected the development of a bridge park in their neighborhoods for exactly those reasons. In response, project managers partnered with community leaders to create equity-centered development strategies: establishing community land trusts, safeguarding investments in affordable housing, providing training and jobs for local residents, and investing in small local businesses.

The process of codeveloping solutions isn’t easy, Berbés-Blázquez said, and the structure of academic research, such as grant cycles and tenure clocks, can often get in the way. “The speed at which projects have to happen, whether academically or politically, doesn’t necessarily allow enough time to foster true, genuine, trusting relationships among the different actors involved,” she said. “Don’t bring your own agenda, but if you have one, make it very clear. And then be patient” and be willing to acknowledge when you make mistakes.

Community-led organizations focused on greening cities are at work across the country, Hoover said, and each knows how scientists can best help it achieve its goals. “I would really encourage other scientists, planners, practitioners, and researchers to start listening and reaching out,” she said, “to learn and really push past the boundaries of their own fields and their own assumptions within their science.”

“Environmental justice is not just equitable distribution of resources but also equitable access to decisionmaking,” Watkins said.

—Kimberly M. S. Cartier (@AstroKimCartier), Science Writer

This translation by Mariana Mastache Maldonado (@deerenoir) was made possible by a partnership with Planeteando.

Fingerprints of Jupiter Formation

EOS - Wed, 06/16/2021 - 17:16

Gas giants exert a major control on solar system architecture. Observations of disks, such as those made by the Atacama Large Millimeter/submillimeter Array (ALMA), reveal early stages of planet formation in faraway stellar systems. But the timing of giant planet formation processes can best be traced via the detailed investigations possible in our own solar system. A commentary by Weiss and Bottke [2021] closely examines Jupiter’s formation, using clues preserved in the meteorite record. They find current data are consistent with an initial “slow growth” phase for Jupiter that created separate isotopic reservoirs for meteorite parent bodies. Subsequently, paleomagnetic data suggest rapid dissipation of the nebular field, most easily explained by rapid (greater than 30 times) growth of Jupiter, which supports a core accretion physical model for giant planets. The case is not closed, however. Weiss and Bottke propose further observations and physical modeling to establish the pacing of Jupiter’s formation and its effects on the architecture of our solar system.

Citation: Weiss, B. & Bottke, W. [2021]. What Do Meteorites Tell Us About the Formation of Jupiter? AGU Advances, 2, e2020AV000376. https://doi.org/10.1029/2020AV000376

—Bethany Ehlmann, Editor, AGU Advances

Why Contribute to a Scientific Book?

EOS - Wed, 06/16/2021 - 12:38

AGU believes that books still play an important role in the scientific literature and in professional development. As part of our publications program, we continue to publish traditional books but are also seeking to innovate in how we collate, present, and distribute material. However, we are aware that some scientists are skeptical about the value of being involved in book projects, either as a volume editor or as a chapter author. One common concern is that the process of preparing books for publication is much slower than for journals. There is also a perception that book content is not as easily discoverable as journal articles. Some people may feel that the era of the book has passed now that technology has changed the ways in which we find and interact with written material. Here we respond to some of the questions and concerns that we frequently hear and explain the advantages of choosing AGU-Wiley for a book project.

Why should I choose to publish a book with AGU?

AGU’s publications program has a strong, authoritative reputation in the Earth and space sciences. This includes a six-decade history of publishing books, with the long-standing Geophysical Monograph Series being the best-known part of the collection. Publishing with AGU is a mark of quality. All individual book chapters undergo full peer review to ensure quality and rigor, and entire book manuscripts undergo a full assessment by a member of the Editorial Board before being approved for publication.

Does AGU only publish scientific monographs?

We offer a home for books on all topics in the Earth, environmental, planetary, and space sciences, as well as publications that support the geoscience community on topics such as careers, the workforce, and ethics. We publish scientific research, advanced-level textbooks, reference works, field guides, technical manuals, and more. We want our books to be relevant and useful for the twenty-first century classroom, laboratory, and workplace, so we are open to ideas for different types of books and to exploring new ways of publishing material.

What are the advantages of edited books over journal special collections?

While there are some similarities between a special collection of articles in a journal on a particular theme and an edited book, we believe that the book format offers a few distinct advantages. First, books give more space and freedom. You can tell a more complete story in a book by organizing chapters into a deliberate order that presents a narrative arc through all aspects of the topic. Second, books are a great medium for interdisciplinary topics. You can pull together a mixture of material that may not have a comfortable home in a single journal. Participating in a book project is thus an opportunity to go to the borders of your discipline and collaborate with colleagues from other disciplines, including in the social sciences.

What kind of experience will I have as a book editor with AGU-Wiley?

There are staff in the AGU Publications Department and at Wiley dedicated to AGU’s books program. We are committed to offering a great experience to volume editors, chapter authors, and peer reviewers, and to producing books of the highest quality. In addition, the AGU Books Editorial Board, comprising members of the scientific community, is on hand to support editors throughout the publication process. The editors (or authors, if an authored volume) of each book are assigned a member of the Editorial Board to offer 1:1 interaction, feedback, and advice whether you are a first-timer or have prior experience.

How can people find and cite my book content?

Book chapters are much more discoverable these days. AGU’s books are hosted on Wiley Online Library, where whole books and book chapters come up in search results alongside journal articles. Not everyone needs or wants a whole book, so individual book chapters can be downloaded as PDF files. Each book chapter has its own unique DOI, making it more discoverable and citable. AGU’s books are also indexed by the major indexing services, such as Web of Science, SCOPUS, and the SAO/NASA Astrophysics Data System, enabling the tracking of citations.

How will my book get promoted?

AGU is a network of 130,000 Earth and space science enthusiasts worldwide. Once a book is published, it will be promoted to the whole network, as well as to targeted subject groups, via blogs, social media, newsletters, and more. At AGU Fall Meeting, your published book will be on display where as many as 30,000 people have the chance to see it. In addition, Wiley, as an international publisher, has a global network for marketing and sales.

How are AGU and Wiley adapting to changes in publishing?

Scholarly books tend to be slightly behind the curve in terms of new technologies, but AGU, in partnership with Wiley, is testing new publishing models in response to changes in the landscape of scholarly publishing, science funding, and user demand. For example, in 2020 we piloted the Open Access publishing model for two books (Carbon in Earth’s Interior and Large Igneous Provinces) and are exploring ways to make this a publishing option for anyone with the funding for open access. We are also currently exploring a chapter-by-chapter publication model to make book content available faster.

What are the professional and personal benefits of doing a book?

There is a perception that books are written by people at the end of their careers and that they do not offer the same advantages as journal articles in terms of scholarly value. However, anyone can contribute to a book, and it counts as part of tenure or promotion applications in many places. Acting as an editor of a book is a chance to work with many other scientists, both those writing chapters and those acting as peer reviewers. This is an opportunity to widen your professional network; to work with new people, perhaps from different disciplines; and to make your name more recognizable, including by those who are not directly following your work. Editing a book can also be regarded as service to your scientific community. Your book may become the definitive book in the field and define the rest of your career.

We welcome ideas for new books at any time. Please contact me or a member of the Editorial Board, and we will be happy to discuss.

―Jenny Lunn (jlunn@agu.org; 0000-0002-4731-6876), Director of Publications, AGU

Book Publishing in the Space Sciences

EOS - Wed, 06/16/2021 - 12:38

AGU has been publishing books for over six decades on topics across the Earth and space sciences. The four members of the AGU Books Editorial Board from space science disciplines decided to look at our own backfile of books and at other scientific publishers to better understand the landscape of book publishing in these disciplines and use these insights to form a plan for expanding our range of space science books over the coming years.

Space science books in AGU’s portfolio

Our investigation started with our own backfile, focusing on AGU’s flagship and longest-running series, the Geophysical Monograph Series. From its launch in 1956 to the end of 2020, 256 volumes were published, of which 63 were related to space science topics, with six volumes (vols. 1, 2, 7, 141, 196, and 214) combining both Earth- and space-related topics in the same volume.

For the next step of our analysis, we divided the space sciences into major topic areas according to four AGU Sections – Aeronomy, Magnetospheric Physics, Planetary Sciences, and Solar & Heliospheric Physics. We decided to combine Aeronomy and Magnetospheric Physics into a single category of Geospace.

Over the entire lifetime of the Geophysical Monograph Series, the proportions of space science books with topics exclusively focused on geospace, planetary sciences, and solar/heliospheric physics are 51%, 7%, and 3.5%, respectively. The remaining 38.5% are books with an interdisciplinary character, that is, they combine the major topic areas in the same volume. (The six books that combine Earth- and space-related topics are not included in these numbers.)
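
For readers who want to check those shares, the arithmetic below reproduces them from the whole-number counts they imply (29 geospace, 4 planetary, 2 solar/heliospheric, and 22 interdisciplinary of the 57 exclusively space-focused volumes); the counts themselves are back-calculated assumptions, not figures quoted in this piece:

    # Assumed (back-calculated) counts of the 57 space science monographs
    # remaining after the six combined Earth-and-space volumes are excluded.
    counts = {"geospace": 29, "planetary": 4,
              "solar/heliospheric": 2, "interdisciplinary": 22}
    total = sum(counts.values())  # 57
    for topic, n in counts.items():
        print(f"{topic}: {n / total:.1%}")  # 50.9%, 7.0%, 3.5%, 38.6%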

The Venn diagram focuses on the 12 books published in the past decade (2011–2020). (Not included are two books, vols. 196 and 214, that combined space and Earth science topics.)

Books with a geospace focus are the largest segment (vols. 199, 201, 215, 220, 244, and 248), and six additional geospace books have an interdisciplinary focus (vols. 197, 207, 216, 222, 230, and 235).

No book has been published in the Geophysical Monograph Series solely focused on planetary or solar/heliospheric topics during the decade, although a five-volume Space Physics and Aeronomy collection was published in spring 2021 outside the bounds of this analysis.

How do we compare to other publishers?

Of course, AGU is not the only society or publisher producing books in the space sciences, so we decided to look at the distribution across major scholarly publishers using the same topic division as above. We made our best efforts to survey the enormous book market using various web tools, but we cannot guarantee the accuracy of these statistics. Also note that AGU entered a publishing partnership with John Wiley & Sons in 2013, thus we are treating Wiley and AGU as one entity in the charts below.

The charts suggest that certain publishers have developed a reputation in particular fields, with existing series or collections that draw return book editors and authors and attract new people. It is important that an editor or author find the most appropriate publishing partner for their book project, one that can offer the production services, marketing, and distribution they are looking for. We hope that scientists across all these disciplines will consider AGU-Wiley, as we have a lot to offer.

Where are we heading?

AGU/Wiley have published on average 1 to 2 books per year in the space sciences over the last four decades, with a spread of 0 to 4 books per year. This is a modest publication rate, which we would like to increase.

The graph shows the current distribution (orange) shifted by two books per year to a proposed distribution (yellow). During this decade, we would like to see on average 3 to 4 books published per year, but occasionally even more if possible.

We would also like this growth to be more balanced across all fields in the space sciences, to move away from the skew toward geospace books seen in the past. Adding just one book per year each on planetary and solar/heliospheric topics would already achieve our goal. Hence, we want to grow in these areas, and we are putting enhanced effort into our outreach to the planetary and solar communities, which might help shift the distribution as indicated in the graph.

How to achieve our goals?

Overall, we wish to engage the space science communities in publishing more books with AGU-Wiley. To achieve our goals, we are actively engaging in outreach by proposing book titles to prospective editors and authors. Nonetheless, our backbone is still unsolicited approaches from scientists with new book ideas. For example, we hope that the new planetary and solar spacecraft missions currently in operation (as well as those being planned) may yield results, analysis, and reviews that could be suitable for publication in book format.

Although we have numerical goals for growing the number of space science books in our portfolio, our focus remains on quality over quantity. After all, books should be useful and in demand by the science community.

We want to encourage scientists who have never considered publishing a book. The process of organizing and writing or editing a book is very rewarding, and it expands one’s network of scientific collaboration and enhances one’s reputation. Please contact any member of the Editorial Board from the space sciences directly or email books@agu.org if you have ideas for new books.

―Andreas Keiling (keiling@berkeley.edu; 0000-0002-8710-5344), Bea Gallardo-Lacourt (0000-0003-3690-7547), Xianzhe Jia (0000-0002-8685-1484), and Valery Nakariakov (0000-0001-6423-8286), Editors, AGU Books

New Editorial Board for AGU Books Takes Inventory

EOS - Wed, 06/16/2021 - 12:37

AGU has been producing books as part of its publications program for over six decades. The goal has been to produce volumes on all topics across the Earth and space sciences that are a valuable resource for researchers, students, and professionals. It is quite a challenge to cover such an enormous scientific space and breadth of audiences. So how have we been doing?

A new Editorial Board for AGU Books was established in 2020. In our first year, we have taken time to look back on our historic backfile of books and evaluate how the program has grown and changed over time. We wanted such data and analysis to shape realistic goals for the books program going forward and focus our outreach to the scientific community for new book ideas. Here we give an overview of all books in our portfolio; a separate piece focuses on books in the space sciences.

Establishment and growth of AGU Books

AGU published its first book in 1956, Volume 1 of the Geophysical Monograph Series (GMS). The GMS remains AGU’s primary and flagship series for scientific work, and is the source of the data presented here, but there are two other active series – Special Publications and the Advanced Textbooks Series – as well as more than a dozen archive series.

From its launch in 1956 to the end of 2020, 256 volumes were published in the GMS.

As shown in the chart on the right, fewer than ten books were published per decade during the first three decades.

This rose to more than 60 books per decade during the three most recent decades, an average of 6 to 7 books per year.

The two main branches: Earth and Space

To gain further insight, we sorted these books into “Earth sciences” or “Space sciences,” the two main branches represented in AGU. This simple division makes sense because there is a significant amount of interdisciplinary research within Earth-related topics and within space-related topics, but far less across the two.

Of the 256 monographs, 77% were related to Earth science topics and 23% to space science topics, with just six volumes combining both (these are not included in the count). During the past decade (2011–2020), the proportions were 80% and 20%, respectively, as shown in the chart on the right.

For comparison, we looked at the proportion of AGU members according to their primary Section affiliation as of the end of 2020 (excluding cross-cutting sections such as Education, Science & Society, and Earth & Space Science Informatics) and found that the proportion was 88% (Earth) and 12% (Space).

The difference in member affiliation between the two branches largely accounts for the difference in book numbers.

Of course, this dataset is small and limited, just showing books published in one AGU series; it does not reflect all publications by AGU members, such as journal articles and books published with other publishers.

However, we focus this analysis on the past and future of the Geophysical Monograph Series as we want to ensure that it represents the AGU community and produces books that meet their professional needs.

A closer look at the decadal distribution of books reveals further trends. Earth science started with an average of about one book per year until the 1980s, after which the number steadily rose until the 2010s, reaching an average of six books per year. Currently, we are at about five books per year, although the actual number of books per year has varied between zero and eleven. In comparison, the space sciences only started to pick up the publication rate in the 1980s, with their heyday in the 1990s at an average of two books per year. In recent decades, most years we published at least one space science book, with the highest in any single year being four books. (The six volumes that contained both Earth and space topics were counted twice here, once for each branch.)

To go a little deeper, we analyzed books by topic within the “Earth” and “Space” categories. This was not a straightforward exercise, as many books straddle multiple topics (for example, should a book about earthquakes be categorized as seismology or natural hazards?) so we had to make certain choices about best fit for primary topic and ensure consistency in this decision-making process. While such simplistic categorization is a little problematic, the results still reveal some trends in the spread of topics covered. The chart shows the breakdown of topics within the Earth sciences; the accompanying piece delves a little deeper into the “space physics” category.

Looking to the future

We hope to maintain, or even grow, the rate of AGU’s book publications. We are looking for new book ideas and for scientists interested and willing to be volume editors or volume authors. We also want to encourage scientists who have never considered the book format as a way to communicate their science. Find out more about the professional advantages of producing a book and the experience of doing this with AGU and Wiley. Please contact us if you have ideas for new books. The AGU Books Editorial Board is here to help and encourage. Contact a member of the Editorial Board directly or email books@agu.org.

―Andreas Keiling (keiling@berkeley.edu; 0000-0002-8710-5344), Editor in Chief, AGU Books; and Jenny Lunn (0000-0002-4731-6876), Director of Publications, AGU

A Life at Sea: A Q&A with Robert Ballard

EOS - Wed, 06/16/2021 - 12:36

Robert Ballard—the man who found Titanic—has explored the ocean for more than 60 years with ships, submersibles, and remotely operated vehicles. Now, through the Ocean Exploration Trust, he continues to search the seas for archaeological wonders, geological oddities, and biological beasts.

His new memoir, Into the Deep, takes readers on a vivid tour of his adventures while diving into his struggles—and triumphs—as he navigated academia and the ocean without knowing he was dyslexic.

This conversation has been edited for length and clarity.


Eos: How and when did you find out you were dyslexic?

Ballard: I didn’t know I was dyslexic until I was around 62 years old. I’m 79. There’s a beautiful book called The Dyslexic Advantage. I have an audio version, which is much easier for dyslexics. When I listened to it, I cried because it explained me to me for the first time in my life. I knew I was different, and now I understand that difference.

Eos: What advice do you have for those who struggle with dyslexia?

Ballard: The educational experience in many ways for a dyslexic is, How do you survive it? How do you cross through this desert? And fortunately, I had oases along the way, which were teachers that bet on my horse. The real key is surviving the teachers who would rather put you on drugs or would rather not have you in their classroom.

Eos: What can parents do?

Ballard: Early detection is really critical, and also being an advocate.

Eos: What can the education systems do better for children?

Ballard: Make sure that they don’t lose their self-esteem [and] that they don’t believe that they’re stupid. They’re not. They have a gift. We’re such visual creatures.

Eos: How have you come to view being dyslexic as an asset?

Ballard: I stare at things until I figure them out. You want eyes on the bottom [of the ocean], and that’s what us dyslexics are all about. I can stand in the middle of my command center, close my eyes and go there…without physically being there.

Eos: Like with Titanic?

Ballard: I wasn’t supposed to find the Titanic. I was on a top-secret mission financed by naval intelligence to look at a nuclear submarine [that sank with] nuclear weapons. I discovered that [the submarine hadn’t] just [imploded and sank to] the bottom. When I went to map it, I found that I could map three corners of it, but then there was a bit like a comet on the bottom of the ocean because the current…created a long debris trail. I said, “Well, wait a minute. Don’t look for the Titanic. Look for this long trail.” This is visual hunting.

Eos: And so you found the Titanic in 1985, for which you became famous! But what about the 1977 discovery of hydrothermal vents—and life, like giant clams—near the Galapagos?

Ballard: We were looking for hot water coming out [of the oceanic ridge]. We were just bowled over when we came across those ecosystems. We didn’t even have biologists [on this expedition]! Biologists turned us down!

Eos: What did you do without biologists or the ability to “Zoom” them onto your ship?

Ballard: We called back to Woods Hole [Oceanographic Institution]. [Biologists there] said, “Take core samples every so many meters.” There’s no sediment. They said, “That’s impossible. You can’t have clams! You can’t have everything you’re talking about!”

Eos: What’s the significance of discovering life where it wasn’t supposed to be?

Ballard: Because of this discovery of chemosynthetic life systems that can live in extremely extreme environments, I’m confident there’s life throughout the universe. It’s now driving NASA’s program to look at the larger oceans in our solar system, [like] on Enceladus, that have more water than we have.

Eos: Let’s talk about your current exploits on Nautilus with the Corps of Exploration. In the book, you note that you hire at least 55% women for the corps. What about other groups?

Ballard: I am sympathetic to anyone who is being told that they’re different. I want everyone in the game. I’ve said to my team [that] I want every conceivable kind of person on our team, because I want children to find their face.

A sampling of individuals in the Corps of Exploration, the team that powers Ballard’s adventures on Nautilus. Ballard has instructed his team to ensure that at least 55% of people in the corps are women. Credit: Robert Ballard

Eos: How do you encourage people who aren’t necessarily interested in oceanography or are afraid of the ocean to work with you?

Ballard: Our technology. You don’t have to go out on the ocean if you don’t want to. A lot of what we’re able to do now [is] because of the telepresence technology we’ve pioneered.

Eos: And of course, that technology enhances your ability to communicate with the public, as you discuss in the book. You’ve been at the fore of science communication for most of your career. How has academia changed on that front?

Ballard: It’s less hostile. I was severely criticized [for communicating with the public]. Science is the luxury of a wealthy nation. It’s the taxpayers that pay for you. You need to thank them, and you need to say, “Let me tell you what I’m doing with your money and why it’s important!” That’s storytelling and communicating.

Eos: What advice do you have for scientists who communicate with the public?

Ballard: If you can’t tell a middle school kid what you’re doing, you don’t know what you’re doing. Don’t simplify science. Explain it in a way that people can absorb it. Don’t talk down. Don’t talk up. Talk straight across.

—Alka Tripathy-Lang (@DrAlkaTrip), Science Writer

Indian Cities Prepare for Floods with Predictive Technology

EOS - Tue, 06/15/2021 - 12:30

Urban floods in India have caused fatalities, injuries, displacement, and enormous economic losses. Cities across the country are now investing in high-tech tools to better model and forecast these natural hazards.

In 2015, the metropolis of Chennai faced devastating floods responsible for the deaths of more than 500 people and displacement of more than a million more. Financial losses reached around $3 billion. The extent of the damage prompted the Indian government to approach scientists to develop a flood forecasting system for the city.

Subimal Ghosh, a professor of civil engineering at the Indian Institute of Technology Bombay, led the efforts. Chennai’s topography makes it particularly vulnerable, Ghosh said. In addition to being a coastal city, Chennai has many rivers and an upstream catchment area from which water flows when there is heavy rainfall.

Forecasting in Chennai

The city’s topography determines where inundation occurs, and it complicated the development of a flood forecasting system. The system had to include the hydrology of the upstream region; river, tidal, and storm surge modeling; and a high-resolution digital elevation map of the city, Ghosh said.

A consortium of scientists from 13 research institutes and government organizations worked on these separate aspects and together developed India’s first fully automated real-time flood forecasting system, launched in 2019.

“We generated 800 scenarios of flood and tide conditions,” Ghosh said. “When the model receives a weather forecast from the National Centre for Medium Range Weather Forecasting, it will search and find the closest scenario. If there is a chance of flood, the model will predict the vulnerable sites for the next 3 days.”
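
The matching step Ghosh describes is essentially a nearest-neighbor lookup over the precomputed scenario library. A minimal sketch of that idea follows; the two summary features (rainfall and tide level), their ranges, and the scaled Euclidean distance are illustrative assumptions, since the system’s actual inputs and metric are not specified here:

    import numpy as np

    rng = np.random.default_rng(42)
    # Stand-in for the 800 precomputed scenarios, each summarized here by
    # two features: [forecast rainfall (mm/day), tide level (m)].
    scenarios = rng.uniform([0.0, 0.0], [300.0, 2.0], size=(800, 2))

    def closest_scenario(forecast: np.ndarray) -> int:
        """Return the index of the precomputed scenario nearest the forecast."""
        scale = np.array([300.0, 2.0])  # rescale so rainfall doesn't swamp tide
        distances = np.linalg.norm((scenarios - forecast) / scale, axis=1)
        return int(np.argmin(distances))

    print(closest_scenario(np.array([180.0, 1.4])))  # index of the best match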

Sisir Kumar Dash is a scientist at the National Centre for Coastal Research (NCCR) in Chennai, which is responsible for the operation of the model. “We analyze daily rainfall data, and if there is a probability of inundation, the model is run, and alerts are sent to the state disaster management department,” he said.

Since the tool was implemented, however, Chennai has not experienced heavy rainfall, so it has not been put to a strong test.

Forecasting in Bengaluru

Bengaluru, formerly known as Bangalore, has seen some success with its own flood forecasting system, according to scientists at the Indian Institute of Science (IISc) in Bengaluru and the Karnataka State Natural Disaster Monitoring Centre. The organizations developed the system together.

P. Mujumdar, a professor of civil engineering at IISc who led the work, said that “short-duration rainfall forecasts from various weather agencies were combined with our hydrology model (which has high-resolution digital elevation maps of the city) and information on drainage systems and lakes.”

Real-time rainfall data are obtained through a network of 100 automatic rain gauges and 25 water level sensors set up on storm water drains at various flood-vulnerable areas across Bengaluru. The model, however, is unable to make reliable predictions if the rainfall is sudden and doesn’t appear in the forecast, Mujumdar added.

Scaling Up

Raj Bhagat Palanichamy is a senior manager at the Sustainable Cities initiative of the World Resources Institute who was not involved in flood forecasting in Chennai or Bengaluru. He had a sober view of the projects. “A good model is not about the tech or visualization that come with it,” he said. Instead, it’s “about the ability to help in the decisionmaking process, which hasn’t been successfully demonstrated in India.”

Shubha Avinash, scientific officer at the Karnataka State Natural Disaster Monitoring Centre, said the forecasting model was still an effective tool: “The forecast model has served as a better decision support system for administrative authorities in disaster preparedness, postflood recovery, and response actions in heavy rain events faced by the city in recent years.” Avinash oversees the operation of the Bengaluru flood model.

Avinash added that the alerts help city officials take timely, location-specific action. For instance, the city power company (Bangalore Electricity Supply Company Limited, or BESCOM) uses the wind speed and direction forecasts to identify areas at risk of downed power lines and shuts down the power supply there to ensure safety.

The tool also has a mobile application, Bengaluru Megha Sandesha (BMS), which officials and residents can access for real-time information on rainfall and flooding.

Mujumdar added that “short-duration, high-intensity floods are increasing in Indian cities and happen very quickly (within 15–20 minutes) due to climate change and urbanization. Similar models should be developed for all cities.”

Last year, India’s Ministry of Earth Sciences developed a flood warning system, iFLOWS-Mumbai, for Mumbai, which is likely to be operational this year.

“Cities need to have a proper road map,” Bhagat said, “with not just the model as the target but an integrated response plan (both short term and long term). It should start with the creation and seamless sharing of related data in the public domain.”

—Deepa Padmanaban (@deepa_padma), Science Writer

Vestiges of a Volcanic Arc Hidden Within Chicxulub Crater

EOS - Tue, 06/15/2021 - 12:23

About 66 million years ago, an asteroid hurtled through Earth’s atmosphere at approximately 20 kilometers per second—roughly 60 times the speed of sound in air—and slammed into water and limestone off Mexico’s Yucatán Peninsula, catalyzing the demise of the dinosaurs. The solid rock hit by the asteroid momentarily behaved like a liquid, said University of Texas at Austin geophysicist Sean Gulick. Almost instantaneously, a massive transient crater extended to the mantle, and rocks from 10 kilometers deep rushed to the sides of the hole. They slid back toward the crater’s center and shot 20 kilometers into the air before collapsing outward again. As the rock flowed outward, it regained its strength and formed a peak ring, resulting in mountains encircling the center of the 200-kilometer-wide Chicxulub crater.
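
The speed comparison is quick to verify (343 m/s is the speed of sound in air at sea level, the reference value assumed here):

    impact_speed = 20_000.0  # m/s
    speed_of_sound = 343.0   # m/s, air at sea level
    print(f"~{impact_speed / speed_of_sound:.0f} times the speed of sound")  # ~58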

In 2016, at a cost of $10 million, scientists participating in International Ocean Discovery Program Expedition 364, in collaboration with the International Continental Scientific Drilling Program, extracted an 835-meter-long drill core from the Chicxulub crater. The drill core includes 600 meters of the peak ring, said Gulick, who serves as co–chief scientist of Expedition 364.

In a recent study published in the Geological Society of America Bulletin, Catherine Ross, a doctoral student at the University of Texas at Austin; Gulick; and their coauthors determined the age of the peak ring granites—334 million years old—and unraveled an unexpected history of arc magmatism and supercontinent reconstruction. The story of these rocks, said Gulick, “turned out to be completely separate from the story of the impact crater.” The tale is told by tiny crystals of zircon—small clocks within rocks—that record various chapters of Earth’s history.

Getting Past a Shocking Impact

As a melt solidifies, said Ross, zirconium, oxygen, and silicon atoms find each other to form zircon. Trace atoms of radioactive uranium easily swap places with zirconium while excluding lead (the product of uranium decay). By measuring both uranium and lead, geochronologists like Ross can calculate when lead began to accumulate in the crystal. In zircons of granitoids, this date typically records when the grain crystallized from the melt.
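
The decay scheme described above gives a simple closed-form clock: the measured ²⁰⁶Pb/²³⁸U ratio grows as e^(λt) − 1, so t = ln(1 + ²⁰⁶Pb/²³⁸U)/λ. The sketch below works one example; the input ratio is chosen to land near the 334-million-year peak ring age and is not a measurement from the study:

    import math

    LAMBDA_238U = 1.55125e-10  # decay constant of uranium-238, per year

    def u_pb_age_years(pb206_u238: float) -> float:
        """Crystallization age implied by a measured 206Pb/238U ratio."""
        return math.log(1.0 + pb206_u238) / LAMBDA_238U

    ratio = 0.0532  # illustrative measured ratio
    print(f"Age: {u_pb_age_years(ratio) / 1e6:.0f} Ma")  # ~334 Ma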

The drill core granites, however, harbor an incredible amount of damage caused by the impact’s shock wave. “The energy of Chicxulub is equivalent to 10 billion times the size of a World War II era nuclear bomb,” said Gulick. Highly damaged zircons from the peak ring yield the impact age, he said, but “once you go below those highest shocked grains, you more faithfully record the original age and not the impact age.”

The zircons that Ross and colleagues targeted lacked microstructures that indicate shock, said Maree McGregor, a planetary scientist at the University of New Brunswick who was not involved in this study. “A lot of people would overlook this material when they’re trying to understand impact cratering,” she said, because past studies focused heavily on the impact age and not the history of the target rocks.

Ross incrementally bored into 835 individual zircons with a laser, measuring age as a function of depth to differentiate age domains. “Being able to visualize the data and separate [them] in that way is…critical when you’re trying to establish different ages for different regional tectonic events,” said McGregor.

(a) The amalgamation of Pangea. Laurentia, in brown, lies to the north. Gondwana, shown in gray, lies to the south. Numerous terranes, shown in purple, are caught between the two continents. The Yucatán lies in the midst of these terranes, and a pink star indicates the Chicxulub impact site. CA = Colombian Andes; Coa = Coahuila; M = Merida terrane; Mx = Mixteca; Oax = Oaxaquia; SM = Southern Maya. (b) A simplified cross section through Laurentia, the Rheic Ocean, and subduction off the edge of the Yucatán crust. The Rheic Ocean must subduct below the Yucatán to create the arc magmatism responsible for the zircons Ross analyzed. Ga = giga-annum; Ma = mega-annum. Credit: Ross et al., 2021, https://doi.org/10.1130/B35831.1

Ancient Ocean, Volcanic Arc

In addition to the 334-million-year-old Carboniferous zircons, Ross found three older populations. Crystals with ages ranging from 1.3 billion to 1 billion years ago fingerprint the formation of the supercontinent Rodinia. After Rodinia fragmented, 550-million-year-old zircons place the Yucatán crust near the mountainous margins of the West African craton, which was part of the supercontinent Gondwana. Zircons between 500 million and 400 million years old document deformation as these crustal bits moved across the ancient Rheic Ocean toward Laurentia, which today corresponds to the North American continental core, Ross said.

As the Rheic oceanic slab subducted, fluids drove partial melting that powered a volcanic arc on the edge of the Yucatán crust, said Ross. Using trace element geochemistry from individual grains, she found that in spite of their tumultuous impact history, Carboniferous zircons preserve volcanic arc signatures.

This research, said coauthor and geochronologist Daniel Stockli, is very tedious micrometer-by-micrometer work. But ultimately, he said, these finely detailed data illuminate processes at the scale of plate tectonics.

—Alka Tripathy-Lang (@DrAlkaTrip), Science Writer

Magma Pockets Lie Stacked Beneath Juan de Fuca Ridge

EOS - Mon, 06/14/2021 - 12:48

Off the coast of the U.S. Pacific Northwest, at the Juan de Fuca Ridge, two tectonic plates are spreading apart at a rate of 56 kilometers per million years. As they spread, periodic eruptions of molten rock give rise to new oceanic crust. Seismic images captured by Carbotte et al. now provide new insights into the dynamics of the magma chambers that feed these eruptions.
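
That rate is easier to picture in centimeters per year, a one-line unit conversion:

    km_per_myr = 56
    cm_per_yr = km_per_myr * 1e5 / 1e6  # 1 km = 1e5 cm; 1 Myr = 1e6 yr
    print(f"{cm_per_yr:.1f} cm/yr")  # 5.6 cm/yr, an intermediate spreading rate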

The new research builds on earlier investigations into magma chambers that underlie the Juan de Fuca Ridge as well as other sites of seafloor spreading. Sites of fast and intermediate spreading are typically fed by a thin, narrow reservoir of molten magma—the axial melt lens—that extends along the ridge at an intermediate depth in the oceanic crust, but still well above the mantle.

Recent evidence suggests that some seafloor spreading sites around the world contain additional magma chambers beneath the axial melt lens. These additional chambers are stacked one above another in the “crystal mush zone,” an area of the actively forming oceanic crust that contains a low ratio of melted rock to crystallized rock.

Beneath the Axial Seamount portion of the Juan de Fuca Ridge (the site of an on-axis hot spot, a tectonic setting distinct from the rest of the ridge), a 2020 investigation showed evidence of stacked magma chambers in the crystal mush zone beneath the hot spot’s large magma reservoir. Using multichannel seismic imaging data collected aboard the R/V Maurice Ewing, Carbotte et al. found geophysical evidence for similar stacked chambers along normal portions of the ridge not influenced by the hot spot.

The new imaging data reveal several stacked magma chambers in the crystal mush zone at each of the surveyed sites. These chambers extend along the length of the ridge for about 1–8 kilometers, and the shallowest chambers lie about 100–1,200 meters below the axial melt lens.

These findings, combined with other geological and geophysical observations, suggest that these stacked chambers are short-lived and may arise during periods when the crystal mush zone undergoes compaction and magma is replenished from the mantle below. The chambers do not cool and crystallize in place, but instead are tapped and contribute magma to eruptions and other crust-building processes.

Further research could help confirm and clarify the role played by these stacked chambers in the dynamics of seafloor spreading. (Journal of Geophysical Research: Solid Earth, https://doi.org/10.1029/2020JB021434, 2021)

—Sarah Stanley, Science Writer

Deploying a Submarine Seismic Observatory in the Furious Fifties

EOS - Mon, 06/14/2021 - 12:48

On 23 May 1989, a violent earthquake rumbled through the remote underwater environs near Macquarie Island, shaking the Australian research station on the island and causing noticeable tremors as far away as Tasmania and the South Island of New Zealand. The seismic waves it generated rippled through and around the planet, circling the surface several times before dying away.

Seismographs everywhere in the world captured the motion of these waves, and geoscientists immediately analyzed the recorded waveforms. The magnitude 8.2 strike-slip earthquake had rocked the Macquarie Ridge Complex (MRC), a sinuous underwater mountain chain extending southwest from the southern tip of New Zealand’s South Island. The earthquake’s great magnitude—it was the largest intraoceanic event of the 20th century—and its slip mechanism baffled the global seismological community: Strike-slip events of such magnitude typically occur only within thick continental crust, not thin oceanic crust.

Fast forward a few decades: For 2 weeks in late September and early October 2020, nine of us sat in small, individual rooms in a Hobart, Tasmania, hotel quarantining amid the COVID-19 pandemic and ruminating about our long-anticipated research voyage to the MRC. It was hard to imagine a more challenging place than the MRC—in terms of extreme topographic relief, heavy seas, high winds, and strong currents—to deploy ocean bottom seismometers (OBSs). But the promise of unexplored territory and the possibility of witnessing the early stages of a major tectonic process had us determined to carry out our expedition.

Where Plates Collide

Why is this location in the Southern Ocean, halfway between Tasmania and Antarctica, so special? The Macquarie archipelago, a string of tiny islands, islets, and rocks, only hints at the MRC below, which constitutes the boundary between the Australian and Pacific plates. Rising to 410 meters above sea level, Macquarie Island is the only place on Earth where a section of oceanic crust and mantle rock known as an ophiolite is exposed above the ocean basin in which it originally formed. The island, listed as a United Nations Educational, Scientific and Cultural Organization World Heritage site primarily because of its unique geology, is home to colonies of seabirds, penguins, and elephant and fur seals.

Yet beneath the island’s natural beauty lies the source of the most powerful submarine earthquakes in the world not associated with ongoing subduction, which raises questions of scientific and societal importance. Are we witnessing a new subduction zone forming at the MRC? Could future large earthquakes cause tsunamis and threaten coastal populations of nearby Australia and New Zealand as well as others around the Indian and Pacific Oceans?

Getting Underway at Last

As we set out from Hobart on our expedition, the science that awaited us helped overcome the doubts and thoughts of obstacles in our way. The work had to be done. Aside from the fundamental scientific questions and concerns for human safety that motivated the trip, it had taken a lot of effort to reach this place. After numerous grant applications, petitions, and copious paperwork, the Marine National Facility (MNF) had granted us ship time on Australia’s premier research vessel, R/V Investigator, and seven different organizations were backing us with financial and other support.

After a 6-month delay, the expedition set out for its destination above the Macquarie Ridge Complex. Credit: Hrvoje Tkalčić

COVID-19 slowed us down, delaying the voyage by 6 months, so we were eager to embark on the 94-meter-long, 10-story-tall Investigator. The nine scientists, students, and technicians from Australian National University’s Research School of Earth Sciences were about to forget their long days in quarantine and join the voyage’s chief scientist and a student from the University of Tasmania’s Institute for Marine and Antarctic Studies (IMAS).

Together, the 11 of us formed the science party of this voyage, a team severely reduced in number by pandemic protocols that prohibited double berthing and kept all non-Australia-based scientists, students, and technicians, as well as two Australian artists, at home. The 30 other people on board with the science team were part of the regular seagoing MNF support team and the ship’s crew.

The expedition was going to be anything but smooth sailing, a fact we gathered from the expression on the captain’s face and the serious demeanor of the more experienced sailors gathered on Investigator’s deck on the morning of 8 October.

The Furious Fifties

An old sailor’s adage states, “Below 40 degrees south, there is no law, and below 50 degrees south, there is no God.”

Spending a rough first night at sea amid the “Roaring Forties,” many of us contemplated how our days would look when we reached the “Furious Fifties.” The long-feared seas at these latitudes were named centuries ago, during the Age of Sail, when the first long-distance shipping routes were established. In fact, these winds shaped those routes.

Hot air that rises high into the troposphere at the equator sinks back toward Earth’s surface at about 30°S and 30°N latitude (forming Hadley cells) and then continues traveling poleward along the surface (Ferrel cells). The air traveling between 30° and 60° latitude gradually bends into westerly winds (flowing west to east) because of Earth’s rotation. These westerly winds are mighty in the Southern Hemisphere because, unlike in the Northern Hemisphere, no large continental masses block their passage around the globe.

These unfettered westerlies help develop the largest oceanic current on the planet, the Antarctic Circumpolar Current (ACC), which circulates clockwise around Antarctica. The ACC transports a flow of roughly 141 million cubic meters of water per second at average velocities of about 1 meter per second, and it encompasses the entire water column from sea surface to seafloor.

Our destination on this expedition, where the OBSs were to be painstakingly and, we hoped, precisely deployed to the seafloor over about 25,000 square kilometers, would put us right in the thick of the ACC.

Mapping the World’s Steepest Mountain Range

Much as high-resolution maps are required to ensure the safe deployment of landers on the Moon, Mars, and elsewhere in the solar system, detailed bathymetry would be crucial for selecting instrument deployment sites on the rugged seafloor of the MRC. Because the seafloor in this part of the world had not been mapped at high resolution, we devoted considerable time to “mowing the lawn” with multibeam sonar and subbottom profiling before deploying each of our 29 carefully prepared OBSs—some also equipped with hydrophones—to the abyss.

Mapping was most efficient parallel to the north-northeast–south-southwest oriented MRC, so we experienced constant winds and waves from westerly vectors that struck Investigator on its beam. The ship rolled continuously, but thanks to its modern autostabilizing system, which transfers ballast water in giant tanks deep in the bilge to counteract wave action, we were mostly safe from extreme rolls.

Nevertheless, for nearly the entire voyage, everything had to be lashed down securely. Unsecured chairs—some of them occupied—often slid across entire rooms, offices, labs, and lounges. In the mess, it was rare that we could walk a straight path between the buffet and the tables while carrying our daily bowl of soup. Solid sleep was impossible, and the occasional extreme rolls hurtled some sailors out of their bunks onto the floor.

The seismologists among us were impatient to deploy our first OBS to the seafloor, but they quickly realized that mapping the seafloor was a crucial phase of the deployment. From lower-resolution bathymetry acquired in the 1990s, we knew that the MRC sloped steeply from Macquarie Island to depths of about 5,500 meters on its eastern flank.

Fig. 1. Locations of ocean bottom seismometers are indicated on this new multibeam bathymetry map from voyage IN2020-V06. Dashed red lines indicate the Tasmanian Macquarie Island Nature Reserve–Marine Area (3-nautical-mile zone), and solid pink lines indicate the Commonwealth of Australia’s Macquarie Island Marine Park. Pale blue-gray coloration along the central MRC indicates areas not mapped. The inset shows the large map area outlined in red. MBES = multibeam echo sounding.

We planned to search for rare sediment patches on the underwater slopes to ensure that the OBSs had a smooth, relatively flat surface on which to land. This approach differs from deploying seismometers on land, where one usually looks for solid bedrock to which instruments can be secured. We would rely on the new, near-real-time seafloor maps in selecting OBS deployment sites that were ideally not far from the locations we initially mapped out.

However, the highly detailed bathymetric maps we produced revealed extraordinarily steep and hazardous terrain (Figure 1). The MRC is nearly 6,000 meters tall but only about 40 kilometers wide—the steepest underwater topography of that vertical scale on Earth. Indeed, if the MRC were on land, it would be the most extreme terrestrial mountain range on Earth, rising like a giant wall. For comparison, Earth’s steepest mountain above sea level is Denali in the Alaska Range, which stands 5,500 meters tall from base to peak and is 150 kilometers wide, almost 4 times wider than the MRC near Macquarie Island.
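A rough slope comparison makes the contrast concrete. Treating each edifice as a simple triangular ridge, with each flank climbing the full relief over half the width, is an idealization, but it shows why the MRC ranks as the steepest topography of its scale; the inputs are the figures quoted above.

```python
import math

def mean_flank_slope_deg(relief_m, width_km):
    """Average flank slope of an idealized triangular ridge: each
    flank climbs the full relief over half the total width."""
    half_width_m = width_km * 1000 / 2
    return math.degrees(math.atan(relief_m / half_width_m))

print(f"MRC:    {mean_flank_slope_deg(6000, 40):.1f} degrees")   # ~16.7
print(f"Denali: {mean_flank_slope_deg(5500, 150):.1f} degrees")  # ~4.2
```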

A Carefully Configured Array

Seismologists can work with single instruments or with configurations of multiple devices (or elements) called arrays. Each array element can be used individually, but the elements can also act together to detect and amplify weak signals. Informed by our previous deployments of instrumentation on land, we designed the MRC array to take advantage of the known benefits of certain array configurations.

The northern part of the array is classically X shaped, which will allow us to produce depth profiles of the layered subsurface structure beneath each instrument across the ridge using state-of-the-art seismological techniques. The southern segment of the array has a spiral-arm shape, an arrangement that enables efficient amplification of weak and noisy signals, which we knew would be an issue given the high noise level of the ocean.
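The amplification an array provides comes from stacking: when traces are time-aligned on a common arrival, the signal adds coherently while independent noise grows only as the square root of the number of elements, so the signal-to-noise ratio improves by roughly the square root of N. Here is a minimal delay-and-sum sketch on synthetic data; the element count and all parameters are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
n_elements, n_samples = 25, 2000

# A weak shared arrival buried in independent noise at each element;
# traces are assumed already time-aligned (delays applied).
t = np.linspace(0.0, 10.0, n_samples)
signal = 0.1 * np.sin(2.0 * np.pi * t)
records = signal + rng.normal(0.0, 1.0, (n_elements, n_samples))

# Delay-and-sum stack: the coherent signal survives, while the noise
# averages down by ~sqrt(n_elements).
stack = records.mean(axis=0)
print(f"rms of a single trace:       {records[0].std():.2f}")
print(f"rms of stack noise residual: {(stack - signal).std():.2f} "
      f"(~1/sqrt({n_elements}) of the single-trace level)")
```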

Our array’s unique location and carefully designed shape will supplement the current volumetric sampling of Earth’s interior by existing seismic stations, which is patchy given that stations are concentrated mostly on land. It will also enable multidisciplinary research on several fronts.

For example, in the field of neotectonics, the study of geologically recent events, detailed bathymetry and backscatter maps of the MRC are critical to marine geophysicists looking to untangle tectonic, structural, and geohazard puzzles of this little explored terrain. The most significant puzzle concerns the origin of two large underwater earthquakes that occurred nearby in 1989 and 2004. Why did they occur in intraplate regions, tens or hundreds of kilometers away from the ridge? Do they indicate deformation due to a young plate boundary within the greater Australia plate? The ability of future earthquakes and potential submarine mass wasting to generate tsunamis poses other questions: Would these hazards present threats to Australia, New Zealand, and other countries? Data from the MRC observatory will help address these important questions.

The continuous recordings from our OBSs will also illuminate phenomena occurring deep below the MRC as well as in the ocean above it. The spiral-arm array will act like a giant telescope aimed at Earth’s center, adding to the currently sparse seismic coverage of the lowermost mantle and core. It will also add to our understanding of many “blue Earth” phenomena, from ambient marine noise and oceanic storms to glacial dynamics and whale migration.

Dealing with Difficulties

The weather was often merciless during our instrument deployments. We faced gale-strength winds and commensurate waves that forced us to heave to or shelter in the lee of Macquarie Island for roughly 40% of our time in the study area. (Heaving to is a ship’s primary heavy weather defense strategy at sea; it involves steaming slowly ahead directly into wind and waves.)

Macquarie Island presents a natural wall to the westerly winds and accompanying heavy seas, a relief for both voyagers and wildlife. Sheltering along the eastern side of the island, some of the crew spotted multiple species of whales, seals, and penguins.

As we proceeded, observations from our new seafloor maps necessitated that we modify our planned configuration of the spiral arms and other parts of the MRC array. We translated and rotated the array toward the east side of the ridge, where the maps revealed more favorable sites for deployment.

However, many sites still presented relatively small target areas in the form of small terraces less than a kilometer across. Aiming for these targets was a logistical feat, considering the water depths exceeding 5,500 meters, our position amid the strongest ocean current on Earth, and unpredictable effects of eddies and jets produced as the ACC collides head-on with the MRC.

Small target areas, deep water, strong currents and winds, and high swells made accurate placement of the seismometers difficult. Credit: Hrvoje Tkalčić

To place the OBSs accurately, we first attempted to slowly lower instruments on a wire before releasing them 50–100 meters above the seafloor. However, technical challenges with release mechanisms soon forced us to abandon this method, and we eventually deployed most instruments by letting them free-fall from the sea surface off the side of the ship. This approach presented its own logistical challenge, as we had accurate measurements of the currents in only the upper few hundred meters of the water column.

In the end, despite prevailing winds of 30–40 knots, gusts exceeding 60 knots, and current-driven drifts in all directions of 100–4,900 meters, we found sufficient windows of opportunity to successfully deploy 27 of 29 OBSs at depths from 520 to 5,517 meters. Although we ran out of time to complete mapping the shallow crest of the MRC north, west, and south of Macquarie Island, we departed the study area on 30 October 2020 with high hopes.

Earlier this year, we obtained additional support to install five seismographs on Macquarie Island itself that will complement the OBS array. Having both an onshore and offshore arrangement of instruments operating simultaneously is the best way of achieving our scientific goals. The land seismographs tend to record clearer signals, whereas the OBSs provide the spatial coverage necessary to image structure on a broader scale and more accurately locate earthquakes.

Bringing the Data Home

The OBSs are equipped with acoustic release mechanisms and buoyancy to enable their return to the surface in November 2021, when we’re scheduled to retrieve them and their year’s worth of data and to complete our mapping of the MRC crest from New Zealand’s R/V Tangaroa. In the meantime, the incommunicado OBSs will listen to and record ground motion from local, regional, and distant earthquakes and other phenomena.

Despite the difficulties, the OBS array is now in place and collecting data, and it has been augmented by a new land-based seismometer array. Credit: Millard Coffin

With the data in hand starting late this year, we’ll throw every seismological and marine geophysical method we can at this place. The recordings will be used to image crustal, mantle, and core structure beneath Macquarie Island and the MRC and will enable better understanding of seismic wave propagation through these layers.

Closer to the seafloor, new multibeam bathymetry/backscatter, subbottom profiler, gravity, and magnetics data will advance understanding of the neotectonics of the MRC. These data will offer vastly improved views of seafloor habitats, thus contributing to better environmental protection and biodiversity conservation in the Tasmanian Macquarie Island Nature Reserve–Marine Area that surrounds Macquarie Island and the Commonwealth of Australia’s Macquarie Island Marine Park east of Macquarie Island and the MRC.

Results from this instrument deployment will also offer insights into physical mechanisms that generate large submarine earthquakes, crustal deformation, and tectonic strain partitioning at convergent and obliquely convergent plate boundaries. We will compare observed seismic waveforms with those predicted from numerical simulations to construct a more accurate image of the subsurface structure. If we discover, for example, that local smaller- or medium-sized earthquakes recorded during the experiment have significant dip-slip components (i.e., displacement is mostly vertical), it’s possible that future large earthquakes could have similar mechanisms, which increases the risk that they might generate tsunamis. This knowledge should provide more accurate assessments of earthquake and tsunami potential in the region, which we hope will benefit at-risk communities along Pacific and Indian Ocean coastlines.

Scientifically, the most exciting payoff of this project may be that it could help us add missing pieces to one of the biggest puzzles in plate tectonics: how subduction begins. Researchers have grappled with this question for decades, probing active and extinct subduction zones around the world for hints, though the picture remains murky.

Some of the strongest evidence of early-stage, or incipient, subduction comes from the Puysegur Ridge and Trench at the northern end of the MRC, where the distribution of small earthquakes at depths less than 50 kilometers and the presence of a possible subduction-related volcano (Solander Island) suggest that the Australian plate is descending beneath the Pacific plate. Incipient subduction has also been proposed near the Hjort Ridge and Trench at the southern end of the MRC. Lower angles of oblique plate convergence and a lack of trenches characterize the MRC between Puysegur and Hjort, so it is unclear whether incipient subduction is occurring along the entire MRC.

Until now, testing this hypothesis has been impossible because of a lack of adequate earthquake data. The current study, involving a large array of stations capable of detecting even extremely small seismic events, is crucial in helping to answer this fundamental question.

Acknowledgments

We thank the Australian Research Council, which awarded us a Discovery Project grant (DP2001018540). We have additional support from ANSIR Research Facilities for Earth Sounding and the U.K.’s Natural Environment Research Council (grant NE/T000082/1) and in-kind support from Australian National University, the University of Cambridge, the University of Tasmania, and the California Institute of Technology. Geoscience Australia; the Australian Antarctic Division of the Department of Agriculture, Water and the Environment; and the Tasmania Parks and Wildlife Service provided logistical support to install five seismographs on Macquarie Island commencing in April 2021. Unprocessed seismological data from this work will be accessible through the ANSIR/AuScope data management system AusPass 2 years after the planned late 2021 completion of the experimental component. Marine acoustics, gravity, and magnetics data, both raw and processed, will be deposited and stored in publicly accessible databases, including those of CSIRO MNF, the IMAS data portal, Geoscience Australia, and the NOAA National Centers for Environmental Information.

Author Information

Hrvoje Tkalčić (hrvoje.tkalcic@anu.edu.au) and Caroline Eakin, Australian National University, Canberra; Millard F. Coffin, University of Tasmania, Hobart, Australia; Nicholas Rawlinson, University of Cambridge, U.K.; and Joann Stock, California Institute of Technology, Pasadena

Observations from Space and Ground Reveal Clues About Lightning

EOS - Fri, 06/11/2021 - 12:47

Capturing the fleeting nature and order of lightning and energy pulses has been a goal of many studies over the past 3 decades. Although bolts of white lightning and colorful elves (short for emissions of light and very low frequency perturbations due to electromagnetic pulse sources) can be seen with the naked eye, the sheer speed and sequence of events can make differentiating flashes difficult.

In particular, researchers want to understand the timing of intracloud lightning, elves, terrestrial gamma ray flashes (TGFs), and energetic in-cloud pulses. Do all of these energy pulses occur at the same time, or are there leaders, or triggers, for lightning events?

This video is related to new research that uncovers the timing and triggering of high-energy lightning events in the sky, known as terrestrial gamma ray flashes and elves. Credit: Birkeland Centre for Space Science and MountVisual

In a new study, Østgaard et al. observed lightning east of Puerto Rico. They used optical and gamma ray monitoring from space along with radio frequency monitoring from the ground to determine the sequence of an elve produced by electromagnetic waves from an energetic in-cloud pulse, the optical pulse from the hot leader, and a terrestrial gamma ray flash.

The Atmosphere–Space Interactions Monitor (ASIM), mounted on the International Space Station, includes a gamma ray sensor along with a multispectral imaging array. The optical measurements captured the lightning, the ultraviolet measurements captured the elves, and the gamma ray instruments measured TGFs. In Puerto Rico, the researchers recorded low-frequency radio emissions from the lightning.

The team found that by using this combined monitoring technique, they could observe high-resolution details about the timing and optical signatures of TGFs, lightning, and elves. They found that the TGF and the first elve were produced by a positive intracloud lightning flash and an energetic in-cloud pulse, respectively. Just 456 milliseconds later, a second elve was produced by a negative cloud-to-ground lightning flash about 300 kilometers south of the first elve.

This combination of observations is unique and unprecedented. The detailed observations suggest that coordinated monitoring is the future method for lightning and thunderstorm research efforts. (Journal of Geophysical Research: Atmospheres, https://doi.org/10.1029/2020JD033921, 2021)

—Sarah Derouin, Science Writer

Below Aging U.S. Dams, a Potential Toxic Calamity

EOS - Fri, 06/11/2021 - 12:45

This article was originally published on Undark. Read the original article.

1 June 2021 by James Dinneen and Alexander Kennedy.

A multimedia version of this story, with rich maps and data, is available here.

On May 19, 2020, a group of engineers and emergency officials gathered at a fire station in Edenville, Michigan to decide what to do about the Edenville Dam, a 97-year-old hydroelectric structure about a mile upstream on the Tittabawassee River. Over the preceding two days, heavy rains had swelled the river, filling the reservoir to its brim and overwhelming the dam’s spillway. The group was just about to discuss next steps when the radios went off, recalled Roger Dufresne, Edenville Township’s longtime fire chief. “That’s when the dam broke.”

Up at the dam, Edenville residents watched as a portion of the eastern embankment liquefied. Muddy water gushed through the breach. Over the next few minutes, that water became a torrent, snapping trees and telephone poles as it rushed past town, nearly submerging entire houses further downstream.

About 10 miles and two hours later, the flood wave bowled into a second aging dam, damaging its spillway, overtopping, and then breaching the embankment.

Al Taylor, then chief of the hazardous waste section within the state’s Department of Environment, Great Lakes, and Energy, was following the situation closely as the surge swept 10 miles further downstream into the larger city of Midland, where it caused a Dow Chemical Company plant flanking the river to shut down, and threatened to mix with the plant’s containment ponds. Taylor, who retired at the end of January, worried that contamination from the ponds would spill into the river. But that was just the first of his concerns.

In prior decades, Dow had dumped dioxin-laden waste from the plant directly into the river, contaminating more than 50 miles of sediment downstream — through the Tittabawassee, the Saginaw River, and the Saginaw Bay — with carcinogenic material. The contamination was so severe that the U.S. Environmental Protection Agency stepped in, and since 2012, worked with Dow to map and cap the contaminated sediments. In designing the cleanup, engineers accounted for the river’s frequent flooding, Taylor knew, but nobody had planned for the specific impacts of flooding caused by a dam failure.

While the dramatic breach of the Edenville Dam captured national headlines, an Undark investigation has identified 81 other dams in 24 states that, if they were to fail, could flood a major toxic waste site and potentially spread contaminated material into surrounding communities.

In interviews with dam safety, environmental, and emergency officials, Undark also found that, as in Michigan, the risks these dams pose to toxic waste sites are largely unrecognized by any agency, leaving communities across the country vulnerable to the same kind of low-probability, but high-consequence disaster that played out in Midland.

After the flooding subsided, Dow and state officials inspected the chemical plant’s containment ponds and found that, though one of the brine ponds containing contaminated sediment had been breached, there was no evidence of significant toxic release. Preliminary sediment samples taken downstream did not find any new contamination. The plant’s and the cleanup’s engineering, it seemed, had done its job.

“Dow has well-developed, comprehensive emergency preparedness plans in place at our sites around the world,” Kyle Bandlow, Dow’s corporate media relations director, wrote in an email to Undark. “The breadth and depth of these plans — and our ability to quickly mobilize them — enabled the safety of our colleagues and our community during this historic flood event.”

But things could have gone differently — if not in Midland, then somewhere else with a toxic waste site downstream of an aging dam less prepared for a flood. “As a lesson learned from this,” Taylor said, “we need to be aware of that possibility.”

In the United States, there are more than 90,000 dams providing flood control, power generation, water supplies, and other critical services, according to the National Inventory of Dams database maintained by the U.S. Army Corps of Engineers, which includes both behemoths like the Hoover Dam and small dams holding back irrigation ponds. Structural and safety oversight of these dams falls under a loose and, critics say, inadequate patchwork of state and federal remit.

A 2019 report from the Congressional Research Service (CRS), the nonpartisan research arm of the U.S. Congress, found roughly 3 percent of the nation’s dams are federally owned, including some of the country’s largest, with the rest owned and operated by public utilities, state and local governments, and private owners. The report estimated that half of all dams were over 50 years old, including many that were built to now obsolete safety standards. About 15 percent of dams in the Army Corps database lacked data on when they were built.

In addition to information on age and design, the Army Corps database includes a “hazard potential” used to describe the possible impact of a dam failure to life and property. In 2019, roughly 17 percent, or 15,629 dams, had a high hazard potential rating, indicating that a loss of human life was likely in the event of a dam failure. The number of high-hazard dams has increased in recent years due to new downstream development.

According to the CRS report, more than 2,300 dams in the database were both high-hazard and in “poor” or “unsatisfactory” condition during their most recent inspection. Due to security concerns that arose after the September 11 terrorist attacks, the report did not name these dams, though an investigation by The Associated Press in 2019 identified nearly 1,700 of them.

For all that is known about America’s aging dam infrastructure, however, little information exists about the particular hazards dams pose to toxic waste sites downstream. This is why regulators knew about problems with the Edenville Dam and knew about the Dow cleanup, but had not connected the dots.

To identify dams that might pose the most serious risk to toxic waste sites, Undark searched for dams in the national database that are both high-hazard and older than 50 years, the age after which many dams require renovations. To narrow our search, we selected dams that sit 6 or fewer miles away from and appear in satellite images to be upstream of an EPA-listed toxic waste site. Experts say that many dams would flood much farther than 6 miles.

We then filed requests under state and federal freedom of information laws with various agencies, including the Federal Energy Regulatory Commission, seeking dam inspection reports and the Emergency Action Plans (EAPs) that dam owners are typically required to prepare and maintain. Among other things, these plans usually include inundation maps, which model the area that would likely be flooded in a dam failure scenario.

The inputs for these models vary by state, and while some inundation maps were highly sophisticated, involving contingencies for weather and other variables, others were less so. In one Emergency Action Plan for a dam in Tennessee, the inundation zone was simply hand-drawn on a map with a highlighter. But whatever their quality, the maps represent dam officials’ best estimate of where large volumes of water will flow if a dam fails.

Undark successfully obtained inundation modeling information for 153 of the 259 dams identified in our search. For 63 dams, state and federal officials declined to provide or otherwise redacted pertinent inundation information, citing security concerns. For 31 dams, agencies said they did not have inundation maps prepared, or provided maps that were illegible or did not extend to the site. Despite improvement in recent years, about 19 percent of high-hazard dams still lacked plans as of 2018, according to the American Society of Civil Engineers.

With those maps, we then looked to see if any EPA-listed toxic waste sites fell within the delineated inundation areas. Because the precise boundaries of each toxic waste site are not consistently available, we followed the methodology of a 2019 Government Accountability Office analysis of flood risks to contaminated sites on the EPA’s National Priorities List — more commonly known as Superfund sites — which used a 0.2-mile radius around the coordinates listed by the EPA for each location.
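In schematic form, the screening logic described above reduces to two filters and a distance test. The sketch below is a reconstruction of that logic, not Undark’s actual code; the data frames, column names, coordinates, and the flat-Earth distance approximation are all invented for illustration.

```python
import math
import pandas as pd

# Invented stand-ins for the National Inventory of Dams and the EPA
# site list; all names, columns, and coordinates are hypothetical.
dams = pd.DataFrame({
    "dam_name": ["Dam A", "Dam B", "Dam C"],
    "hazard": ["high", "high", "low"],
    "year_completed": [1924, 1883, 1995],
    "lat": [43.80, 41.97, 40.00],
    "lon": [-84.38, -71.56, -90.00],
})
sites = pd.DataFrame({
    "site_name": ["Site X", "Site Y"],
    "lat": [43.84, 41.96],
    "lon": [-84.35, -71.55],
})

def distance_miles(lat1, lon1, lat2, lon2):
    """Flat-Earth approximation (~69 miles per degree): adequate for
    a coarse 6-mile screen, not for precise geodesy."""
    dlat = (lat2 - lat1) * 69.0
    dlon = (lon2 - lon1) * 69.0 * math.cos(math.radians((lat1 + lat2) / 2))
    return math.hypot(dlat, dlon)

# Filter 1: high-hazard dams older than 50 years (screened as of 2021).
candidates = dams[(dams.hazard == "high") & (dams.year_completed <= 1971)]

# Filter 2: within 6 miles of a listed site. (The published analysis
# then checked satellite imagery for upstream position, obtained
# inundation maps, and applied the 0.2-mile site buffer.)
for _, dam in candidates.iterrows():
    for _, site in sites.iterrows():
        d = distance_miles(dam.lat, dam.lon, site.lat, site.lon)
        if d <= 6.0:
            print(f"{dam.dam_name} -> {site.site_name}: {d:.1f} miles")
```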

For a number of dams for which we were not able to obtain inundation maps to review ourselves, dam regulators or owners confirmed that coordinates we provided for the toxic waste site fell within 0.2 miles of the dam’s inundation zone.

We focused our search on the nation’s highest priority cleanup sites, as indicated by a designation of Superfund (for non-operating sites) or RCRA (Resource Conservation and Recovery Act of 1976, for operating sites). We considered 5,695 of these sites, including both current and former sites. Types and levels of contamination vary widely across sites, as would the impact of any flooding.

Using this methodology, we identified at least 81 aging high-hazard dams that could flood portions of at least one toxic waste site if they failed, potentially spreading contaminated material into surrounding communities and exposing hundreds or thousands of people — in the case of very large dams, many more — to health hazards atop significant environmental impacts. At least six of the dams identified were in “poor” or “unsafe” condition during their most recent inspection.

In many instances, state and local agencies responsible for dam safety and toxic waste have not accounted for this kind of cascading disaster, and so remain largely unprepared.

Undark shared this analysis with engineering and dam safety experts, who verified the methodology. Several suggested that the true number of dams that could flood toxic waste sites if they were to fail is almost certainly far greater, but because no agency tracks this particular hazard, the actual number remains unknown.

“There are many situations across the country then with these dams where they don’t meet the current safety standards …” said Mark Ogden, a civil engineer who reviewed the American Society of Civil Engineer’s 2021 Infrastructure Report Card section on dams, which gave U.S. dams a “D” grade. “And the fact that there could be these hazardous sites as part of that, just increases that concern and what the consequences of a failure might be.”

Though impacts would vary widely, environmental scientists and toxicologists interviewed by Undark suggested that a sudden, intense flood caused by a dam failure could spread contaminants from hazardous waste sites into surrounding communities. Even sites designed to withstand flooding might be impacted if the debris carried in floodwater managed to scour and erode protective caps, potentially releasing toxic material into the water, explained Rick Rediske, a toxicologist at Grand Valley State University in Michigan. In Houston in 2017, flooding from Hurricane Harvey eroded a temporary protective cap at the San Jacinto River Waste Pits Superfund site, exposing dioxins and other toxic substances.

Water could then move contaminants around the site and redeposit them anywhere in the floodplain, exposing people and ecosystems to health hazards, said Jacob Carter, a research scientist at the Union of Concerned Scientists, who formerly studied flooding hazards to contaminated sites for the EPA. Carter also pointed out that communities living nearest to toxic waste sites and so most vulnerable to these events tend to be low income and communities of color.

It’s possible that any toxic material would be diluted by the flood and new clean sediment, said Allen Burton, director of the Institute for Global Change Biology at the University of Michigan. But this, he emphasized, would be a best-case scenario.

Generally, when there’s a massive flood like the one in Michigan, “it just moves the sediment everywhere downstream,” said Burton. “You have no way of predicting, really, how much of the bad stuff moved, how far it moved, how far it got out into the floodplain, what the concentrations are.” And regulated waste sites are just one source of potential contamination in a dam breach scenario, said Burton. Sediment behind dams is itself often contaminated after years of collecting whatever went into the river upstream.

Contamination can also come from more mundane sources in the floodplain, like wastewater treatment plants or the oil canisters in people’s basements that get swept into floodwaters, said Burton. “The fish downstream,” he quipped, “don’t care if contaminants came from your garage or Dow Chemical.”

Undark’s investigation found that state and local governments often have not prepared for the flooding that could occur at toxic waste sites in the event of a dam failure.

Emporia Foundry Incorporated, a federally-regulated hazardous waste site in Greensville County, Virginia, provides a representative example. It falls within the inundation zone of the 113-year-old Emporia Dam, which is a hydroelectric dam partially owned by the city and located on the Meherrin River, just over one mile west of the foundry site.

The foundry, which once manufactured manhole covers and drain grates, includes a landfill packed with byproducts containing lead, arsenic, and cadmium. The landfill was capped in 1984, and in 2014, a second cap was added nearer to the river as a buffer against flooding. As in Midland, cleanup engineers accounted for flooding within the 100-year floodplain, but according to a spokesperson from the Virginia Department of Environmental Quality, they did not account for flooding from a dam failure.

The Emporia Dam inundation map shows that if the dam were to fail during a severe storm, the entire foundry site could be flooded, potentially disintegrating the cap and spreading contaminants across the floodplain. However, the site would not be flooded in the event of a “sunny day” failure.

More than 3,000 people live within a mile of the Emporia Foundry site, around 75 percent of whom are Black, according to EPA and 2010 census data.

Wendy C. Howard Cooper, director of Virginia’s dam safety program, explained that her program’s mandate is to define a dam’s inundation zone and inform local emergency managers of any immediate risks to human life and property — not to identify toxic waste sites and analyze what might happen to them during a flood. “That would be a rabbit hole that no one could regulate,” Howard Cooper said. She added that local governments should be familiar with both the dams and the contaminated sites inside their borders and should have proper emergency procedures in place.

This turned out not to be true in Greensville County, where the program coordinator for emergency services, J. Reggie Owens, told Undark he was unaware of the potential for the foundry site to flood if the Emporia Dam were to fail. The site is “not even in the floodplain,” he said. “It’s never been put on my radar by DEQ or anyone else.”

A similar pattern emerged in other states. In Rhode Island, for instance, our search identified eight dams. One of these, the 138-year-old Forestdale Pond Dam, was considered “unsafe” during its most recent inspection.

Located in the town of North Smithfield, the dam is immediately adjacent to the Stamina Mills Superfund site, which once housed a textile mill that spilled the toxic solvent trichloroethylene into the soil. Another area on the site was used as a landfill for polycyclic aromatic hydrocarbons, sulfuric acid, soda ash, wool oil, plasticizers, and pesticides.

A few years after trichloroethylene was detected in groundwater in 1979, the site received a Superfund designation from the EPA. According to the federal agency, construction for the site cleanup — which involved removing the contaminated soil from the landfill and installing a groundwater treatment system — was completed in 2000 and accounted for a 100-year flood, but it did not account for flooding due to a dam failure.

According to EPA and census data, more than 2,500 people lived within a mile of Stamina Mills as of 2010, and Forestdale Pond is not the only dam that could pose a threat.

In fact, the site sits within the inundation zones of two other high hazard dams identified by Undark. A failure of either of these dams on the Slatersville Reservoir could cause a domino effect of dam failures downstream, according to Rhode Island dam safety reports, all leading to flooding at Stamina Mills.

When asked to comment on possible flood risks to the Superfund site, the EPA responded that the only remaining remedy at Stamina Mills, the groundwater treatment system, would not be affected if Forestdale Pond Dam were to fail. EPA made no reference to the larger Slatersville Reservoir dams less than two miles upstream.

Spokespersons at the Rhode Island dam safety office and the state office responsible for hazardous waste had not considered that a dam failure could flood any of the sites identified by Undark, including Stamina Mills.

By building engineered structures or taking other resiliency measures, the most hazardous waste sites can be designed to withstand flooding, explained Carter, who recently co-authored a report on climate change and coastal flooding hazards to Superfund sites. But in order to prepare for floods, Carter said, flooding hazards have to be recognized first, whether they come from rising seas, increasing storm surge, or, as in these cases, dams.

“They could have looked at that dam and said, ‘Oh, it gets a D minus for infrastructure. This thing could break,’” said Burton, referring to the Edenville Dam. “So in the future, it would be smart of EPA to require the principal party who’s responsible for the cleanup to look at the situation to see if it actually could happen.”

One step that could make that process much easier is for dam inundation zones to be regularly included in FEMA’s publicly available flood risk maps, which show the 100-year floodplain and other flood risks to communities, said Ogden. A lack of available data on dam inundations — sometimes the result of security concerns — presents a major obstacle, said a FEMA spokesperson, but plotting inundation zones on commonly-used flood risk maps would ensure communities and agencies are aware of and can respond to dam hazards.

Some states, including Rhode Island, have already made inundation zones, Emergency Action Plans, and inspection reports for the dams they regulate publicly available online. In South Carolina, after heavy rains caused 50 dams to fail in 2015, inundation zones for the most hazardous state-regulated dams were made publicly available. Though no state agency tracks hazardous waste sites within dam inundation zones, Undark used this resource to identify three dams in South Carolina that could flood a hazardous waste site in the state.

In California, inundation zones for the state’s most hazardous dams were made available following a 2017 dam failure scare at the Oroville Dam, the tallest dam in the country, which led to the evacuation of more than 180,000 people.

Using this resource, Undark identified four dams that would flood at least one hazardous waste site in California. These included the Oroville Dam, which could flood at least one current and one former Superfund site if it were to fail.

According to the EPA, neither of those sites downstream of the Oroville Dam had considered the possibility of flooding due to dam failure prior to the failure scare. Even so, commented EPA, due to the “extraordinary volume of water” that would flood the sites if the Oroville Dam were to fail, “it is not feasible to alter the existing landfills and groundwater remedy infrastructure to protect against the potential failure of the Oroville Dam.”

In order to fix the nation’s dams, the first step is to spread awareness about the importance of dams and the hazards they pose to people and property, said Farshid Vahedifard, a civil engineer at Mississippi State University who co-authored a recent letter in Science on the need to proactively address problematic dams. “The second thing is definitely we need to invest more.”

According to the Association of State Dam Safety Officials, the fixes necessary to rehabilitate all the nation’s dams would cost more than $64 billion; rehabilitating only the high hazard dams would cost around $22 billion. Meanwhile, the $10 million appropriated by Congress in 2020 for FEMA’s high hazard dam rehabilitation program is “kind of a drop in the bucket for what’s really needed,” said Ogden.

Indeed, state dam safety programs report a chronic lack of funds for dam safety projects, both from public sources and from private dam owners unable or unwilling to pay for expensive repairs. In Michigan, both dams that failed were operated by a company called Boyce Hydro, which received years of warnings from dam safety regulators that there were deficiencies.

Lee Mueller, Boyce Hydro’s co-manager, told Undark that the company made numerous improvements to the dams over the years. After losing revenue when the Federal Energy Regulatory Commission (FERC) revoked the company’s hydroelectric permit, however, it was unable to fund repairs that might have prevented the dam failures.

“Regarding the Edenville Dam breach, the subject of the State of Michigan’s governance and political policy failures and the insouciance of the environmental regulatory agencies are the subject of on-going litigation and will be more thoroughly detailed in the course of those legal proceedings,” Mueller wrote in an email.

“The state of Michigan knew about this,” said Dufresne, the Edenville fire chief. State regulators, he said, should have insisted that the company pay for the badly needed repairs. “They needed to push him,” said Dufresne, referring to Mueller. More than half of all dams in the U.S. are privately owned.

Without the funding to match the problem, members of the state dam safety community have looked to nontraditional sources of funding, said Bill McCormick, chief of the Colorado dam safety program. In eastern Oregon, for example, the 90-year-old Wallowa Lake Dam — which Undark found would flood the former Joseph Forest Products Superfund site if it were to fail — was slated last year for a $16 million renovation to repair its deteriorating spillway and add facilities for fish to pass through. But the plans have stalled since the Covid-19 pandemic reduced Oregon’s lottery revenues, which were funding most of the project.

The challenges facing U.S. dams are also exacerbated by climate change, say dam safety experts, with more frequent extreme weather events and more intense flooding expected in parts of the country adding new stresses to old designs. “If we start getting much bigger storms, then that itself will lead to a higher probability of overtopping and dam failure,” said Upmanu Lall, director of the Columbia Water Center at Columbia University and co-author of a recent report on potential economic impacts of climate-induced dam failure, which considered how the presence of hazardous waste sites might further amplify damages. The report also outlines how in addition to more extreme weather, factors like changes in land use, sediment buildup, and changing frequencies of wet-dry and freeze-thaw cycles all can contribute to a higher probability of dam failure.

Several state dam safety programs contacted by Undark said they are planning for climate change-related impacts to dam infrastructure, though according to McCormick, the Colorado dam safety chief, his state is the only one with dam safety rules that explicitly account for climate change. New rules that took effect in January require dam designs “to account for expected increases in temperature and associated increases in atmospheric moisture.”

“We were the first state to take that step, but I wouldn’t be surprised if others follow that lead,” McCormick said.

No deaths were reported in the Michigan flooding, but more than 10,000 residents had to be evacuated from their homes and the disaster likely caused more than $200 million in damage to surrounding property, according to a report from the office of Michigan Gov. Gretchen Whitmer. Restoring the empty reservoirs, as well as rebuilding the two dams, could cost upwards of $300 million, according to the Four Lakes Task Force, an organization that had been poised to buy the dams just before they failed.

In contrast, prior to the breach, the Four Lakes Task Force, which now owns the dams, had planned to spend about $35 million to acquire and repair those dams and an additional two dams. Boyce Hydro declared bankruptcy in July and now faces numerous lawsuits related to the flooding. FERC is coordinating with officials in Michigan on investigations into the dam failures, and has fined Boyce Hydro $15 million for failing to act on federal orders following the incident.

Dufresne, the Edenville fire chief, watched for years as political and financial challenges prevented the dams on the Tittabawassee from getting fixed. His advice for any other community dealing with a problematic dam: Call your state representatives, tell them, “Hey you need to investigate this.”

By August, life in Midland County was slowly getting back to normal. “Some of the people started putting their houses back together. The businesses are trying to figure out what to do next,” said Jerry Cole, the fire chief of Jerome Township, located south of Edenville.

At the Edenville Dam, neat houses looked out over a wide basin of sand-streaked mud where the impounded lake used to be. Near the bottom, where the river was still flowing through the gap in the fractured dam, a group of teenagers lounged on inner tubes, splashing around.

“It just amazes me that this actually happened here,” said Dufresne.

James Dinneen is a science and environmental journalist from Colorado, based in New York.

Alexander Kennedy is a software engineer specializing in data visualization.

This article was originally published on Undark. Read the original article.

Particles at the Ocean Surface and Seafloor Aren’t So Different

EOS - Thu, 06/10/2021 - 14:48

Although scientists often assume that random variations in scientific data fit symmetrical, bell-shaped normal distributions, nature isn’t always so tidy. In some cases, a skewed distribution, like the log-normal probability distribution, provides a better fit. Researchers previously found that primary production by ocean phytoplankton and carbon export via particles sinking from the surface are consistent with log-normal distributions.

In a new study, Cael et al. discovered that fluxes at the seafloor also fit log-normal distributions. The team analyzed data from deep-sea sediment traps at six different sites, representing diverse nutrient and oxygen statuses. They found that the log-normal distribution didn’t just fit organic carbon flux; it provided a simple scaling relationship for calcium carbonate and opal fluxes as well.

Uncovering the log-normal distribution enabled the researchers to tackle a longstanding question: Do nutrients reach the benthos—life at the seafloor—via irregular pulses or a constant rain of particles? The team examined the shape of the distribution and found that 29% of the highest measurements accounted for 71% of the organic carbon flux at the seafloor, which is less imbalanced than the 80:20 benchmark specified by the Pareto principle. Thus, although high-flux pulses do likely provide nutrients to the benthos, they aren’t the dominant source.
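The flux concentration the team reports can be reproduced qualitatively with synthetic draws from a log-normal distribution. In the sketch below, the distribution’s width (sigma) is invented purely to roughly match the reported 29:71 split; the study fit real sediment trap data, not synthetic samples.

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic "fluxes": sigma = 1.1 is chosen only for illustration.
flux = rng.lognormal(mean=0.0, sigma=1.1, size=100_000)

# Share of the total flux carried by the largest 29% of measurements.
flux_sorted = np.sort(flux)[::-1]
cumulative_share = np.cumsum(flux_sorted) / flux_sorted.sum()
top_29 = cumulative_share[int(0.29 * flux.size) - 1]
print(f"top 29% of measurements carry ~{100 * top_29:.0f}% of the flux")
# ~71%: skewed, but less extreme than the Pareto principle's 80:20.
```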

The findings will provide a simple way for researchers to explore additional links between net primary production at the ocean surface and deep-sea flux. (Geophysical Research Letters, https://doi.org/10.1029/2021GL092895, 2021)

—Jack Lee, Science Writer

“Earth Cousins” Are New Targets for Planetary Materials Research

EOS - Thu, 06/10/2021 - 14:46

Are the processes that generate planetary habitability in our solar system common or rare elsewhere? Answering this fundamental question poses an enormous challenge.

For example, observing Earth-analogue exoplanets—that is, Earth-sized planets orbiting within the habitable zone of their host stars—is difficult today and will remain so even with the next-generation James Webb Space Telescope (JWST) and large-aperture ground-based telescopes. In coming years, it will be much easier to gather data on—and to test hypotheses about the processes that generate and sustain habitability using—“Earth cousins.” These small-radius exoplanets lack solar system analogues but are more accessible to observation because they are slightly bigger or slightly hotter than Earth.

Here we discuss four classes of exoplanets and the investigations of planetary materials that are needed to understand them (Figure 1). Such efforts will help us better understand planets in general and Earth-like worlds in particular.

Fig. 1. Shown here are four common exoplanet classes that are relatively easy to characterize using observations from existing telescopes (or telescopes that will be deployed soon) and that have no solar system analogue. Hypothetical cross sections for each planet type show interfaces that can be investigated using new laboratory and numerical experiments. CO2 = carbon dioxide, Fe = iron, H2O = water, Na = sodium.

What’s in the Air?

On exoplanets, the observable is the atmosphere. Atmospheres are now routinely characterized for Jupiter-sized exoplanets. And scientists are acquiring constraints for various atmospheric properties of smaller worlds (those with a radius R less than 3.5 Earth radii R⨁), which are very abundant [e.g., Benneke et al., 2019; Kreidberg et al., 2019]. Soon, observatories applying existing methods and new techniques such as high-resolution cross-correlation spectroscopy will reveal even more information.

For these smaller worlds, as for Earth, a key to understanding atmospheric composition is understanding exchanges between the planet’s atmosphere and interior during planet formation and evolution. This exchange often occurs at interfaces (i.e., surfaces) between volatile atmospheres and condensed (liquid or solid) silicate materials. For many small exoplanets, these interfaces exhibit pressure-temperature-composition (PTX) regimes very different from Earth’s and that have been little explored in laboratory and numerical experiments. To use exoplanet data to interpret the origin and evolution of these strange new worlds, we need new experiments exploring the relevant planetary materials and conditions.

Studying Earth cousin exoplanets can help us probe the delivery and distribution of life-essential volatile species—chemical elements and compounds like water vapor and carbon-containing molecules, for example, that form atmospheres and oceans, regulate climate, and (on Earth) make up the biosphere. Measuring abundances of these volatiles on cousin worlds that orbit closer to their star than the habitable zone is relatively easy to do. These measurements are fundamental to understanding habitability because volatile species abundances on Earth cousin exoplanets will help us understand volatile delivery and loss processes operating within habitable zones.

For example, rocky planets now within habitable zones around red dwarf stars must have spent more than 100 million years earlier in their existence under conditions exceeding the runaway greenhouse limit, suggesting surface temperatures hot enough to melt silicate rock into a magma ocean. So whether these worlds are habitable today depends on the amount of life-essential volatile elements supplied from sources farther from the star [e.g., Tian and Ida, 2015], as well as on how well these elements are retained during and after the magma ocean phase.

Volatiles constitute a small fraction of a rocky planet’s mass, and quantifying their abundance is inherently hard. However, different types of Earth cousin exoplanets offer natural solutions that can ease volatile detection. For example, on planets known as sub-Neptunes, the spectroscopic fingerprint of volatiles could be easier to detect because of their mixing with lower–molecular weight atmospheric species like hydrogen and helium. These lightweight species contribute to more puffed-up (expanded) and thus more detectable atmospheres. Hot, rocky exoplanets could “bake out” volatiles from their interiors while also heating and puffing up the atmosphere, which would make spectral features more visible. Disintegrating rocky planets may disperse their volatiles into large, and therefore more observable, comet-like tails.
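
The “puffiness” argument can be made quantitative with the atmospheric scale height, H = kT/(μ m_u g), which grows as the mean molecular weight μ of the atmosphere falls. The sketch below uses assumed round-number temperatures and gravity rather than values for any particular planet.

```python
# Scale-height comparison showing why low-molecular-weight atmospheres
# are "puffier". Temperature and gravity are assumed round numbers.
K_B = 1.380649e-23     # Boltzmann constant, J/K
M_U = 1.66053907e-27   # atomic mass unit, kg

def scale_height_km(temp_k, mean_mol_weight, g_ms2):
    """Atmospheric scale height H = kT / (mu * m_u * g), in kilometers."""
    return K_B * temp_k / (mean_mol_weight * M_U * g_ms2) / 1e3

# H2/He-dominated envelope (mu ~ 2.3) vs. an N2-dominated atmosphere
# (mu ~ 28), both at 700 K with g = 10 m/s^2:
print(f"H2/He-rich: {scale_height_km(700, 2.3, 10.0):.0f} km")   # ~250 km
print(f"N2-rich:    {scale_height_km(700, 28.0, 10.0):.0f} km")  # ~20 km
```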

Let’s look at each of these examples further.

Unexpected Sub-Neptunes

About 1,000 sub-Neptune exoplanets (radius of 1.6–3.5 R⨁) have been confirmed. These planets, which are statistically about as common as stars, blur the boundary between terrestrial planets and gas giants.

A warm, Neptune-sized exoplanet orbits the red dwarf star GJ 3470. Intense radiation from the star heats the planet’s atmosphere, causing large amounts of hydrogen gas to stream off into space. Credit: NASA/ESA/D. Player (STScI)

Strong, albeit indirect, evidence indicates that the known sub-Neptunes are mostly magma by mass and mostly atmosphere by volume (for a review, see Bean et al. [2021]). This evidence implies that an interface occurs, at pressures typically between 10 and 300 kilobars, between the magma and the molecular hydrogen (H2)-dominated atmosphere on these planets. Interactions at and exchanges across this interface dictate the chemistry and puffiness of the atmosphere. For example, water can form and become a significant fraction of the atmosphere, leading to more chemically complex atmospheres.
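
The quoted pressure range can be sanity-checked with a hydrostatic estimate, P ≈ g M_env / (4πR²), where M_env is the envelope mass. A minimal sketch, assuming illustrative planet parameters (and using the observed radius for the interface depth, itself a simplification):

```python
# Hydrostatic estimate of the magma-atmosphere interface pressure:
# P ~ g * M_env / (4 * pi * R^2). Planet mass, radius, and envelope
# fractions are assumed illustrative values.
import math

G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
M_EARTH = 5.972e24     # kg
R_EARTH = 6.371e6      # m

def interface_pressure_kbar(mass_earths, radius_earths, envelope_fraction):
    m = mass_earths * M_EARTH
    r = radius_earths * R_EARTH
    g = G * m / r**2                               # gravity at the interface
    p_pascal = g * (envelope_fraction * m) / (4 * math.pi * r**2)
    return p_pascal / 1e8                          # 1 kbar = 1e8 Pa

# A 5-Earth-mass, 2.5-Earth-radius sub-Neptune with a few percent H2 by mass:
for frac in (0.01, 0.03, 0.10):
    print(f"{frac:.0%} envelope -> ~{interface_pressure_kbar(5, 2.5, frac):.0f} kbar")
```

With envelope fractions of a few percent to roughly ten percent, these rough numbers land near or within the 10 to 300 kilobar range discussed above.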

Improved molecular dynamics calculations are needed to quantify the solubilities of gases and gas mixtures in realistic magma ocean compositions (and in iron alloys composing planetary cores, which can also serve as reservoirs for volatiles) over a wider range of pressures and temperatures than has been studied to date. These calculations should be backed up by laboratory investigations of such materials using high-pressure instrumentation like diamond anvil cells. Together, the calculations and experiments will provide data to help determine the equation of state (the relationship among pressure, volume, and temperature), transport properties, and chemical kinetics of H2-magma mixtures as they might exist on these exoplanets.

Fig. 2. Ranges of plausible conditions at the interfaces between silicate surface rocks and volatile atmospheres on different types of worlds are indicated in this pressure–temperature (P-T) diagram. Conditions on Earth, as well as other relevant conditions (critical points are the highest P-T points where materials coexist in gaseous and liquid states, and triple points are where three phases coexist), are also indicated. Mg2SiO4 = forsterite, an igneous mineral that is abundant in Earth’s mantle.

Because sub-Neptunes are so numerous, we cannot claim to understand the exoplanet mass-radius relationship in general (in effect, the equation of state of planets in the galaxy) without understanding interactions between H2 and magma on sub-Neptunes. To understand the extent of mixing between H2, silicates, and iron alloy during sub-Neptune assembly and evolution, we need more simulations of giant impacts during planet formation [e.g., Davies et al., 2020], as well as improved knowledge of convective processes on these planets. Within the P-T-X regimes of sub-Neptunes, full miscibility between silicates and H2 becomes important (Figure 2).

Beyond shedding light on the chemistry and magma-atmosphere interactions on these exoplanets, new experiments may also help reveal the potential for and drivers of magnetic fields on sub-Neptunes. Such fields might be generated within both the atmosphere and the magma.

Hot and Rocky

From statistical studies, we know that most stars are orbited by at least one roughly Earth-sized planet (radius of 0.75–1.6 R⨁) that is irradiated more strongly than our Sun’s innermost planet, Mercury. These hot, rocky exoplanets, of which about a thousand have been confirmed, experience high fluxes of atmosphere-stripping ultraviolet photons and stellar wind. Whether they retain life-essential elements like nitrogen, carbon, and sulfur is unknown.

On these hot, rocky exoplanets—and potentially on Venus as well—atmosphere-rock or atmosphere-magma interactions at temperatures too high for liquid water will be important in determining atmospheric composition and survival. But these interactions have been only sparingly investigated [Zolotov, 2018].

Many metamorphic and melting reactions between water and silicates under kilopascal to tens-of-gigapascal pressures are already known from experiments or are tractable using thermodynamic models. However, less well understood processes may occur in planets where silicate compositions and proportions are different than they are on Earth, meaning that exotic rock phases may be important. Innovative experiments and modeling that consider plausible exotic conditions will help us better understand these planets. Moreover, we need to conduct vaporization experiments to probe whether moderately volatile elements are lost fast enough from hot, rocky planets to form a refractory lag and reset surface spectra.

Exotic Water Worlds?

Water makes up about 0.01% of Earth’s mass. In contrast, the mass fraction of water on Europa, Ceres, and the parent bodies of carbonaceous chondrite meteorites is some 50–3,000 times greater than on Earth. Theory predicts that such water-rich worlds will be common not only in habitable zones around other stars but also in closer orbits. The JWST will be able to confirm or refute this theory [Greene et al., 2016].

If we could descend through the volatile-rich outer envelope of a water world, we might find habitable temperatures at shallow depths [Kite and Ford, 2018]. Some habitable layers may be cloaked beneath H2. Farther down, as the atmospheric pressure reaches 10 or more kilobars, we might encounter silicate-volatile interfaces featuring supercritical fluids [e.g., Nisr et al., 2020] and conditions under which water can be fully miscible with silicates [Ni et al., 2017].
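
The depths involved can be gauged from the hydrostatic relation P ≈ ρgh. A minimal sketch, assuming constant density and Earth-like gravity (both crude at these pressures):

```python
# Back-of-envelope depth at which a water-world ocean reaches a given
# pressure, from P ~ rho * g * h. Constant density and g = 10 m/s^2
# are crude assumptions; water compresses and g varies with depth.
def depth_km(pressure_kbar, g_ms2=10.0, rho_kgm3=1000.0):
    return pressure_kbar * 1e8 / (rho_kgm3 * g_ms2) / 1e3   # 1 kbar = 1e8 Pa

print(f"10 kbar is reached near {depth_km(10):.0f} km depth")   # ~100 km
```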

We still need answers to several key questions about these worlds. What are the equilibria and rates of gas production and uptake for rock-volatile interfaces at water world “seafloors”? Can they sustain a habitable climate? With no land, and thus no continental weathering, can seafloor reactions supply life-essential nutrients? Do high pressures and stratification suppress the tectonics and volcanism that accelerate interior-atmosphere exchange [Kite and Ford, 2018]?

As for the deep interiors of Titan and Ganymede in our own solar system, important open questions include the role of clathrates (compounds like methane hydrates in which one chemical component is enclosed within a molecular “cage”) and the solubility and transport of salts through high-pressure ice layers.

Experiments are needed to understand processes at water world seafloors. Metamorphic petrologists are already experienced with the likely pressure-temperature conditions in these environments, and exoplanetary studies could benefit from their expertise. Relative to rock compositions on Earth, we should expect exotic petrologies on water worlds—for example, worlds that are as sodium rich as chondritic meteorites. Knowledge gained through this work would not only shed light on exoplanetary habitability but also open new paths of research into studying exotic thermochemical environments in our solar system.

Magma Seas and Planet Disintegration

Some 100 confirmed rocky exoplanets are so close to their stars that they have surface seas of very low viscosity magma. The chemical evolution of these long-lived magma seas is affected by fractional vaporization, in which more volatile materials rise into the atmosphere and can be relocated to the planet’s dark side or lost to space [e.g., Léger et al., 2011; Norris and Wood, 2017], and perhaps by exchange with underlying solid rock.

Magma planets usually have low albedos, reflecting relatively little light from their surfaces. However, some of these planets appear to be highly reflective, perhaps because their surfaces are distilled into a kind of ceramic rich in calcium and aluminum. One magma planet’s thermal signature has been observed to vary from month to month by a factor of 2 [Demory et al., 2016], implying that it undergoes a global energy balance change more than 10,000 times greater than that from anthropogenic climate change on Earth. Such large swings suggest that fast magma ocean–atmosphere feedbacks operate on the planet.
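
The size of that comparison follows from the Stefan-Boltzmann law. Below is a rough check assuming a round-number dayside temperature for a generic hot magma planet; it does not use measured values for the planet studied by Demory et al. [2016].

```python
# Rough check of the ">10,000x" comparison via the Stefan-Boltzmann law.
# The dayside temperature is an assumed round number for a hot magma
# planet, not a measurement of the planet in Demory et al. [2016].
SIGMA = 5.670374e-8        # Stefan-Boltzmann constant, W m^-2 K^-4

t_dayside = 2500.0                         # K, assumed
flux_hot = SIGMA * t_dayside**4            # emitted flux in the bright state
flux_swing = flux_hot / 2                  # a factor-of-2 drop in emission

anthropogenic_forcing = 2.7                # W/m^2, approximate present-day value
print(f"Flux swing: ~{flux_swing:.1e} W/m^2")
print(f"Ratio to anthropogenic forcing: ~{flux_swing / anthropogenic_forcing:,.0f}x")
```

Under these assumptions the swing exceeds anthropogenic forcing by a factor of several hundred thousand, comfortably above the 10,000-fold figure.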

To learn more about the chemical evolution and physical properties of exoplanet magma seas, we need experiments like those used to study early-stage planet formation, which can reveal information about silicate vaporization and kinetics under the temperatures (1,500–3,000 K) and pressures (10⁻⁵ to 100 bars) of magma planet surfaces.

Exoplanets and exoplanetesimals that stray too close to their stars are destroyed—about five such cases have been confirmed. These disintegrating planets give geoscientists direct views of exoplanetary silicates because the debris tails can be millions of kilometers long [van Lieshout and Rappaport, 2018]. For disintegrating planets that orbit white dwarf stars, the debris can form a gas disk whose composition can be reconstructed [e.g., Doyle et al., 2019].

To better read the signals of time-variable disintegration, we need more understanding of how silicate vapor in planetary outflows condenses and nucleates, as well as of fractionation processes at and above disintegrating planets’ surfaces that may cause observed compositions in debris to diverge from the bulk planet compositions.

Getting to Know the Cousins

In the near future, new observatories like JWST and the European Space Agency’s Atmospheric Remote-sensing Infrared Exoplanet Large-survey (ARIEL, planned for launch in 2029) will provide new data. When they do, and even now before they come online, investigating Earth cousins will illuminate the processes underpinning habitability in our galaxy and reveal much that is relevant for understanding Earth twins.

From sub-Neptunes, for example, we can learn about volatile delivery processes. From hot, rocky planets, we can learn about atmosphere-interior exchange and atmospheric loss processes. From water worlds, we can learn about nutrient supplies in exoplanetary oceans and the potential habitability of these exotic environments. From disintegrating planets, we can learn about the interior composition of rocky bodies.

Laboratory studies of processes occurring on these worlds require only repurposing and enhancing existing experimental facilities rather than investing in entirely new ones. From a practical standpoint, the scientific rewards of studying Earth cousins are low-hanging fruit.

Acknowledgments

We thank the organizers of the AGU-American Astronomical Society/Kavli workshop on exoplanet science in 2019.

Modeling Urban-weather Effects Can Inform Aerial Vehicle Flights

EOS - Wed, 06/09/2021 - 13:06

New modes of aerial operations are emerging in the urban environment, collectively known as Advanced Air Mobility (AAM). These include electrically propelled vertical takeoff and landing aerial vehicles for infrastructure surveillance, goods delivery, and passenger transportation. Safe and efficient deployment of these activities, however, requires ultra-fine weather and turbulence guidance products. Initial testing and demonstration exercises are planned for the very near future, which makes this work especially timely.

To enable these new aerial operations to succeed in the urban environment, the meteorological community must provide relevant guidance to inform and support them. Muñoz-Esparza et al. [2021] demonstrate how seasonal, diurnal, day-to-day, and rapidly evolving sub-hourly meteorological phenomena create unique wind and turbulence distributions within the urban canopy. They showcase the potential for efficient ultra-fine resolution atmospheric models to understand and predict the urban weather impacts that are critical to these AAM operations.

Citation: Muñoz-Esparza, D., Shin, H., Sauer, J. et al. [2021]. Efficient GPU Modeling of Street-Scale Weather Effects in Support of Aerial Operations in the Urban Environment. AGU Advances, 2, e2021AV000432. https://doi.org/10.1029/2021AV000432

—Donald Wuebbles, Editor, AGU Advances

Raising Central American Orography Improves Climate Simulation

EOS - Wed, 06/09/2021 - 13:05

Global climate models (GCMs) suffer from a long-standing tropical rainfall bias: they produce double rainfall peaks on both sides of the equator rather than a single peak just north of it, a problem known as the double Intertropical Convergence Zone (ITCZ) bias. This tropical mean state bias limits the fidelity of GCMs in projecting future climate. Much effort has gone into reducing the double ITCZ bias, but it has persisted since the early days of model development.

Baldwin et al. [2021] suggest that a significant portion of the double ITCZ bias originates from models’ underestimation of Central American orography. Orographic peaks are often smoothed out in models that use observed orography averaged onto the model grid. The authors demonstrate that raising Central American orography reduces the double ITCZ bias because the northeastern tropical Pacific becomes warmer as easterlies are blocked. The study offers a simple, computationally inexpensive, yet physically based method for improving the pervasive double ITCZ bias.
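
The smoothing effect itself is easy to demonstrate: block-averaging high-resolution terrain onto a coarser model grid sharply lowers narrow peaks. The sketch below uses a synthetic one-dimensional ridge, not real Central American topography.

```python
# How averaging high-resolution terrain onto a coarse model grid
# flattens orographic peaks. The "terrain" is a synthetic narrow
# ridge, not real Central American topography.
import numpy as np

fine = np.zeros(64)                              # 64 high-resolution cells
fine[30:34] = [1500.0, 3000.0, 3000.0, 1500.0]   # a narrow ridge, meters

coarse = fine.reshape(8, 8).mean(axis=1)         # average onto 8 coarse cells

print(f"Peak on the fine grid:   {fine.max():.0f} m")
print(f"Peak on the coarse grid: {coarse.max():.0f} m")   # far lower
```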

Citation: Baldwin, J., Atwood, A., Vecchi, G. and Battisti, D. [2021]. Outsize Influence of Central American Orography on Global Climate. AGU Advances, 2, e2020AV000343. https://doi.org/10.1029/2020AV000343

The Earth in Living Color: Monitoring Our Planet from Above

EOS - Wed, 06/09/2021 - 12:18

For more than five decades, satellites orbiting Earth have recorded and measured different characteristics of the land, oceans, cryosphere, and atmosphere, and how they are changing. Observations of planet Earth from space are a critical resource for science and society. With the planet under pressure from ever-expanding and increasingly intensive human activities combined with climate change, observations from space are increasingly relied upon to monitor and to inform adaptation and mitigation activities to maintain food security, biodiversity, water quality, and responsiveness to disasters.

A new cross-journal special collection, The Earth in Living Color, aims to provide a state-of-the-art and timely assessment of how advances in remote sensing are revealing new insights and understanding for monitoring our home planet. We encourage papers that cover the use of imaging spectroscopy and thermal infrared remote sensing to observe and understand Earth’s vegetation, coastal aquatic ecosystems, surface mineralogy, snow dynamics, and volcanic activity. These may range from architecture studies that determine spaceborne measurement objectives, to papers on algorithm development, calibration and validation, and modeling to support traceability. Papers can be submitted either to Journal of Geophysical Research: Biogeosciences or Earth and Space Science.

The special collection is associated with the NASA Surface Biology and Geology Designated Observable (SBG), and will document:

how SBG will meet science and applications measurement objectives; how international partnerships (with the European Space Agency’s Copernicus Hyperspectral Imaging Mission (CHIME) and Land Surface Temperature Monitoring mission (LSTM) and with the Centre National d’Études Spatiales (CNES) and Indian Space Research Organization’s (ISRO) Thermal infraRed Imaging Satellite for High-resolution Natural resource Assessment mission (TRISHNA) will improve revisit times; describe new developments in atmospheric correction, surface reflectance retrievals, and algorithms; and detail synergies with other NASA Decadal Survey missions.

SBG leverages a rich heritage of airborne imaging spectroscopy that includes the AVIRIS and PRISM instruments and thermal imagers such as HyTES and MASTER, as well as space-based observations from pathfinder missions such as Hyperion and current missions, including ECOSTRESS, PRISMA, DESIS, and HISUI.

Satellite measurements represent very large investments, and space agencies in the United States and around the globe organize their efforts to maximize the return on those investments. For instance, the US National Research Council conducts a decadal survey of NASA Earth science and applications to prioritize observations of the atmosphere, ocean, land, and cryosphere. The most recent decadal survey, published in 2017, prioritized observations of surface biology and geology using a visible to shortwave infrared (VSWIR) imaging spectrometer and a multispectral thermal infrared (TIR) imager to meet a range of needs. As announced by NASA in May 2021, SBG will become integrated into a larger NASA Earth System Observatory (ESO) that will include observations of aerosols; clouds, convection, and precipitation; mass change; and surface deformation and change.

The SBG science, applications, and technology build on more than a decade of experience and planning for such a mission, rooted in the previous Hyperspectral Infrared Imager (HyspIRI) mission study. During a 3-year study (2018–2021), the SBG team analyzed needed instrument characteristics (spatial, temporal, and spectral resolution; measurement uncertainty) and assessed the cost, mass, power, volume, and risk of different architectures. The SBG Research and Applications team examined available algorithms, calibration and validation, and societal applications, and used end-to-end modeling to assess uncertainty. The team also identified valuable opportunities for international collaboration to increase the frequency of revisit through data sharing, adding value for all partners. Analysis of the science, applications, architecture, and partnerships led to a clear measurement strategy and a well-defined observing system architecture.

SBG addresses global vegetation, aquatic, and geologic processes that quantify critical aspects of the land surface and that interact with Earth’s climate system, responding to NASA’s Decadal Survey priorities. The SBG observing system has a defined set of critical observables that equally inform science and environmental management and policy for a host of societal benefit areas. Credit: NASA JPL

First, and perhaps foremost, SBG will be a premier integrated observatory for observing the emerging impacts of climate change. It will characterize the diversity of plant life by resolving chemical and physiological signatures. It will address wildfire, observing pre-fire risk, fire behavior, and post-fire recovery. It will provide information for the coastal zone on phytoplankton abundance, water quality, and aquatic ecosystem classification. And it will inform responses to natural and anthropogenic hazards and disasters, guiding reaction to a wide range of events, including oil spills, toxic minerals, harmful algal blooms, landslides, and other geological hazards such as volcanic activity.

The NASA Earth System Observatory initiates a new era of scientific monitoring, with SBG providing an unprecedented perspective of Earth’s surface through new spatial, temporal, and spectral information with high signal-to-noise ratios. The Earth in Living Color special collection will showcase the latest advances in remote sensing that are providing vital insights into changes on planet Earth.

—David Schimel (david.schimel@jpl.nasa.gov, 0000-0003-3473-8065), NASA Jet Propulsion Laboratory, USA; and Benjamin Poulter (0000-0002-9493-8600), NASA Goddard Space Flight Center, USA

Siltation Threatens Historic North Indian Dam

EOS - Wed, 06/09/2021 - 12:15

When it opened in 1963, Bhakra Dam was called a “new temple of resurgent India” by Jawaharlal Nehru, India’s first prime minister. Today the dam is threatened as its reservoir rapidly fills with silt.

Much to the worry of hydrologists monitoring the situation, the reservoir—Gobind Sagar Lake—has a rapidly growing sediment delta that, once it reaches the dam, will adversely affect power generation and water deliveries.

Bhakra Dam stands 226 meters tall and stretches 518 meters long, making it one of the largest dams in India. Electricity generated by the dam supports the states of Himachal Pradesh (where the dam is located), Punjab, Haryana, and Rajasthan, and the union territories of Chandigarh and Delhi. The reservoir supplies these areas with water for drinking, hygiene, industry, and irrigation. Loss of reservoir capacity as a result of sedimentation could thus have severe consequences for the region’s water management system and power grid.

A Leopard’s Leap to a Green Revolution

In 1908, British civil services officer Sir Louis Dane claimed to have witnessed a leopard leaping from one end of a gorge on the Sutlej River to the other. “Here’s a site made by God for storage,” he wrote. Little happened, however, until 40 years later, when Nehru took up the proposal as one of the first large infrastructure projects in India after independence.

Bhakra Dam’s waters quickly catalyzed the nation’s green revolution of increased agricultural production. In the early 1960s, for instance, 220,000 hectares of rice were under paddy cultivation in Punjab. Within 10 years, that number increased to 1.18 million hectares, and it doubled again by 1990. Today Punjab contributes up to 50% of India’s rice supply.

Parminder Singh Dhanju, a rural resident of Rajasthan whose village is about 565 kilometers from Bhakra Dam, has a farm fed by canals originating from the reservoir. “The water availability has changed the lives of us villagers,” he said. “Before the canal brought water to our area, we were poor [and] used to live [lives] of nomads, in the sand dunes. Now we grow a variety of crops such as wheat, rice, cotton, and citrus fruits (oranges and kinnows), and we are referred [to] as affluent farmers.”

The Saga of Silt

According to investigations led by D. K. Sharma, former chairman of the Bhakra Beas Management Board (BBMB, the power company responsible for the dam), nearly a quarter of Gobind Sagar Lake has filled with silt. The sediment flows in from the lake’s catchment areas, which are spread over 36,000 square kilometers in the Himalayas.

“The storage of the reservoir is 9.27 billion cubic meters, out of which 2.13 billion cubic meters are filled with silt, which is an alarming situation,” explained Sharma. He said the studies related to silt pileup are carried out every 2 years.

Sharma and other BBMB engineers submitted a report last year on siltation at Bhakra Dam. In it, Sharma said the dam was projected to be an effective reservoir for at least 100 years. However, he explained, the silt buildup will likely shorten that time frame. “It depends on the amount of silt in the reservoir,” he said. “The increase in siltation will hasten the process of turning the dam into a dead project, making the canal system downstream vulnerable to deposition of silt and floods.”

The Way Out

To combat siltation, Sharma suggested extensive reforestation in the reservoir’s catchment area. “The partner states of BBMB—Punjab, Haryana, Rajasthan, and Himachal Pradesh—need to plan forestation to bind the loose soil,” he said.

“If we can reduce silt inflows by 10%, the dam’s life can be extended by 15–20 years,” he added.
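
Those numbers are roughly self-consistent under a constant-inflow assumption. The sketch below uses only figures quoted in this article; the constant average inflow rate, and the neglect of dead storage and delta geometry, are simplifying assumptions.

```python
# Consistency check using only figures quoted in the article; a constant
# average silt inflow since 1963 is a simplifying assumption, and dead
# storage and delta geometry are ignored.
capacity_m3 = 9.27e9       # total reservoir storage
silted_m3 = 2.13e9         # volume already filled with silt
years_operating = 2021 - 1963

inflow_m3_per_year = silted_m3 / years_operating
remaining_years = (capacity_m3 - silted_m3) / inflow_m3_per_year

print(f"Silted fraction: {silted_m3 / capacity_m3:.0%}")   # 'nearly a quarter'
print(f"Implied remaining fill time: ~{remaining_years:.0f} years")
# Cutting inflow 10% stretches the remaining time by 1/0.9, about 11%,
# i.e. roughly 20 extra years, close to Sharma's 15-20 year figure:
print(f"Extension from 10% less silt: ~{remaining_years * (1 / 0.9 - 1):.0f} years")
```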

BBMB joint secretary Anurag Goyal heads the reforestation project around the dam. He said that in 2019, 600,000 saplings were planted over the reservoir’s catchment area. “We have resumed plantation that was temporarily halted in 2020 due to COVID-19 pandemic.”

Other suggestions to prevent or mitigate siltation include dredging the reservoir, although Goyal dismisses that idea as cost prohibitive. Goyal agreed with Sharma that reforestation or other mitigation projects must include local governments. “Reforestation over [such a] vast area needs a road map and the involvement of the north Indian states…. We need to act fast and engage local population and NGOs to carry out plantation, before it’s too late.”

—Gurpreet Singh (@JournoGurpreet), Science Writer

Gulf Stream Intrusions Feed Diatom Hot Spots

EOS - Wed, 06/09/2021 - 12:09

The Gulf Stream, which has reliably channeled warm water from the tropics northward along the East Coast of North America for thousands of years, is changing. Recent research shows that it may be slowing down, and more and more often, the current is meandering into the Mid-Atlantic Bight—a region on the continental shelf stretching from North Carolina to Massachusetts and one of the most productive marine ecosystems in the world.

Previous studies have suggested that this intrusion of Gulf Stream water, which is comparatively low in nutrients at the surface, could hamper productivity. But in a new study, Oliver et al. found that intrusions of deeper, nutrient-rich Gulf Stream water can also feed hot spots of primary productivity.

By analyzing data collected by R/V Thomas G. Thompson in July of 2019, the team spotted a series of hot spots about 50 meters below the surface, just east of a large eddy known as a warm-core ring. This ring had formed off the side of the Gulf Stream current and was pushing westward toward the continental shelf, drawing cool water into the slope region off the edge of the shelf.

The hot spots had chlorophyll levels higher than those typically seen in the slope region and were packed with a diverse assemblage of diatoms, a class of single-celled algae. Studying images of the hot spots, the team found that the colony-forming diatom Thalassiosira diporocyclus was especially abundant.

The researchers used a model that combined upper ocean and biogeochemical dynamics to support the idea that the upwelling of Gulf Stream water moving northward into the Mid-Atlantic Bight could cause the hot spots to form. The study demonstrates how Gulf Stream nutrients could influence subsurface summer productivity in the region and that such hot spots should be taken into account when researchers investigate how climate change will reshape circulation patterns in the North Atlantic. (Geophysical Research Letters, https://doi.org/10.1029/2020GL091943, 2021)

—Kate Wheeling, Science Writer

