How the PNSN is preparing for the next big earthquake

The PNSN seismo lab in 2000 commemorating the 20th anniversary of the Mount Saint Helens seismic sequence.

The last earthquake to cause significant damage in the PNSN monitoring region – all of Washington and Oregon – took place more than a generation (20 years) ago: the M6.8 Nisqually earthquake of 2001. Since then, much has changed in the field of regional seismic hazard monitoring.

On the one hand, our regional monitoring capabilities have expanded hugely: more and better digital stations, better data transmission, faster and more powerful computers and processing software, and more PNSN staff and funding. The pace of these enhancements has accelerated over the past five years with the advent of the now-operational ShakeAlert Earthquake Early Warning system, which on its own has added more than 150 high-quality seismic monitoring stations across the region, along with more than a handful of new personnel to install and operate the new gear. On the other hand, there have been very few earthquakes large enough to be felt, let alone to generate strong shaking. As a result, our populace has lost conditioning and readiness (like an athlete benched for too long), and even within the PNSN we question how best to employ all our new assets when the next big shake comes.

In this dispatch, I’ll summarize what we at the PNSN are doing to test and improve our anticipated response in the face of this unusual (although not abnormal) prolonged seismic quiescence. First, we are reviewing our goals and priorities. Second, we are reviewing and revising our procedures in light of our new capabilities and organization. Third, we are stress-testing our revised and updated procedures and plans. Let’s take each of these in turn.

Deploying PNSN portable array instruments at the Rattlesnake Landslide in 2018. The main offset shows the stable (above) and active (below) portions of the ground.


In general, the PNSN's goals during an earthquake response can be categorized as either providing relevant accurate information to our stakeholders (officials, media/public, sponsors, etc.) or maintaining the systems and procedures that produce that information. Aligning our efforts to address and prioritize these goals is critical so that during the next seismic crisis we are focused not just on what we can do but are clear-headed about what we need to do.

We review and test our system with a series of earthquake “drills”. All hands participate in these drills. They start with a plausible scenario, for example a moderate (M 5.5) Puget Sound earthquake. We do a “tabletop” walk-through of anticipated impacts and network performance and operations in response to the scenario. This first run-through is about how things should go and includes a deep menu of concerns: anticipated data flow; product generation (e.g., earthquake origin information, ShakeAlert, ShakeMap, etc.) and delivery; message development; contact and coordination with external partners (e.g., emergency management agencies); data quality and availability; aftershock tracking; our internal coordination and communication; and media relations (traditional and social). It’s a long list, and in the heated moments after a damaging earthquake – one that could happen at any time of day – we expect it to be fraught. With what we learn from this drill we revise any policy or procedure as needed and update our documentation.

Finally, we stress-test our procedures with a more realistic drill in which a drill “MC” uses a script of our scenario but throws monkey-wrenches into it. What if a telemetry problem has taken out a section of the network? What if one of our main computers fails? What happens if we find we’ve published a bad magnitude or a duplicate location for an event? What if our website is brought down by too many requests? How do we prioritize and carry out data recovery if telemetry to a critical station is lost? What do we do if we can’t use our usual Zoom and Slack channels to coordinate internally? How do our procedures and plans hold up in the face of at least the challenges we anticipate? We invite our partners and stakeholders to participate in these drills so that they know what to expect and can discover any issues that might plague their use of our information.

Hopefully our testing regimen will lead to better performance, more reliable operation, and less stress and worry, letting us focus on ongoing and future growth and enhancements, the next round of testing, and so on – ever onward and upward!