Hazard Assessment Part 1: Earthquakes

[Figure: Seismic hazard probability map for Italy following the 2009 L’Aquila quake. Source: USGS]

The conference I was at over the last two weeks was notable for a very large number of sessions on hazard mitigation.  Volcanoes, earthquakes, tsunamis, floods, storms, and even asteroid impacts all fell under the umbrellas of the various branches of geology and physics represented.  Certainly, reducing the risk these events pose to populations is a huge part of why many governments fund a lot of earth science research, and it’s heartening to see progress in some areas.  There are, however, a number of things which concern me.

Primary among these concerns is the communication gap between scientists and the public – a gap which the media should be filling, but does a terrible job of.  One of the things any geoscientist has to get their head around in order to succeed is the concept of probability, and it is a subject which human beings as a whole are truly terrible at.  When discussing the probability of geological events, the vast majority of people have difficulty really grasping what geological timescales are, and how long a million years really is.  This is compounded by the fact that most people don’t worry too much about things that no one within their own lifetime has suffered – it’s like the classic ‘7-second memory’ joke about a goldfish swimming around a bowl and thinking everything is constantly new.
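
To put some rough numbers on that – and these are purely illustrative, not data for any real fault: the 500-year recurrence interval, the 80-year lifetime, and the assumption that each year is independent are all made up for the sake of the example – here is how an event that is rare on geological timescales looks over a single human lifetime:

```python
# Purely illustrative numbers: a hypothetical fault with one large quake
# every ~500 years on average, and an 80-year human lifetime. Years are
# treated as independent of one another (a simple Poisson-style approximation).

recurrence_years = 500
annual_probability = 1 / recurrence_years     # ~0.2% in any given year
lifetime_years = 80

# Chance of experiencing at least one such event within a lifetime
p_lifetime = 1 - (1 - annual_probability) ** lifetime_years

print(f"Annual probability: {annual_probability:.2%}")
print(f"Chance over {lifetime_years} years: {p_lifetime:.1%}")   # roughly 15%
```

A fifth of one percent in any given year, but better than a one-in-seven chance across a lifetime – which is exactly the sort of number people struggle to hold in their heads, and exactly what a hazard map is trying to express.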

Now, I’m not going to get into the meat and potatoes of risk analysis – it’s far too complex a subject to do any justice here, and besides, I’m not sure that I understand its foibles well enough to really comment on the details.  What I will say is that any form of risk analysis requires an understanding of the system you are analysing.  The better you understand the system, the better your risk analysis.

So, for example, what governments really want is for their geoscientists to provide them with information on when something is going to happen, what the affected region will be, and what the likely impacts will be.  That allows them to plan a response beforehand.  For that, scientists require a complete understanding of the particular system they are concerned with, from which they can then draw up detailed hazard maps.

Reality, however, is not quite so simple.

Let me bring in here the case of the L’Aquila earthquake in Italy, April 2009.  An extensional fault at about 9.5 km depth activated at half past three in the morning, triggering a magnitude 6.3 earthquake which killed 308 people and left 65,000 homeless.  Despite its relatively small impact compared to some more recent events, it has become one of the most controversial earthquakes in history.

The reason it is so controversial is that a lab technician by the name of Giampaolo Giuliani went on television about a month beforehand and predicted a large earthquake.  His claim was based on measurements of radon gas emissions, which might be expected to seep up from faults.  The important point here is that this particular forecasting method has shown no real predictive value in the past.  As a result his warning was ignored, and in fact he was reported to the police for “causing fear”.  It should also be pointed out that Giuliani has not published anything on his measurement methods, data, analysis or statistics which would support his claim.  He predicted a quake on the 29th of March, which rolled around, and nothing happened.  The head of the civil protection agency then heated up the situation by criticising “imbeciles who spread false information”, which might have been the end of it had the earthquake of the 6th of April not occurred.

The point here is that a researcher measured something which is not recognised as being indicative of an impending quake, predicted a quake on a specific date, was wrong, and then a big quake happened in the area a few days after his prediction (although over 50 km from the location he named).  The question, then, is whether his measurements and prediction were sound but he made the mistake of naming a specific date, or whether he just got ‘lucky’.  It’s an interesting and important question for a number of reasons:

1. To the best of our understanding, the precise activation time of a fault is unknowable without knowing precisely what the stresses and strains are at every point along the fault, what the material on either side of the fault is at every point along it, and precisely what the 3D shape of the fault is at every point underground.  We are orders of magnitude away from knowing this – we’re lucky if we know the fault is even there and which way it moves.

2. All earthquake prediction research (as a result of point 1) is focussed on either long-term probability studies, or prediction in the seconds before an event by looking at deformation of the ground.  This might be useful in remote triggering of emergency systems immediately before a large event (e.g. shutting down nuclear plants safely) – see the rough timing sketch after this list.  If there is a chance that some of these previously discarded methods are of use, then that is a Very Big Thing Indeed.

3. And perhaps the greatest problem in this case – seven scientists and technicians who were responsible for looking at the seismic data in the period before the earthquake are being tried for manslaughter.  This, then, is why the case has stirred up so much controversy.  The Italian prosecutors believe that we are able to predict earthquakes, and Giuliani, by fluke or foul, appears to have done just that.  He also hasn’t helped the situation by making claims such as “We have been able to predict these kinds of events for 10 years”.
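
The timing sketch promised in point 2: whether the trigger comes from precursory ground deformation or – as in operational early-warning systems – from detecting the first P-waves of a rupture already underway, the usable margin is a handful of seconds.  The wave speeds and the 80 km distance below are assumed typical values for illustration, not anything measured at L’Aquila.

```python
# Back-of-the-envelope warning window for an automated trigger system.
# The wave speeds and the distance are assumed, typical crustal values;
# none of this is data from the L'Aquila event.

vp = 6.0            # km/s, rough crustal P-wave speed
vs = 3.5            # km/s, rough crustal S-wave speed
distance_km = 80.0  # hypothetical distance from the epicentre to a facility

t_p = distance_km / vp        # arrival of the first, weaker P-waves
t_s = distance_km / vs        # arrival of the damaging S-waves
warning_window = t_s - t_p    # time available once the P-waves are detected

print(f"P-waves arrive after {t_p:.1f} s, S-waves after {t_s:.1f} s")
print(f"Usable warning window: ~{warning_window:.1f} s")   # about 9-10 s here
```

A window like that is enough to trip breakers or shut a reactor down automatically; it is nowhere near enough for any response that involves a human decision.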

The general public tend to believe what they’re told in science-related fields (with a few notable exceptions), in the same way we tend to believe medical diagnoses or the advice of any other specialist in a field.  When there are two conflicting scientific opinions, the problem becomes that the press are often all too keen to represent both arguments as equal, without any consideration of the relative merits of each.  The type example of this is the creationism/evolution debate, in which people are encouraged to ‘teach the controversy’, as if one argument has equal weight to the other in a science-based education.  Which is why we end up with shit like this:

Maybe Giuliani hit on something – maybe in that particular fault system radon can act as an indicative precursory emission.  However, his subsequent lack of data publication, and his outlandish claims about our ability to predict earthquakes, tend to support the conclusions of the initial review panel that the earthquake was unpredictable.  The 55 km spatial error is also significant, as it would suggest that even if the radon measurements were valid, they were still a long way from the actual fault slip zone.  The fact that the Italian government are now attempting to prosecute scientists for failing to predict the unpredictable is disturbing in the extreme.  The hazard potential in the area was known and dealt with.  The day geologists can tell you exactly when an earthquake is going to happen is a long way off.

About Pete Rowley

Earth Scientist with a background in volcanology and sedimentology. Enjoys a good rant, beer, and games. Dislikes reality TV, crowds, and unreasonable people.

