
The phantom earthquake: when automation goes wrong

news #tech

Who’s to blame: man or machine?

On 21 June 2017, a software bug resulted in an earthquake from 1925 being reported as a current event. The incident emphasised the need for a human element in a world becoming increasingly reliant on automation.

The series of events was something you’d expect at the start of a Netflix-funded Adam Sandler disaster flick.

The story began with a simple update to information about an earthquake that hit Isla Vista, California back in 1925.

Seismologists had complained the recorded location could be off by as much as six miles, so the US Geological Survey (USGS) set about amending the location of the quake.

The change was made, but it unexpectedly triggered an email alert warning of a new magnitude 6.8 quake. The alert system fires off warning messages to anyone signed up to the service, typically landing in their inboxes only minutes after a quake strikes.

Things got worse when the LA Times published an article about the apparent earth-shaking event. The publication uses an automation tool named Quakebot to write articles based on the information contained in USGS alerts.

Adding to the farce, a Tweet was automatically sent from the LA Times’ account the moment the article went live. Oops.

Tweet by @LANow

The LA Times says articles written by Quakebot are not auto-published. A human editor must approve each story before it appears on the website. In this instance, five years of using Quakebot without error seems to have led to complacency, as a member of staff apparently trusted the report and hit “publish”.

“Quakebot posts are single-source, relying on information provided by the US Geological Survey,” a spokesperson for the LA Times told iMediaEthics. When only a single source is available, there’s always the possibility that an error will filter down the chain.

As for the original quake report, the USGS has said it believes a bug in its software saw the updated quake misinterpreted “as a current event”.

The biggest sign that the quake wasn’t real (besides no one in the area feeling the tremor) was that the alert indicated the earthquake occurred on 29 June 2025 at 7:42am – a date eight years in the future. The automated systems missed it. The USGS has now taken steps to ensure this doesn’t happen again.
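One automated safeguard that would have caught the phantom alert is a sanity check on the event’s timestamp before anything is dispatched. Here’s a minimal sketch of the idea in Python – the names and thresholds are illustrative guesses, not the USGS’s actual code:

```python
from datetime import datetime, timedelta, timezone

# Illustrative thresholds – not the USGS's actual rules.
MAX_FUTURE_DRIFT = timedelta(minutes=5)  # tolerate minor clock skew
MAX_EVENT_AGE = timedelta(hours=24)      # anything older is not "breaking"

def should_dispatch_alert(event_time: datetime, now: datetime | None = None) -> bool:
    """Return True only if the event's timestamp looks like a current event."""
    now = now or datetime.now(timezone.utc)
    if event_time > now + MAX_FUTURE_DRIFT:
        return False  # dated in the future, like the phantom 2025 quake
    if now - event_time > MAX_EVENT_AGE:
        return False  # a historical revision, like the 1925 relocation
    return True

# The phantom alert's timestamp would have failed this check in 2017:
phantom = datetime(2025, 6, 29, 7, 42, tzinfo=timezone.utc)
today = datetime(2017, 6, 21, tzinfo=timezone.utc)
print(should_dispatch_alert(phantom, now=today))  # False
```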

Bob de Groot, of the USGS ShakeAlert Earthquake Early Warning Program, told iMediaEthics: “It was a learning experience for us. There is a fix that is in place now that will keep this from happening.”

He added that the USGS has introduced a “human element” to make sure future alerts are not dispatched without being seen by a human first.
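In code terms, that change amounts to putting a human approval step between the automated pipeline and the send button. A sketch of such a gate – hypothetical names, and certainly not the USGS’s real system:

```python
from typing import Callable

def dispatch_with_review(alert_text: str, send: Callable[[str], None]) -> None:
    """Hold an alert for human sign-off instead of sending it automatically."""
    print("PENDING ALERT:", alert_text)
    if input("Send this alert? [y/N] ").strip().lower() == "y":
        send(alert_text)
    else:
        print("Alert discarded by reviewer.")

# Usage: a reviewer spots the suspicious 2025 date and declines to send.
dispatch_with_review(
    "Magnitude 6.8 quake near Isla Vista, 29 June 2025 07:42",
    send=lambda text: print("SENT:", text),
)
```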

Automation enabled by computers can make our lives considerably easier. The difficulty comes in deciding how much human oversight, if any, is needed to keep these automations in check.

A very basic form of automation could be scheduling a post on Twitter. Picture a scenario where you write a Tweet about dealing with a heatwave, and set it to publish on Wednesday morning. However, the hot weather doesn’t arrive as predicted and you wake up to rain.

In this instance the user has to step in to prevent the social media post from going live. The software has no deeper logic that weighs the contents of the Tweet against external conditions to decide whether the post is still relevant at the time of publication.

In this situation, nothing bad or overly damaging would happen if the Tweet was published as scheduled – just a few red faces.
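A smarter scheduler could at least re-check relevance at publish time before posting. Here’s a minimal sketch, assuming a user-supplied `is_still_relevant` callback (say, one that queries a weather forecast) – nothing here is a real Twitter API call:

```python
import time
from dataclasses import dataclass
from typing import Callable

@dataclass
class ScheduledPost:
    text: str
    publish_at: float                      # Unix timestamp
    is_still_relevant: Callable[[], bool]  # e.g. checks a weather forecast

def run_scheduler(post: ScheduledPost, publish: Callable[[str], None]) -> None:
    """Sleep until publish time, then re-check relevance before posting."""
    time.sleep(max(0.0, post.publish_at - time.time()))
    if post.is_still_relevant():
        publish(post.text)
    else:
        print(f"Held back: {post.text!r} is no longer relevant")

# Usage: the heatwave Tweet is held back when the forecast flips to rain.
post = ScheduledPost(
    text="Five tips for surviving today's heatwave",
    publish_at=time.time() + 1,       # "Wednesday morning"
    is_still_relevant=lambda: False,  # pretend the forecast now says rain
)
run_scheduler(post, publish=lambda text: print("Published:", text))
```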

Far more serious was the very real 1983 Soviet nuclear warning false alarm, which could have ended the world as we knew it.

The Soviet Union’s nuclear early warning system reported the launch of multiple ballistic missiles from bases in the US – an attack that would ordinarily be met with extreme force. Nuclear war would have followed.

The planet has Soviet Air Defence Forces officer Stanislav Petrov to thank for delaying his response and correctly identifying the warning as a false alarm.

Had the early warning system been given the authority to call for a retaliatory attack without human authorisation, the situation would have panned out very differently.

The error was caused by a rare alignment of conditions in which sunlight reflecting off high-altitude clouds was misinterpreted as missile launches.

So who’s to blame when automation goes wrong?

Whether someone was too trusting of tech, or there were holes in the computer programming, ultimately the fault lies with humans.

Automation, when done right, is amazing. But right now, we must maintain a balance between man and machine. Tech is a tool, and we continue to be a vital – albeit fallible – cog in an automated world.