Increased error rates
Incident Report for Elastic Path
Postmortem

On August 27th, our engineers were alerted to an increase in error responses from the API. On investigating, they identified that multiple services were seeing either increased error rates or increased response times, and that there was also a large backlog of webhooks waiting to be processed. We quickly tied these issues to extremely high CPU and memory usage on one of our MongoDB clusters, specifically the one backing the catalogue service database. Other services were affected whenever they needed to interact with the catalogue service.

Although traffic levels were no higher than usual and remained steady, we could see no sign of CPU or memory usage dropping. Further investigation showed that we were exhausting the write ticket limit on the primary node. New write queries were therefore being queued, which consumed additional CPU and memory on work unrelated to serving queries and reduced the query throughput we were able to handle.
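
As a rough illustration of the kind of check involved (not our exact tooling), MongoDB exposes the WiredTiger ticket counts through serverStatus; when the number of available write tickets reaches zero, new writes queue up as described above. The connection string below is a hypothetical placeholder.

from pymongo import MongoClient

# Hypothetical connection string; point this at the primary node.
client = MongoClient("mongodb://primary.example.internal:27017")

# serverStatus reports WiredTiger's concurrent transaction tickets.
tickets = client.admin.command("serverStatus")["wiredTiger"]["concurrentTransactions"]["write"]

print("write tickets available:", tickets["available"])   # 0 means new writes are queuing
print("write tickets in use:", tickets["out"])
print("write ticket limit:", tickets["totalTickets"])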

Although this was not a major outage, it was affecting some API calls. To rectify the issue, we first worked to reduce CPU usage on the primary node so that we could increase the available resources on all nodes. Once CPU usage was manageable, we replaced every node in the cluster; each node now has twice as much CPU and RAM available. We plan to leave the larger servers in place for the foreseeable future, which makes a recurrence unlikely while we identify and correct the problem queries. We're also adding monitoring to catch problems like this before they have an impact on responses to end users.
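
As a sketch of the monitoring we have in mind (illustrative names and thresholds only, not our production configuration), a simple check could poll the same ticket counts and alert well before they are exhausted:

import time
from pymongo import MongoClient

ALERT_THRESHOLD = 16  # illustrative: alert long before the default pool of 128 write tickets runs out

def available_write_tickets(client):
    status = client.admin.command("serverStatus")
    return status["wiredTiger"]["concurrentTransactions"]["write"]["available"]

def watch(uri="mongodb://primary.example.internal:27017", interval_seconds=30):
    client = MongoClient(uri)
    while True:
        available = available_write_tickets(client)
        if available < ALERT_THRESHOLD:
            # In a real setup this would page the on-call engineer rather than print.
            print(f"ALERT: only {available} write tickets available on the primary")
        time.sleep(interval_seconds)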

Retrospectively, we identified that the initial spike was caused by a large influx of write queries combined with some resource-intensive read queries. The primary node quickly got into a state from which it was unable to recover completely. The delay in webhook delivery was deemed to have been caused by the initial influx of write queries rather than, as we originally thought, by the database issue itself: the backlog of webhooks was so large that the system processing them took much longer than usual to work through it. We've since increased the throughput of this system so that it is much less sensitive to increases in incoming events.
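
To illustrate what increasing the throughput means in practice (a minimal sketch with hypothetical names, not our actual webhook system), the change amounts to delivering queued events with more concurrent workers so that a spike in incoming events drains quickly instead of accumulating:

from concurrent.futures import ThreadPoolExecutor
import requests

WORKER_COUNT = 32  # illustrative: raised from a smaller pool to drain backlogs faster

def deliver_webhook(event):
    """POST one queued event to its subscriber endpoint."""
    response = requests.post(event["url"], json=event["payload"], timeout=10)
    return response.status_code

def drain_backlog(events):
    """Deliver queued events concurrently rather than one at a time."""
    with ThreadPoolExecutor(max_workers=WORKER_COUNT) as pool:
        return list(pool.map(deliver_webhook, events))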

Posted Aug 30, 2019 - 16:39 UTC

Resolved
The affected database nodes have now been replaced and all outstanding webhooks have been processed.
Posted Aug 27, 2019 - 15:16 UTC
Update
Requests are being handled properly and error rates are no longer elevated. We are replacing some database hosts to reduce the probability of this happening again. Webhooks are being processed successfully; however, there are delays as we work through the backlog.
Posted Aug 27, 2019 - 11:37 UTC
Update
Error rates have now returned to normal; we're continuing to work on the underlying issue. There may be delays with some webhooks being sent.
Posted Aug 27, 2019 - 10:45 UTC
Identified
We have identified a database issue as the root cause and are working on a fix.
Posted Aug 27, 2019 - 09:51 UTC
Investigating
We're investigating increased error rates from the API.
Posted Aug 27, 2019 - 09:44 UTC
This incident affected: EU (EU Shopper/Storefront API, EU Webhooks).