I have a friend who says Murphy rides a bicycle up and down the aisles of a data center, and it's true. Everything that can go wrong will go wrong, and worse. That's why it pays to anticipate and to test our systems. And that last part is what rarely gets done: rehearsing and testing our "resilience" capability to see whether it is real, or whether someone just followed a manual to implement it and left it at that.
Throughout my career I have seen many data centers and many installations, and I must say that this "resilience" capability is a paper tiger. Many CIOs are putting out fires with all four hands, and if you ask them about their resilience, they raise their eyebrows, look at you fondly, and answer something else. It's understandable. We have had many years of crisis and budget restrictions, but also of explosive growth in the importance of IT systems; without them, hardly any company would exist today. And yet few see the value in preparing for a disaster by testing their capabilities with a controlled drill.
That's why I liked this article about Netflix's preparation for a serious outage at Amazon, its cloud provider. Read it and tell me what you think...
CIO Magazine, 3/10/2014.
Sometimes the best path to success is to learn how to avoid failure.
Netflix was able to keep serving its customers while its cloud hosting provider, Amazon Web Services (AWS), rebooted servers, because it had prepared for that happening.
“When we got the news about the emergency EC2 [Elastic Compute Cloud] reboots, our jaws dropped. When we got the list of how many Cassandra nodes would be affected, I felt ill,” said Christos Kalantzis, Netflix engineering manager of cloud database engineering, in a Netflix blog post discussing the outage.
Amazon announced to EC2 customers on Sept. 25 that it would be updating its servers and that a small percentage would require a reboot, which could potentially disrupt customer services. AWS did not specify which of its virtual hosts would be rebooted or when. It was revealed later that AWS was fixing a vulnerability in the Xen hypervisor, which underpins EC2.
Netflix is one of Amazon’s largest customers. And its 50 million customers expect to be able to stream TV shows, movies and other content at any time. If Netflix wasn’t prepared to mitigate potential outages, the company—and not Amazon—would have a lot of angry customers.
But Netflix had architected its service to be resilient, so that if one Amazon data center went down, operations could be switched over to another with barely a noticeable bump to customers. It also looked for ways to minimize downtime that occurred when its services did need to be rebooted.
The company even went the extra mile and aggressively looked for ways to try to disrupt its own services through a set of tools called the Simian Army that are designed to periodically and randomly kill Netflix services. The thinking goes that any Netflix service should be resilient enough to keep running through an attack from one such tool. If it isn’t, then the Netflix engineers redesign the service to make it more reliable.
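The Simian Army approach described above can be sketched in a few lines. This is a hypothetical, minimal illustration of the principle (random fault injection against a redundant service) and not Netflix's actual tooling; the class and instance names are invented for the example:

```python
import random

# Hypothetical sketch of a chaos-monkey-style experiment: a pool of
# redundant service instances, a "monkey" that terminates one at random,
# and a check that the service as a whole still answers requests.

class ServicePool:
    def __init__(self, name, instances):
        self.name = name
        self.instances = set(instances)

    def handle_request(self):
        # The service survives as long as at least one instance is up.
        if not self.instances:
            raise RuntimeError(f"{self.name}: total outage")
        return f"{self.name} served by {sorted(self.instances)[0]}"

def chaos_monkey(pool, rng=random):
    """Randomly terminate one instance, simulating an unplanned failure."""
    victim = rng.choice(sorted(pool.instances))
    pool.instances.discard(victim)
    return victim

# Run the experiment: kill an instance and verify the pool keeps serving.
pool = ServicePool("api", ["i-01", "i-02", "i-03"])
killed = chaos_monkey(pool)
print(f"killed {killed}; {pool.handle_request()}")
```

The point of running this continuously in production, rather than once in a lab, is exactly the one the article makes: any service that cannot survive the monkey gets redesigned until it can.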
Even with its systems hardened by abuse from Chaos Monkey and other Simian Army tools, the engineers were still worried about the AWS reboot.
In particular, concern centered around the 2,700 Cassandra databases that the company runs on AWS.
Databases, as the blog post pointed out, are “the pampered and spoiled princes of the application world.” They are run on the best hardware, get lots of attention from database engineers and still can be fussy creatures.
Netflix deliberately chose to use the Cassandra database over more traditional choices such as Oracle’s databases because, as a NoSQL database, Cassandra could be spread across multiple servers in such a way that if one of the nodes failed, the database could keep running. Over the past year, the company had been subjecting Cassandra to Chaos Monkey testing, with promising results.
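The arithmetic behind that node-failure tolerance is worth making explicit. The sketch below illustrates the general quorum principle used by Cassandra-style replicated stores (it is an assumption-laden illustration of the math, not Cassandra's implementation): with a replication factor of 3, a quorum operation needs only 2 replicas, so one node per replica set can be down without losing availability.

```python
# Illustrative quorum arithmetic for a replicated store.

def quorum(replication_factor):
    """Minimum replicas that must respond for a quorum read or write."""
    return replication_factor // 2 + 1

def tolerated_failures(replication_factor):
    """How many replicas can be down while quorum operations still succeed."""
    return replication_factor - quorum(replication_factor)

for rf in (3, 5):
    print(f"RF={rf}: quorum={quorum(rf)}, survives {tolerated_failures(rf)} down")
```

This is why a rolling reboot of individual hosts, as in the AWS maintenance described here, need not take the database offline at all.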
The AWS reboot would be the first true test of Cassandra’s reliability, however. The entire cloud database engineering team was on alert.
In the end, and thanks to Chaos Monkey testing, nearly all of the Cassandra nodes remained online. Of the 218 Cassandra nodes that were rebooted, only 22 did not return to a fully operational state, and those were successfully restarted with minimal human intervention.
“Repeatedly and regularly exercising failure, even in the persistence layer, should be part of every company’s resilience planning,” the blog concluded. “If it wasn’t for Cassandra’s participation in Chaos Monkey, this story would have ended much differently.”