Problem Statement
At 3rd&Urban, and in particular amp.fm (3rd&Urban is the parent company of amp.fm), our entire platform is built on top of Amazon Web Services products such as EC2, S3, and SimpleDB and is driven by community-created content and interaction. Due to the nature of computer hardware -- especially hardware with moving parts -- complete failure of an entire system is unlikely, but failure of individual components within that system, such as power supplies, network cards, switches and routers, hard drives, processors, and memory, each of which has an understood life expectancy, is considered normal, if infrequent, behavior. However, the failure of any single component that results in an outage crippling the continued operation of the entire system is considered catastrophic. Designing and building fault tolerance into a system is critical to ensuring that back-up components are always in place to fall back on during the outage or failure of any given component. Like any other data- and community-centric company, we are committed to reducing the chance of a catastrophic system-wide failure to as close to zero as can reasonably be expected, given understood component failure rates and unforeseen catastrophic events such as natural disasters.
While EC2 makes it possible to add and replace instances on the fly, at present any data on an instance that has not been properly backed up is lost when that instance fails. Backing up data to S3 is standard practice, but backups do not guarantee uninterrupted read/write access to that data, only the ability to recover from catastrophic failure, a process which, depending on the size of the data set, can take anywhere from a few minutes to a few hours to rebuild the affected data components, and potentially longer for data sets of considerable size and structural complexity. For a web business that must be always on and always accessible, we consider this a completely unacceptable scenario to find ourselves faced with. As such, at the center of our system architecture resides a foundation of fault-tolerance techniques designed to ensure data persistence, redundancy, network accessibility, and automatic fail-over which, combined with off-the-shelf, open source software components, provides reasonable assurance of maintaining close-to-100% system up-time regardless of the failure of individual system components.

Solution Summary
Amazon Web Services recently announced that they are actively working on providing persistent storage as part of their EC2 offering, aiming to launch the service later this year. In the previously linked EC2 forum entry, the Amazon EC2 team provides the reasoning behind this pre-beta release announcement,
"Many of you have been requesting that we let you know ahead of time about features that are currently under development so that you can better plan for how that functionality might integrate with your applications. To that end, we would like to share some details about a major upcoming feature that many of you have requested - persistent storage for EC2."
Speaking directly to,
"... so that you can better plan for how that functionality might integrate with your applications..."
... the primary focus of this paper is to present both a detailed overview and a working code base that will enable you to begin designing, building, testing, and deploying your EC2-based applications on a generalized persistent storage foundation today, both in lieu of and in preparation for the release of Amazon Web Services' offering in this same space.
PLEASE NOTE: I have made generalized assumptions about persistent storage solutions while writing this paper. Some of these assumptions derive from information that has been made public by AWS. I'll provide a summary of both the official announcement and Jeff Barr's (AWS Technical Evangelist) blog entry on the persistent storage offering in the section that follows.
DISCLAIMER: There is no known direct or indirect connection between the material presented in this paper and the AWS persistent storage solution. While there is no reason to believe the generalized ideas and technologies contained in this paper will be incompatible with Amazon's persistent storage offering when it becomes publicly available later this year, there is no guarantee this will be the case. If you design, build, test, and deploy applications using the methodologies outlined in this paper, please do so with the understanding that you may have to re-design, re-build, re-test, and re-deploy certain aspects of those applications to take full advantage of the features and functionality provided by the public release of Amazon's persistent storage solution.
Please keep in mind, however, that regardless of any extended features and/or functionality introduced as part of Amazon's public persistent storage release, the technologies and techniques described in this paper will continue to work standalone, as-is.
The Solution
To ensure a proper understanding of what this solution does and does not provide, the following two sections compare it against the publicly announced features of Amazon's persistent storage solution,
Features This Solution Provides
The following features are provided as part of this solution,
- Data redundancy via near-real-time synchronization of two block devices hosted on two separate EC2 nodes using DRBD (see the DRBD configuration sketch following this list).
- Network-mountable shares (NFS), which can be mounted on more than one EC2 node at a time.
- Automatic fail-over between the primary and secondary DRBD nodes (see the Heartbeat and NFS sketch following this list).
- Automatic and transparent remapping and remounting of an NFS share during the fail-over process.
- The ability to create snapshots of your volumes and back them up to Amazon S3 (see the snapshot sketch following this list).
- The ability to increase or reduce the size of any given volume that is part of the configuration, limited only by disk availability and capacity.
  - At present, disk availability refers to the additional ephemeral block devices available on m1.large and m1.xlarge instance types.
  - As already stated, while there are no guarantees this will work, in theory it will be possible to extend a logical volume with additional EC2 persistent storage block devices once that service becomes available.
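As a concrete illustration of the DRBD replication described in the first item above, the following is a minimal sketch of a resource definition as it might appear in /etc/drbd.conf. The host names, the private IP addresses, and the use of /dev/sdb (one of the additional ephemeral block devices on an m1.large instance) are placeholder assumptions for illustration, not values taken from the working code base.

# /etc/drbd.conf (excerpt) -- one resource replicated between two EC2 nodes
resource r0 {
    protocol C;                      # fully synchronous replication
    on node1 {
        device    /dev/drbd0;        # block device exposed to the filesystem
        disk      /dev/sdb;          # ephemeral device backing this replica
        address   10.251.1.1:7788;   # private address of node1
        meta-disk internal;
    }
    on node2 {
        device    /dev/drbd0;
        disk      /dev/sdb;
        address   10.251.1.2:7788;
        meta-disk internal;
    }
}

With this resource defined on both nodes, writes to /dev/drbd0 on the primary are mirrored to the secondary in near real time.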
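The NFS export and the automatic fail-over are tied together by Heartbeat. The snippet below is a sketch, assuming /dev/drbd0 is formatted with ext3 and mounted at /data, that the NFS server is started via a Debian-style nfs-kernel-server init script, and that clients sit on the private 10.0.0.0/8 network; all of these names and addresses are assumptions made for illustration.

# /etc/exports on both DRBD nodes -- allow other EC2 nodes to mount the share
/data  10.0.0.0/8(rw,sync,no_root_squash)

# /etc/ha.d/haresources (Heartbeat v1 style) -- node1 is the preferred primary.
# On fail-over, Heartbeat promotes the DRBD resource on the surviving node,
# mounts the filesystem, and starts the NFS server there.
node1 drbddisk::r0 Filesystem::/dev/drbd0::/data::ext3 nfs-kernel-server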
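The snapshot item above can be accomplished with an LVM snapshot plus any S3 upload tool. The commands below are a sketch only, assuming the data to back up lives on a logical volume named data_lv in a volume group named data_vg and that s3cmd is the upload tool of choice; the volume names, sizes, and bucket name are all placeholder assumptions.

# Create a read-only, point-in-time snapshot of the volume
lvcreate --snapshot --size 5G --name data_snap /dev/data_vg/data_lv
mkdir -p /mnt/snap && mount -o ro /dev/data_vg/data_snap /mnt/snap

# Archive the snapshot, push it to S3, and clean up
tar czf /tmp/data-backup.tar.gz -C /mnt/snap .
s3cmd put /tmp/data-backup.tar.gz s3://example-backup-bucket/data-backup.tar.gz
umount /mnt/snap
lvremove -f /dev/data_vg/data_snap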
Features This Solution Does NOT Provide
The following features are NOT provided as part of this solution,
- Highly durable persistent storage block devices that live independently of any given EC2 instance.
- The ability to create volumes ranging in size from 1 GB to 1 TB.
  - Using LVM, it is possible to create logical volumes that range from 1 KB up to the maximum capacity of your available ephemeral block devices (see the LVM sketch following this list).
  - It is not possible, however, to extend a volume beyond the combined capacity of the available ephemeral block devices.
- The ability to attach and detach any given block device to and from any given EC2 instance.
  - However, mounting the DRBD-backed block device over NFS on multiple nodes does provide some of the benefits of this announced feature.
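To make the LVM point above concrete, the commands below sketch how a logical volume might be created from, and later grown within, the ephemeral devices. The device names (/dev/sdb and /dev/sdc, as found on an m1.large instance), the volume group name, and the sizes are assumptions for illustration only.

# Pool the additional ephemeral devices into a single volume group
pvcreate /dev/sdb /dev/sdc
vgcreate data_vg /dev/sdb /dev/sdc

# Carve out a logical volume, then grow it later as needed
lvcreate --size 100G --name data_lv data_vg
lvextend --size +50G /dev/data_vg/data_lv

The hard ceiling remains the combined capacity of the devices in the volume group; in theory, once Amazon's persistent storage block devices become available, they could be added to the group with pvcreate and vgextend, though, as noted above, there is no guarantee of that.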