Object storage
Object storage (also known as object-based storage[1]) is a storage architecture that manages data as objects, as opposed to other storage architectures such as file systems, which manage data as a file hierarchy, and block storage, which manages data as blocks within sectors and tracks.[2] Each object typically includes the data itself, a variable amount of metadata, and a globally unique identifier. Object storage can be implemented at multiple levels, including the device level (object storage device), the system level, and the interface level. In each case, object storage seeks to enable capabilities not addressed by other storage architectures, such as interfaces that are directly programmable by the application, a namespace that can span multiple instances of physical hardware, and data management functions such as data replication and data distribution at object-level granularity.
Object storage systems allow relatively inexpensive, scalable and self-healing retention of massive amounts of unstructured data. Object storage is used for diverse purposes such as storing photos on Facebook, songs on Spotify, or files in online collaboration services, such as Dropbox.[3]
History
Origins
Object storage was first proposed at Carnegie Mellon University's Parallel Data Lab as a research project in 1996.[4] Research by Garth Gibson et al. on Network-Attached Secure Disks (NASD) first promoted the concept of splitting less common operations, like namespace manipulations, from common operations, like reads and writes, to optimize the performance and scale of both.[5] Another key concept was abstracting the writes and reads of data to more flexible data containers (objects). Fine-grained access control through object storage architecture[6] was further described by one of the NASD team, Howard Gobioff, who later was one of the inventors of the Google File System.[7] Other related work includes the Coda filesystem project at Carnegie Mellon, which started in 1987 and spawned the Lustre file system.[8] There is also the OceanStore project at UC Berkeley,[9] which started in 1999.[10] One of the earliest and best-known object storage products, EMC's Centera, debuted in 2002.[11] However, development of Centera's technology started even earlier, at a company called Filepool (which was acquired by EMC in 1999).
Development
Overall industry investment in object storage technology has been sustained for over a decade. From 1999 to 2013, there was at least $300 million of venture financing related to object storage, including vendors like Amplidata, Bycast, Caringo, Cleversafe, Nirvanix, and Scality.[12] This figure does not include the millions of dollars of private engineering investment from systems vendors like DataDirect Networks (WOS), EMC (Centera, Atmos, ViPR), HDS (HCP), HP (HP OpenStack), IBM, NetApp (StorageGRID), Red Hat (GlusterFS) and Keeper Technology (keeperSAFE), cloud services vendors like Amazon (AWS S3), Microsoft (Microsoft Azure) and Google (Google Cloud Storage), or the many person-years of open source development at Lustre, OpenStack (Swift), MogileFS, Ceph and Skylable SX.
Architecture
Abstraction of storage
One of the design principles of object storage is to abstract some of the lower layers of storage away from the administrators and applications. Thus, data is exposed and managed as objects instead of files or blocks. Objects contain additional descriptive properties which can be used for better indexing or management. Administrators do not have to perform lower level storage functions like constructing and managing logical volumes to utilize disk capacity or setting RAID levels to deal with disk failure.
Object storage also allows the addressing and identification of individual objects by more than just file name and file path. Object storage adds a unique identifier within a bucket, or across the entire system, to support much larger namespaces and eliminate name collisions.
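This flat, identifier-based addressing can be illustrated with a minimal sketch. The bucket structure and function names below are hypothetical, chosen only to show how a generated unique ID replaces a hierarchical file path:

```python
import uuid

# A hypothetical bucket: objects are addressed by a flat, unique identifier,
# not by a hierarchical path as in a file system.
bucket = {}

def put_object(data: bytes, metadata: dict) -> str:
    """Store an object and return its generated unique identifier."""
    object_id = str(uuid.uuid4())  # collision-free in practice
    bucket[object_id] = {"data": data, "metadata": metadata}
    return object_id

oid = put_object(b"hello", {"content-type": "text/plain"})
```

Because identifiers are generated rather than chosen by the caller, two applications can store objects in the same namespace without coordinating on names.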
Separation of metadata and data
Object storage explicitly separates file metadata from data to support additional capabilities:
- Additional metadata to capture application-specific or user-specific information for better indexing purposes
- Additional metadata to support data management policies (e.g. a policy to drive object movement from one storage tier to another)
- Independent scale of metadata nodes and data nodes
- Unified access to data across many distributed nodes and clusters
- Centralized management of storage across many individual nodes and clusters
- Optimization of metadata storage (e.g. database or key value storage) vs data storage (e.g. unstructured binary storage)
Additionally, in object-based file systems:
- The file system clients only contact metadata servers once when the file is opened and then get content directly via object storage servers (vs. block-based file systems which would require constant metadata access)
- Data objects can be configured on a per-file basis to allow adaptive stripe width, even across multiple object storage servers, supporting optimizations in bandwidth and I/O
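The open-once, read-directly pattern above can be sketched as follows. The class and method names are hypothetical; the point is that the metadata server is consulted a single time, after which data flows directly from the object storage servers:

```python
class MetadataServer:
    """Holds file layouts: which object servers store which objects."""
    def __init__(self, layouts):
        self.layouts = layouts  # path -> list of (object_server, object_id)

    def open(self, path):
        return self.layouts[path]  # a single metadata round trip

class ObjectServer:
    """Serves object data directly to clients."""
    def __init__(self, objects):
        self.objects = objects

    def read(self, object_id):
        return self.objects[object_id]

# File "/a" is striped across two object servers.
oss0 = ObjectServer({7: b"hello "})
oss1 = ObjectServer({9: b"world"})
mds = MetadataServer({"/a": [(oss0, 7), (oss1, 9)]})

layout = mds.open("/a")                             # one metadata access
data = b"".join(s.read(oid) for s, oid in layout)   # direct data access
```

Striping "/a" across two servers also shows why per-file stripe width matters: reads of the two objects can proceed from both servers in parallel.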
Object-based storage devices (OSD) manage metadata and data at the storage device level:
- Instead of providing a block-oriented interface that reads and writes fixed-size blocks of data, an OSD organizes data into variable-sized data containers, called objects
- Each object has both data (an uninterpreted sequence of bytes) and metadata (an extensible set of attributes describing the object)
- The command interface to the OSD includes commands to create and delete objects, write bytes and read bytes to and from individual objects, and to set and get attributes on objects
- The OSD implements a security mechanism that provides per-object and per-command access control
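The command interface described above can be modeled in a few lines. This is an in-memory sketch, not the actual SCSI OSD command set; the class and method names are illustrative stand-ins for the create/delete, read/write, and get/set-attribute commands:

```python
class OSD:
    """Minimal in-memory model of an object-based storage device."""
    def __init__(self):
        self.objects = {}

    def create(self, oid):
        self.objects[oid] = {"data": bytearray(), "attrs": {}}

    def delete(self, oid):
        del self.objects[oid]

    def write(self, oid, offset, data):
        # Byte-addressed write within an object; grows the object if needed.
        buf = self.objects[oid]["data"]
        buf[offset:offset + len(data)] = data

    def read(self, oid, offset, length):
        return bytes(self.objects[oid]["data"][offset:offset + length])

    def set_attr(self, oid, key, value):
        self.objects[oid]["attrs"][key] = value

    def get_attr(self, oid, key):
        return self.objects[oid]["attrs"][key]

osd = OSD()
osd.create(42)
osd.write(42, 0, b"hello")
osd.set_attr(42, "owner", "app1")
```

Note that the client addresses bytes within a named object, never raw blocks; the device itself decides where the bytes land on the media.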
Programmatic data management
Object storage provides programmatic interfaces to allow applications to manipulate data. At the base level, this includes CRUD functions for basic read, write and delete operations. Some object storage implementations go further, supporting additional functionality like object versioning, object replication, and movement of objects between different tiers and types of storage. Most API implementations are REST-based, allowing the use of many standard HTTP calls.
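The mapping from HTTP verbs to CRUD operations can be sketched with a toy dispatcher. This is not any vendor's actual API; the paths and status codes below simply illustrate the common REST convention (PUT to create or overwrite, GET to read, DELETE to remove):

```python
class ObjectAPI:
    """Toy REST-style dispatcher illustrating object CRUD (a sketch only)."""
    def __init__(self):
        self.store = {}

    def handle(self, method, path, body=None):
        if method == "PUT":                # create or overwrite
            self.store[path] = body
            return 201, b""
        if method == "GET":                # read
            if path in self.store:
                return 200, self.store[path]
            return 404, b""
        if method == "DELETE":             # delete
            self.store.pop(path, None)
            return 204, b""
        return 405, b""

api = ObjectAPI()
api.handle("PUT", "/photos/cat.jpg", b"...jpeg bytes...")
```

Because the interface is plain HTTP, any language with an HTTP client can act as an object storage client, which is one reason REST APIs dominate this market.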
Implementation
Object-based storage devices
Object storage at the protocol and device layer was proposed in the mid-1990s and approved for the SCSI command set in 2004 as "Object-based Storage Device Commands" (OSD),[13] but was not productized until the development of the Seagate Kinetic Open Storage platform.[14][15] The SCSI command set for Object Storage Devices was developed by a working group of the Storage Networking Industry Association (SNIA) for the T10 committee of the International Committee for Information Technology Standards (INCITS).[16] T10 is responsible for all SCSI standards.
Object-based file systems
Some high-performance distributed file systems use an object-based architecture, where file metadata is stored in metadata servers and file data is stored in object storage servers. File system client software interacts with the distinct servers, and abstracts them to present a full file system to users and applications. Lustre is an example of this type of object storage.
Archive storage
Some early incarnations of object storage were used for archiving, as implementations were optimized for data services like immutability, not performance. EMC Centera and Hitachi HCP (formerly known as HCAP) are two commonly cited object storage products for archiving. Another example is Quantum Lattus Object Storage Platform.
Cloud storage
The vast majority of cloud storage available in the market leverages an object storage architecture. Two notable examples of cloud storage services are Amazon Web Services S3 and Rackspace Cloud Files. AWS S3 debuted in 2005 and has since become synonymous with cloud storage services. Other major cloud storage services include Microsoft Azure and Google Cloud Storage.
"Captive" object storage
Some large internet companies developed their own software when object storage products were not commercially available or their use cases were very specific. Facebook famously invented its own object storage software, code-named Haystack, to address its massive-scale photo-management needs efficiently.[17]
Hybrid storage
A few object storage systems, such as Ceph, GlusterFS, and Scality RING, support Unified File and Object (UFO) storage, allowing some clients to store objects on a storage system while other clients simultaneously store files on the same storage system. While "hybrid storage" is not a widely accepted term for this concept, interoperable interfaces to the same set of data are becoming available in some object storage products.
Object storage systems
More general-purpose object storage systems came to market around 2008. Inspired by the rapid growth of "captive" storage systems within web applications like Yahoo Mail and the early success of cloud storage, object storage systems promised the scale and capabilities of cloud storage, with the ability to deploy the system within an enterprise or at an aspiring cloud storage service provider. Notable examples of object storage systems include Caringo Swarm, EMC Atmos, Hitachi HCP, OpenStack Swift, and Scality RING.
Market adoption
One of the first object storage products, Lustre, is used in 70% of the Top 100 supercomputers and approximately 50% of the Top 500.[18] As of June 16, 2013, this includes 7 of the top 10, among them the fastest system on the list, China's Tianhe-2, and the second fastest, the Titan supercomputer at Oak Ridge National Laboratory.[19]
Object storage systems saw strong adoption in the early 2000s as an archive platform, particularly in the wake of compliance laws like Sarbanes-Oxley. After five years in the market, EMC's Centera product claimed over 3,500 customers and 150 petabytes shipped by 2007.[20] Hitachi's HCP product also claims many petabyte-scale customers.[21] Newer object storage systems have also gained traction, particularly around very large custom applications like eBay's auction site, where EMC Atmos is used to manage over 500 million objects a day.[22] As of March 3, 2014, EMC claims to have sold over 1.5 exabytes of Atmos storage.[23] On July 1, 2014, Los Alamos National Lab chose the Scality RING as the basis for a 500 petabyte storage environment, which would be among the largest ever.[24]
"Captive" object storage systems like Facebook's Haystack have scaled impressively. In April 2009, Haystack was managing 60 billion photos and 1.5 petabytes of storage, adding 220 million photos and 25 terabytes a week.[17] Facebook more recently stated that they were adding 350 million photos a day and were storing 240 billion photos.[25] This could equal as much as 357 petabytes.[26]
Cloud storage has become pervasive as many new web and mobile applications choose it as a common way to store binary data.[27] As the storage backend to many popular applications like Smugmug and Dropbox, AWS S3 has grown to massive scale, citing over 2 trillion objects stored in April 2013.[28] Two months later, Microsoft claimed that they stored even more objects in Azure at 8.5 trillion.[29] By April 2014, Azure claimed over 20 trillion objects stored.[30] Windows Azure Storage manages Blobs (user files), Tables (structured storage), and Queues (message delivery) and counts them all as objects.[31]
Market analysis
IDC has begun to assess the object-based storage market annually using its MarketScape methodology. IDC describes the MarketScape as: "...a quantitative and qualitative assessment of the characteristics that assess a vendor's current and future success in the said market or market segment and provide a measure of their ascendancy to become a Leader or maintain a leadership. IDC MarketScape assessments are particularly helpful in emerging markets that are often fragmented, have several players, and lack clear leaders."[32]
In 2013, IDC rated Cleversafe, Scality, DataDirect Networks, Amplidata, and EMC as leaders.[33] In 2014, it rated Scality, Cleversafe, DataDirect Networks, Hitachi Data Systems, Amplidata, EMC, and Cloudian as leaders.[34]
Standards
Object-based storage device standards
OSD Version 1
In the first version of the OSD standard,[35] objects are specified with a 64-bit partition ID and a 64-bit object ID. Partitions are created and deleted within an OSD, and objects are created and deleted within partitions. There are no fixed sizes associated with partitions or objects; they are allowed to grow subject to physical size limitations of the device or logical quota constraints on a partition.
An extensible set of attributes describes objects. Some attributes are implemented directly by the OSD, such as the number of bytes in an object and the modify time of an object. There is a special policy tag attribute that is part of the security mechanism. Other attributes are uninterpreted by the OSD. These are set on objects by the higher-level storage systems that use the OSD for persistent storage. For example, attributes might be used to classify objects, or to capture relationships among different objects stored on different OSDs.
A list command returns a list of identifiers for objects within a partition, optionally filtered by matches against their attribute values. A list command can also return selected attributes of the listed objects.
Read and write commands can be combined, or piggy-backed, with commands to get and set attributes. This ability reduces the number of times a high-level storage system has to cross the interface to the OSD, which can improve overall efficiency.
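The OSD-1 addressing scheme and the list command can be sketched as follows. The data layout and function names here are hypothetical; they illustrate 64-bit partition and object IDs and attribute-filtered listing, not the actual wire protocol:

```python
# Sketch: objects live inside partitions, both identified by 64-bit integers.
osd = {  # partition_id -> {object_id -> attributes}
    0x1: {
        0x10: {"bytes": 512,  "class": "photo"},
        0x11: {"bytes": 2048, "class": "log"},
    },
}

def list_objects(partition_id, attr=None, value=None):
    """Return object IDs in a partition, optionally filtered by an attribute
    value, modeling the OSD list command's filtering option."""
    return [oid for oid, attrs in osd[partition_id].items()
            if attr is None or attrs.get(attr) == value]
```

A filtered list such as `list_objects(0x1, "class", "photo")` lets the higher-level system find objects by classification without reading each one, which is the benefit the attribute-matching option provides.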
OSD Version 2
A second generation of the SCSI command set, "Object-Based Storage Devices - 2" (OSD-2) added support for snapshots, collections of objects, and improved error handling.[36]
A snapshot is a point-in-time copy of all the objects in a partition into a new partition. The OSD can implement a space-efficient copy using copy-on-write techniques, so that the two partitions share objects that are unchanged between the snapshots, or the OSD might physically copy the data to the new partition. The standard defines clones, which are writeable, and snapshots, which are read-only.
A collection is a special kind of object that contains the identifiers of other objects. There are operations to add and delete from collections, and there are operations to get or set attributes for all the objects in a collection. Collections are also used for error reporting. If an object becomes damaged by the occurrence of a media defect (i.e., a bad spot on the disk) or by a software error within the OSD implementation, its identifier is put into a special error collection. The higher-level storage system that uses the OSD can query this collection and take corrective action as necessary.
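The copy-on-write behavior described for snapshots can be modeled briefly. This is a simplified sketch (partition and object names are hypothetical): the snapshot initially shares every object with the source, and a later write to the source replaces only the source's copy:

```python
# partition name -> {object_id -> data}
partitions = {"p1": {"obj1": b"v1", "obj2": b"v2"}}

def snapshot(src, dst):
    # Share references with the source; no object data is copied yet.
    partitions[dst] = dict(partitions[src])

def write(partition, oid, data):
    # Writing rebinds only this partition's entry (copy-on-write),
    # leaving any snapshot's reference to the old data intact.
    partitions[partition][oid] = data

snapshot("p1", "p1-snap")
write("p1", "obj1", b"v1-new")
```

After the write, "p1" and "p1-snap" still share the unmodified object, so the snapshot consumes extra space only for objects that have since changed.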
Differences between Key-Value and Object Stores
Let’s first clarify what a key/value store and an object store are. Using the traditional block storage interface, one has a series of fixed size blocks which are numbered starting at 0. Data must be that exact fixed size and can be stored in a particular block which is identified by its logical block number (LBN). Later, one can retrieve that block of data by specifying its unique LBN.
With a key/value store, data is identified by a key rather than an LBN. A key might be cat or olive or 42. It can be an arbitrary sequence of bytes of arbitrary length. Data (called a value in this parlance) does not need to be a fixed size and also can be an arbitrary sequence of bytes of arbitrary length. One stores data by presenting the key and data (value) to the data store and can later retrieve the data by presenting the key. This concept appears throughout programming languages: Python calls them dictionaries, Perl calls them hashes, Java and C++ call them maps, etc. Several data stores also implement key/value stores, such as Memcached, Redis and CouchDB.
Object stores are similar to key/value stores except that the key must be a positive integer, like an LBN. However, unlike an LBN, the key can be any positive integer; it does not have to map to an existing logical block number. In practice, it is usually limited to 64 bits. More like a key/value store than the traditional block storage interface, data is not limited to a fixed-size block but may be an arbitrary size. Object stores also allow one to associate a limited set of attributes with each piece of data. The key, value and set of attributes together are referred to as an object. Confusingly, key/value stores are sometimes loosely referred to as object stores, but technically there is a difference.[37]
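The three interfaces compared above can be contrasted side by side in a short sketch. The data structures are illustrative only: a block store with fixed-size numbered blocks, a key/value store with arbitrary byte-string keys, and an object store with integer keys plus attributes:

```python
# Block store: fixed-size blocks addressed by logical block number (LBN).
BLOCK_SIZE = 4
blocks = [bytearray(BLOCK_SIZE) for _ in range(8)]
blocks[3][:] = b"DATA"                    # LBN 3; exactly BLOCK_SIZE bytes

# Key/value store: arbitrary byte-string keys, arbitrary-size values.
kv = {}
kv[b"olive"] = b"a value of any length"

# Object store: integer keys (typically up to 64 bits), arbitrary-size
# values, plus a limited set of attributes per object.
objects = {}
objects[42] = {"value": b"a value of any length",
               "attrs": {"class": "photo"}}
```

The block store constrains both the address (an existing LBN) and the size (one block); the key/value store relaxes both; the object store sits between them, keeping integer addressing but adding variable sizes and attributes.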
References
- ↑ Mesnier, Mike; Gregory R. Ganger; Erik Riedel (August 2003). "Object-Based Storage". IEEE Communications Magazine: 84–90. Retrieved 27 October 2013.
- ↑ Porter De Leon, Yadin; Tony Piscopo. "Object Storage versus Block Storage: Understanding the Technology Differences". Druva.com. Retrieved 19 January 2015.
- ↑ Chandrasekaran, Arun, Dayley, Alan (11 February 2014). "Critical Capabilities for Object Storage". Gartner Research.
- ↑ Factor, Michael; Meth K. Naor D., Rodeh O., Satran J. "Object Storage: The Future Building Block for Storage Systems". IBM Haifa Research Labs. Retrieved 26 September 2013.
- ↑ Garth A. Gibson; Nagle D., Amiri K., Chan F., Feinberg E., Gobioff H., Lee C., Ozceri B., Riedel E., Rochberg D., Zelenka J. "File Server Scaling with Network-Attached Secure Disks". Proceedings of the ACM International Conference on Measurement and Modeling of Computer Systems (Sigmetrics ‘97). Retrieved 27 October 2013.
- ↑ Gobioff, Howard; Gibson, Garth A.; Tygar, Doug (1 October 1997). "Security for Network Attached Storage Devices (CMU-CS-97-185)". Parallel Data Laboratory. Retrieved 7 November 2013.
- ↑ Sanjay Ghemawat; Howard Gobioff, Shun-Tak Leung (October 2003). "The Google File System". Google. Retrieved 7 November 2013.
- ↑ Braam, Peter. "Lustre: The intergalactic file system". Retrieved 17 September 2013.
- ↑ "OceanStore". Retrieved 18 September 2013.
- ↑ Kubiatowicz, John; Bindel D., Chen Y., Czerwinski S., Eaton P., Geels D., Gummadi R., Rhea S., Weatherspoon H., Weimer W., Wells C., and Zhao B. (November 2000). "OceanStore: An Architecture for Global-Scale Persistent Storage". Proceedings of the Ninth international Conference on Architectural Support for Programming Languages and Operating Systems (ASPLOS 2000). Retrieved 18 September 2013.
- ↑ "EMC Unveils Low-Cost Data-Storage Product". LA Times. April 30, 2002. Retrieved 17 September 2013.
- ↑ Leung, Leo (16 September 2013). "After 10 years, object storage investment continues and begins to bear significant fruit". Retrieved 17 September 2013.
- ↑ Riedel, Erik; Sami Iren (February 2007). "Object Storage and Applications". Retrieved 3 November 2013.
- ↑ "The Seagate Kinetic Open Storage Vision". Seagate. Retrieved 3 November 2013.
- ↑ Gallagher, Sean (27 October 2013). "Seagate introduces a new drive interface: Ethernet". Arstechnica.com. Retrieved 3 November 2013.
- ↑ Corbet, Jonathan (4 November 2008). "Linux and object storage devices". LWN.net. Retrieved 8 November 2013.
- ↑ Vajgel, Peter. "Needle in a haystack: efficient storage of billions of photos". Retrieved 17 September 2013.
- ↑ Dilger, Andreas. "Lustre Future Development". IEEE MSST. Retrieved 27 October 2013.
- ↑ "Datadirect Networks to build world's fastest storage system for Titan, the world's most powerful supercomputer". Retrieved 27 October 2013.
- ↑ "EMC Marks Five Years of EMC Centera Innovation and Market Leadership". EMC. 18 April 2007. Retrieved 3 November 2013.
- ↑ "Hitachi Content Platform Supports Multiple Petabytes, Billions of Objects". Techvalidate.com. Retrieved 19 September 2013.
- ↑ Robb, Drew (11 May 2011). "EMC World Continues Focus on Big Data, Cloud and Flash". Infostor. Retrieved 19 September 2013.
- ↑ Hamilton, George. "In it for the Long Run: EMC's Object Storage Leadership". Retrieved 15 March 2014.
- ↑ Mellor, Chris (1 July 2014). "Los Alamos National Laboratory likes it, puts Scality's RING on it". The Register. Retrieved 26 January 2015.
- ↑ Miller, Rich (13 January 2013). "Facebook Builds Exabyte Data Centers for Cold Storage". Datacenterknowledge.com. Retrieved 6 November 2013.
- ↑ Leung, Leo (17 May 2014). "How much data does x store?". Techexpectations.org. Retrieved 23 May 2014.
- ↑ Leung, Leo (January 11, 2012). "Object storage already dominates our days (we just didn’t notice)". Retrieved 27 October 2013.
- ↑ Harris, Derrick (18 April 2013). "Amazon S3 goes exponential, now stores 2 trillion objects". Gigaom. Retrieved 17 September 2013.
- ↑ Wilhelm, Alex (27 June 2013). "Microsoft: Azure powers 299M Skype users, 50M Office Web Apps users, stores 8.5T objects". thenextweb.com. Retrieved 18 September 2013.
- ↑ Nelson, Fritz (4 April 2014). "Microsoft Azure's 44 New Enhancements, 20 Trillion Objects". Tom's IT Pro. Retrieved 3 September 2014.
- ↑ Calder, Brad. "Windows Azure Storage: A Highly Available Cloud Storage Service with Strong Consistency". 23rd ACM Symposium on Operating Systems Principles (SOSP): Microsoft. Retrieved 6 November 2013.
- ↑ Nadkarni, Ashish. "IDC MarketScape: Worldwide Object-Based Storage 2013 Vendor Assessment". IDC. Retrieved 26 January 2015.
- ↑ Mellor, Chris (27 November 2013). "IDC's explicit snapshot: Everyone who's anyone in object storage: In 3D". The Register. Retrieved 26 January 2015.
- ↑ Mellor, Chris (6 January 2015). "IDC: Who's HOT and who's NOT (in object storage) in 2014". The Register. Retrieved 26 January 2015.
- ↑ "INCITS 400-2004". InterNational Committee for Information Technology Standards. Retrieved 8 November 2013.
- ↑ "INCITS 458-2011". InterNational Committee for Information Technology Standards. 15 March 2011. Retrieved 8 November 2013.
- ↑ George, Johann (SanDisk). "Were Flash, Key/Value and Object Stores Made for Each Other?". GigaSpaces blog. http://blog.gigaspaces.com/were-flash-keyvalue-and-object-stores-made-for-each-other-guest-post-by-johann-george-sandisk/