Each object is typically associated with a variable amount of metadata and a globally unique identifier.
In each case, object storage seeks to enable capabilities not addressed by other storage architectures, such as interfaces that are directly programmable by the application, a namespace that can span multiple instances of physical hardware, and data-management functions such as replication and distribution at object-level granularity.[4]
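As a rough illustration of such an interface (all names here are hypothetical, not any vendor's API), a minimal in-memory sketch might expose put/get operations directly to the application and replicate at whole-object granularity:

```python
import uuid

class ObjectStore:
    """Minimal in-memory sketch of an object-store interface (illustrative only)."""

    def __init__(self, replicas=2):
        # Each "node" stands in for one instance of physical hardware;
        # the flat namespace spans all of them.
        self.nodes = [dict() for _ in range(replicas + 1)]
        self.replicas = replicas

    def put(self, data: bytes, metadata: dict) -> str:
        """Store an object and return its globally unique identifier."""
        object_id = str(uuid.uuid4())
        record = {"data": data, "metadata": metadata}
        # Object-level replication: copy the whole object, data and
        # metadata together, to the primary plus N replica nodes.
        for node in self.nodes[: self.replicas + 1]:
            node[object_id] = record
        return object_id

    def get(self, object_id: str) -> bytes:
        # Read from the first node holding the object.
        for node in self.nodes:
            if object_id in node:
                return node[object_id]["data"]
        raise KeyError(object_id)

store = ObjectStore()
oid = store.put(b"hello", {"content-type": "text/plain"})
print(store.get(oid))  # b'hello'
```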
One limitation of object storage is that it is not intended for transactional data: object storage was not designed to replace NAS file access and sharing, and it does not support the locking and sharing mechanisms needed to maintain a single, accurately updated version of a file.
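A short sketch of the resulting hazard, using a plain dictionary as a stand-in for an object store: two clients that each read, modify, and rewrite the same object race with each other, and the last writer silently wins.

```python
store = {"notes.txt": b""}  # object name -> bytes; whole-object writes, no locks

# Two clients each read the current object, planning to append a line.
snapshot_a = store["notes.txt"]
snapshot_b = store["notes.txt"]

# Each writes back its own modified copy; nothing serializes the two writes.
store["notes.txt"] = snapshot_a + b"client A line\n"
store["notes.txt"] = snapshot_b + b"client B line\n"

print(store["notes.txt"].decode())  # only "client B line": A's update was lost
```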
According to Starkey, this backronym arose when Terry McKiever, working in marketing at Apollo Computer, felt that the term needed to be an abbreviation.
Object storage was proposed at Gibson's Carnegie Mellon University lab as a research project in 1996.
Fine-grained access control through object storage architecture[8] was further described by Howard Gobioff, a member of the NASD team who was later one of the inventors of the Google File System.[9]
Other related work includes the Coda filesystem project at Carnegie Mellon, which started in 1987 and spawned the Lustre file system.
The presentation revealed an IP Agreement that had been signed in February 1997 between the original collaborators (with Seagate represented by Anderson and Chris Malakapalli) and covered the benefits of object storage, scalable computing, platform independence, and storage management.
Object storage adds a unique identifier within a bucket, or across the entire system, to support much larger namespaces and eliminate name collisions.
Object storage explicitly separates file metadata from data to support additional capabilities.
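This separation can be pictured as a small record type: the payload is opaque to the store, the metadata travels alongside it rather than inside it, and a system-assigned identifier (a UUID in this illustrative sketch; real systems vary) stays unique even when two clients choose the same human-readable name.

```python
import uuid
from dataclasses import dataclass, field

@dataclass
class StoredObject:
    data: bytes                                    # opaque payload; never interpreted by the store
    metadata: dict = field(default_factory=dict)   # kept separate from the payload
    # System-assigned identifier: unique even if two clients pick the same name.
    object_id: str = field(default_factory=lambda: str(uuid.uuid4()))

# Two objects uploaded under the same user-visible name do not collide,
# because each receives its own identifier.
a = StoredObject(b"v1", {"name": "report.pdf"})
b = StoredObject(b"v2", {"name": "report.pdf"})
assert a.object_id != b.object_id
```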
Some notable examples are Amazon Web Services S3, which debuted in March 2006, Microsoft Azure Blob Storage, Rackspace Cloud Files (whose code was donated to the OpenStack project in 2010 and released as OpenStack Swift), and Google Cloud Storage, released in May 2010.
Some early incarnations of object storage were used for archiving, as implementations were optimized for data services like immutability, not performance.
EMC Centera and Hitachi HCP (formerly known as HCAP) are two commonly cited object storage products for archiving.
Some large Internet companies developed their own software when object-storage products were not commercially available or use cases were very specific.
Facebook invented its own object-storage software, code-named Haystack, to address its massive-scale photo-management needs efficiently.[24]
Object-storage systems saw good adoption in the early 2000s as an archive platform, particularly in the wake of compliance laws such as Sarbanes-Oxley.[28]
On July 1, 2014, Los Alamos National Lab chose the Scality RING as the basis for a 500-petabyte storage environment, which would be among the largest ever.[31]
Cloud storage has become pervasive as many new web and mobile applications choose it as a common way to store binary data.[32]
As the storage back end for many popular applications such as SmugMug and Dropbox, Amazon S3 has grown to massive scale, with Amazon citing over two trillion objects stored as of April 2013.
According to IDC, MarketScape assessments are particularly helpful in emerging markets that are often fragmented, have several players, and lack clear leaders.[37]
"[37] In 2019, IDC rated Dell EMC, Hitachi Data Systems, IBM, NetApp, and Scality as leaders.
The ability to set and retrieve attributes within the same command that reads or writes data reduces the number of times a high-level storage system has to cross the interface to the OSD, which can improve overall efficiency.
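As a rough illustration (hypothetical method names, not the T10 wire protocol), piggybacking an attribute update on a write command halves the number of interface crossings compared with issuing the two operations separately:

```python
class OSD:
    """Toy stand-in for an object-based storage device (not the T10 protocol)."""

    def __init__(self):
        self.objects = {}      # object_id -> bytes
        self.attributes = {}   # object_id -> dict
        self.crossings = 0     # commands crossing the host/OSD interface

    def write(self, oid, data, attrs=None):
        """One command: write data and, optionally, set attributes with it."""
        self.crossings += 1
        self.objects[oid] = data
        if attrs:
            self.attributes.setdefault(oid, {}).update(attrs)

    def set_attrs(self, oid, attrs):
        self.crossings += 1
        self.attributes.setdefault(oid, {}).update(attrs)

osd = OSD()
# Separate commands: two interface crossings.
osd.write(1, b"data")
osd.set_attrs(1, {"mtime": 1700000000})
# Piggybacked attributes: a single crossing for the same outcome.
osd.write(2, b"data", attrs={"mtime": 1700000000})
print(osd.crossings)  # 3 in total: 2 for the separate path, 1 for the combined one
```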
A second generation of the SCSI command set, "Object-Based Storage Devices - 2" (OSD-2), added support for snapshots, collections of objects, and improved error handling.
Object stores also allow one to associate a limited set of attributes (metadata) with each piece of data.
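For example, Amazon S3 exposes such attributes as user-defined metadata. A sketch using the boto3 SDK (the bucket name and file are placeholders, and valid AWS credentials are assumed):

```python
import boto3

s3 = boto3.client("s3")

# Upload an object together with a small set of user-defined attributes.
# "example-bucket" must already exist and be writable by the caller.
with open("cat.jpg", "rb") as body:
    s3.put_object(
        Bucket="example-bucket",
        Key="photos/cat.jpg",
        Body=body,
        ContentType="image/jpeg",
        Metadata={"camera": "X100V", "license": "CC-BY"},  # stored with, not in, the data
    )

# The attributes come back with the object's headers,
# without fetching the payload itself.
head = s3.head_object(Bucket="example-bucket", Key="photos/cat.jpg")
print(head["Metadata"])  # {'camera': 'X100V', 'license': 'CC-BY'}
```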