Hierarchical storage management

Hierarchical storage management (HSM) is a data storage technique that automatically moves data between high-cost and low-cost storage media. HSM systems exist because high-speed storage devices, such as hard disk drive arrays, are more expensive (per byte stored) than slower devices, such as optical discs and magnetic tape drives. While it would be ideal to have all data available on high-speed devices all the time, this is prohibitively expensive for many organizations. Instead, HSM systems store the bulk of the enterprise's data on slower devices, and then copy data to faster disk drives when needed. In effect, HSM turns the fast disk drives into caches for the slower mass storage devices. The HSM system monitors the way data is used and makes its best guess as to which data can safely be moved to slower devices and which data should stay on the fast devices.

HSM may also be used where more robust storage is available for long-term archiving, but this is slow to access. This may be as simple as an off-site backup, for protection against a building fire.

HSM is a long-established concept, dating back to the beginnings of commercial data processing. The techniques used have changed significantly, though, as new technology has become available, both for storage and for long-distance communication of large data sets. The scale of measures such as 'size' and 'access time' has changed dramatically. Despite this, many of the underlying concepts keep returning to favour years later, although at much larger or faster scales.[1]

Implementation

In a typical HSM scenario,[i] data files that are frequently used are stored on disk drives, but are eventually migrated to tape if they are not used for a certain period of time, typically a few months. If a user does reuse a file that is on tape, it is automatically moved back to disk storage. The advantage is that the total amount of stored data can be much larger than the capacity of the disk storage available, but since only rarely used files are on tape, most users will usually not notice any slowdown.
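The scenario above can be pictured with a small sketch. The following Python fragment is a minimal illustration, not any vendor's implementation: the directory names, the age threshold and the helper functions are all hypothetical, and a real HSM product does this transparently inside the file system rather than with plain file copies.

    import os
    import shutil
    import time

    # Hypothetical directories standing in for the fast (disk) and slow (tape) tiers.
    DISK_TIER = "/srv/hsm/disk"
    TAPE_TIER = "/srv/hsm/tape"
    MIGRATION_AGE = 90 * 24 * 3600  # migrate files untouched for roughly three months

    def migrate_cold_files():
        """Move files whose last access time exceeds the threshold to the slow tier."""
        now = time.time()
        for name in os.listdir(DISK_TIER):
            path = os.path.join(DISK_TIER, name)
            if os.path.isfile(path) and now - os.stat(path).st_atime > MIGRATION_AGE:
                shutil.copy2(path, os.path.join(TAPE_TIER, name))
                os.remove(path)  # drop the fast-tier copy ("file grooming", see below)

    def recall(name):
        """Return a fast-tier path for a file, recalling it from the slow tier if needed."""
        fast_path = os.path.join(DISK_TIER, name)
        if not os.path.exists(fast_path):
            # Transparent recall: the caller never needs to know where the file lived.
            shutil.copy2(os.path.join(TAPE_TIER, name), fast_path)
        return fast_path

Last access time (st_atime) stands in for the "not used for a certain period" signal here; on file systems mounted with noatime a real system would track usage some other way.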

HSM is sometimes referred to as tiered storage.[1]

HSM (originally DFHSM, now DFSMShsm) was first implemented by IBM on their mainframe computers to reduce the cost of data storage, and to simplify the retrieval of data from slower media. The user would not need to know where the data was stored and how to get it back; the computer would retrieve the data automatically. The only difference to the user was the speed at which data was returned.

HSM, in the shape of the IBM 3850 Mass Storage Facility, was announced (according to IBM) in 1974.

Later, IBM ported HSM to its AIX operating system, and then to other Unix-like operating systems such as Solaris, HP-UX and Linux.

HSM was also implemented on the DEC VAX/VMS systems and the Alpha/VMS systems. The first implementation date should be readily determined from the VMS System Implementation Manuals or the VMS Product Description Brochures.

Recently, the development of Serial ATA (SATA) disks has created a significant market for three-stage HSM: files are migrated from high-performance Fibre Channel storage area network devices to somewhat slower but much cheaper SATA disk arrays totaling several terabytes or more, and then eventually from the SATA disks to tape.
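Such a three-stage setup amounts to a per-tier aging policy. A minimal sketch of how such a policy could be expressed (the tier names and thresholds below are invented for illustration):

    # Demote a file to the next tier once it has gone unaccessed for the given number of days.
    TIER_POLICY = [
        {"tier": "fc-san", "media": "Fibre Channel array", "demote_after_days": 30},
        {"tier": "sata",   "media": "SATA disk array",     "demote_after_days": 180},
        {"tier": "tape",   "media": "tape library",        "demote_after_days": None},  # final tier
    ]

    def target_tier(days_since_access):
        """Return the tier a file of this age belongs on: the fastest tier whose
        threshold it has not yet outlived."""
        chosen = TIER_POLICY[0]["tier"]
        for level in TIER_POLICY:
            chosen = level["tier"]
            limit = level["demote_after_days"]
            if limit is None or days_since_access <= limit:
                break
        return chosen

For example, target_tier(10) returns "fc-san", target_tier(100) returns "sata" and target_tier(400) returns "tape".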

The newest development in HSM pairs hard disk drives with flash memory: flash memory is over 30 times faster than disk, but disks are considerably cheaper.

Conceptually, HSM is analogous to the cache found in most computer CPUs, where small amounts of expensive SRAM running at very high speeds are used to store frequently used data, but the least recently used data is evicted to the slower but much larger main DRAM memory when new data has to be loaded.
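The eviction side of that analogy is usually some variant of a least-recently-used (LRU) policy, which is also a common approximation of "which data can safely be moved to slower devices" in HSM. A minimal, generic sketch in Python (not taken from any particular HSM product or CPU design):

    from collections import OrderedDict

    class LRUCache:
        """Tiny least-recently-used cache illustrating the eviction idea shared by
        CPU caches and, by analogy, the disk tier of an HSM system."""

        def __init__(self, capacity):
            self.capacity = capacity
            self.items = OrderedDict()

        def get(self, key):
            if key not in self.items:
                return None                  # a miss: the data must come from the slower tier
            self.items.move_to_end(key)      # mark as most recently used
            return self.items[key]

        def put(self, key, value):
            if key in self.items:
                self.items.move_to_end(key)
            self.items[key] = value
            if len(self.items) > self.capacity:
                self.items.popitem(last=False)  # evict the least recently used entry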

In practice, HSM is typically performed by dedicated software, such as IBM Tivoli Storage Manager, Oracle's SAM-QFS, Versity's VSM, Quantum, Novell's Dynamic Storage Technology (DST) on Open Enterprise Server (OES) Linux Platform, SGI Data Migration Facility (DMF), StorNext, or EMC Legato OTG DiskXtender.

The deletion of files from a higher level of the hierarchy (e.g. magnetic disk) after they have been moved to a lower level (e.g. optical media) is sometimes called file grooming.[2]

Use cases

HSM is often used for deep archival storage of data to be held long term at low cost. Automated tape robots can silo large quantities of data efficiently with low power consumption.

Some HSM software products allow the user to place portions of data files on high-speed disk cache and the rest on tape. This is used in applications that stream video over the Internet: the initial portion of a video is delivered immediately from disk, while a robot finds, mounts and streams the rest of the file to the end user. Such a system greatly reduces disk cost for large content provision systems.
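A sketch of that idea, using in-memory buffers as stand-ins for the disk cache and the tape copy (the function, names and sizes are hypothetical, not any product's API):

    import io

    HEAD_BYTES = 1024 * 1024  # keep the first 1 MiB of each video on fast disk (hypothetical)

    def stream_video(disk_head, slow_copy, chunk_size=64 * 1024):
        """Yield the cached head of the file immediately, then the remainder
        from the slower copy once it is available."""
        sent = 0
        chunk = disk_head.read(chunk_size)
        while chunk:
            sent += len(chunk)
            yield chunk
            chunk = disk_head.read(chunk_size)
        # By the time the head has been served, the tape robot has (ideally)
        # mounted the medium; continue from where the cached head ended.
        slow_copy.seek(sent)
        chunk = slow_copy.read(chunk_size)
        while chunk:
            yield chunk
            chunk = slow_copy.read(chunk_size)

    # Usage with in-memory stand-ins for the two tiers:
    full = bytes(range(256)) * (8 * 1024)   # 2 MiB of sample data
    head = io.BytesIO(full[:HEAD_BYTES])    # only the head lives on fast disk
    tape = io.BytesIO(full)                 # the complete file on the slow tier
    assert b"".join(stream_video(head, tape)) == full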

Tiered storage

Tiered storage is a data storage environment consisting of two or more kinds of storage delineated by differences in at least one of these four attributes: price, performance, capacity and function.[1]

Any significant difference in one or more of the four defining attributes can be sufficient to justify a separate storage tier.

Examples:

Note: Storage Tiers are not delineated by differences in vendor, architecture, or geometry except where those differences result in clear changes to price, performance, capacity and function.
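To make the definition concrete, a tier can be described by exactly those four attributes. A toy sketch (the class and every value in it are invented purely for illustration):

    from dataclasses import dataclass

    @dataclass
    class StorageTier:
        name: str
        price_per_tb: float   # relative acquisition cost per terabyte
        latency_ms: float     # typical access latency in milliseconds
        capacity_tb: float    # usable capacity of the tier
        function: str         # e.g. primary data, nearline, archive

    tiers = [
        StorageTier("flash", 100.0, 0.1, 50, "primary data"),
        StorageTier("sata", 10.0, 10.0, 500, "nearline"),
        StorageTier("tape", 1.0, 60000.0, 5000, "archive"),
    ]

A placement policy then reduces to choosing, for each piece of data, the cheapest tier whose performance and function still meet that data's requirements.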

Implementations

See also

References

  i. An example from around 2000, which even now is looking dated as tape falls from favour.
  1. Larry Freeman. "What's Old Is New Again - Storage Tiering" (PDF).
  2. Patrick M. Dillon; David C. Leonard (1998). Multimedia and the Web from A to Z. ABC-CLIO. p. 116. ISBN 978-1-57356-132-7.
  3. QStar Network Migrator product page.
  4. Rand Morimoto; Michael Noel; Omar Droubi; Ross Mistry; Chris Amaris (2008). Windows Server 2008 Unleashed. Sams Publishing. p. 938. ISBN 978-0-13-271563-8.
  5. http://windowsitpro.com/storage/remote-storage-service