Best Practices for Archiving Removable Hard Disk Drives
In general, disk drive failures are non-random [3,4,5]. Examples of non-random failure include early-life failure [6], manufacturing batch dependency [3], and defective hard drive firmware [7]. This reduces the accuracy and usefulness of conventional statistical prediction tools for life expectancy. Traditional reliability specifications such as mean time between failures (MTBF) do not present a meaningful metric of real-world observations [8].

In addition to self-erasure from bit rot, data can be recorded incorrectly or corrupted due to surface asperities, corrosion, and vibrations during adjacent-track writing [8]. Unfortunately, these errors generally remain undetected until an attempt to access the data fails. Error-correction algorithms can help disk drives recover partially corrupted data, but only within limits. Modern SATA disk drives are specified to contain no more than one uncorrectable error per 10^14 bits read, which works out to roughly 12.5 terabytes (a worked calculation appears below).

ATTRIBUTES OF HARD DISK DRIVES; IMPLICATIONS FOR ARCHIVAL USE

Based on the available research on operational hard disk drive longevity and reliability, we put forward the following attributes of data-at-rest on hard disk drives:

1) The longer data is at rest, the more likely it is to incur errors.
2) Storage media can fail at any point in time without warning.
3) Data stored on any medium has no guarantee against defects.

An effective archiving strategy must protect against all three of these attributes.

RECOMMENDATIONS FOR ARCHIVING DATA ON REMOVABLE HARD DISK DRIVES

A common practice in archiving is to maintain multiple replicas of the data [9]. This protects against both the likelihood of data corruption and drive failure; more copies mean better protection. To withstand catastrophic events, replicas must be stored in multiple locations, which may include cloud-based repositories (a fixity-check sketch is included below).

Powering up a hard disk drive and performing a nominal file access procedure every 18 to 24 months will redistribute mechanical lubricants and help ensure the reliable operation of hard disk drives that store archival data. Hard disk drives generate standard drive health records called SMART [8,10]. Users can review SMART data during this exercise, comparing new and previous records to check for consistency (a simple comparison sketch is included below).

Archived data can also become less accessible as hardware and applications age [9]; migrating data to modern media keeps archives compatible with current systems and operating environments. Freshly written data on new media also resets the thermal decay clock and mitigates the effects of bit rot. As hard disk drives continue to add capacity, fewer total drives will be needed for the migration of multiple archive volumes. We recommend a migration schedule of five to seven years.

Reliability projections for a 10-year design life assume the thermal environment of an active hard disk; drives also have breather vents to help equalize atmospheric pressure, allowing some air exchange. Maintaining a range of 60–69°F (16–20°C) and 35–45% relative humidity will reduce the likelihood and rate of bit rot, as well as potential corrosion inside the drives [11].
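The 12.5-terabyte figure above follows directly from the quoted specification: 10^14 bits divided by 8 bits per byte is 1.25 × 10^13 bytes, or 12.5 TB. The short sketch below is a hypothetical illustration of that arithmetic, not part of the original paper; the example drive capacities are assumptions chosen only to show what the specification implies for a full-drive read.

```python
# Hypothetical illustration of the "one uncorrectable error per 10^14 bits" SATA
# specification quoted above. Example capacities are assumptions, not from the paper.

SPEC_BITS_PER_ERROR = 1e14   # spec: at most one unrecoverable error per 10^14 bits read
BYTES_PER_TB = 1e12          # decimal terabyte, as drive vendors specify capacity

def spec_volume_tb(bits_per_error: float = SPEC_BITS_PER_ERROR) -> float:
    """Data volume (in TB) corresponding to one expected unrecoverable error."""
    return bits_per_error / 8 / BYTES_PER_TB

def expected_errors_full_read(capacity_tb: float) -> float:
    """Expected unrecoverable read errors when reading a full drive once."""
    return (capacity_tb * BYTES_PER_TB * 8) / SPEC_BITS_PER_ERROR

if __name__ == "__main__":
    print(f"Spec corresponds to one error per {spec_volume_tb():.1f} TB read")
    for tb in (1, 4, 8):  # example capacities for illustration only
        print(f"{tb} TB full read: ~{expected_errors_full_read(tb):.2f} expected errors")
```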

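Because corrupted data on a drive at rest typically goes unnoticed until someone tries to read it, the multiple-replica practice recommended above is usually paired with periodic integrity (fixity) checking. The sketch below is a minimal illustration of that idea under assumed conditions: each replica is presumed to be a directory tree on a mounted removable drive, and the mount points and the choice of SHA-256 are illustrative rather than anything prescribed by the paper.

```python
# Minimal fixity-check sketch: hash every file in each replica and flag files
# whose digests disagree across replicas. Mount points are hypothetical examples.
import hashlib
from pathlib import Path

REPLICAS = [Path("/mnt/archive_a"), Path("/mnt/archive_b")]  # assumed mount points

def sha256_of(path: Path) -> str:
    """Stream a file through SHA-256 so large archive files do not exhaust memory."""
    digest = hashlib.sha256()
    with path.open("rb") as fh:
        for chunk in iter(lambda: fh.read(1024 * 1024), b""):
            digest.update(chunk)
    return digest.hexdigest()

def checksum_replica(root: Path) -> dict[str, str]:
    """Map each file's path (relative to the replica root) to its digest."""
    return {
        str(p.relative_to(root)): sha256_of(p)
        for p in sorted(root.rglob("*"))
        if p.is_file()
    }

def compare_replicas(replicas: list[Path]) -> list[str]:
    """Return relative paths whose digests differ, or that are missing, across replicas."""
    manifests = [checksum_replica(r) for r in replicas]
    all_files = set().union(*(m.keys() for m in manifests))
    return sorted(f for f in all_files if len({m.get(f) for m in manifests}) != 1)

if __name__ == "__main__":
    mismatches = compare_replicas(REPLICAS)
    print(f"{len(mismatches)} file(s) disagree across replicas")
    for name in mismatches:
        print("  ", name)
```

Running a check like this during the 18-to-24-month power-up exercise allows a damaged copy to be rebuilt from a healthy replica while at least one good copy still exists.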

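The power-up recommendation above also suggests reviewing SMART records and comparing them with earlier readings. One possible way to do that on a Linux host is sketched below, using the smartctl utility from the smartmontools package; the device node, the snapshot directory, and the simple text diff are assumptions made for illustration, not a prescribed procedure.

```python
# Sketch of capturing a SMART attribute snapshot during the periodic power-up
# exercise and diffing it against the previous snapshot. Assumes a Linux host with
# smartmontools installed; device and directory paths are examples only.
import datetime
import difflib
import subprocess
from pathlib import Path

DEVICE = "/dev/sdb"                      # assumed device node of the removable drive
SNAPSHOT_DIR = Path("smart_snapshots")   # assumed local history directory

def capture_smart_attributes(device: str) -> str:
    """Return the SMART attribute table as reported by smartctl -A."""
    result = subprocess.run(
        ["smartctl", "-A", device],
        capture_output=True, text=True, check=False,
    )
    return result.stdout

def save_and_compare(device: str) -> None:
    SNAPSHOT_DIR.mkdir(exist_ok=True)
    previous = sorted(SNAPSHOT_DIR.glob("*.txt"))          # earlier snapshots, oldest first
    current_text = capture_smart_attributes(device)
    stamp = datetime.date.today().isoformat()
    (SNAPSHOT_DIR / f"{stamp}.txt").write_text(current_text)

    if previous:
        old_text = previous[-1].read_text()
        diff = list(difflib.unified_diff(
            old_text.splitlines(), current_text.splitlines(),
            fromfile=previous[-1].name, tofile=f"{stamp}.txt", lineterm="",
        ))
        print("\n".join(diff) if diff else "SMART attributes unchanged since last check")
    else:
        print("First snapshot saved; nothing to compare yet")

if __name__ == "__main__":
    save_and_compare(DEVICE)
```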
SUMMARY

An effective archival protection strategy must address these attributes and include:

• Multiple replicas of archived data; more replicas mean greater protection.
• Periodic startup and exercise of mostly non-operational hard disk drives to redistribute mechanical lubricants.
• Periodic migration of data to modern media and systems to prevent bit rot and system obsolescence.
• Storage of archival media in controlled temperature and humidity environments to minimize bit rot and disk corrosion.

Extensive research on operational hard disk drives has shown:

• Written bits are designed to be magnetically stable for approximately 10 years.
• HDD failure statistics do not follow random distributions.
• Data loss can occur without catastrophic failures.

From these findings, we infer the following attributes of data-at-rest on mostly non-operational hard disk drives:

• The longer data has been at rest, the more likely it is to incur errors.
• Media can fail at any point in time without warning.
• Data stored on any single piece of media cannot be assumed to be defect free.

ABOUT IMATION

Imation is a global technology corporation with a history of bringing innovative data management solutions to businesses. Its removable hard disk drive media, better known as Imation RDX, is the scalable storage technology that powers a portfolio of storage and archiving solutions specifically designed to meet the needs of small businesses and midsized enterprises. More information is available at www.imation.com/rdx.


REFERENCES

[1] Evans, R.F., et al., "Thermally induced error: density limit for magnetic data storage," Applied Physics Letters, vol. 100, issue 10, 102402 (2012).
[2] Jiang, W., et al., "Investigations on adjacent-track interference in perpendicular recording systems," Joint NAPMRC 2003, Digest of Technical Papers (Perpendicular Magnetic Recording Conference 2003).
[3] Schroeder, B. and Gibson, G., "Disk failures in the real world: What does an MTTF of 1,000,000 hours mean to you?," Proceedings of the 5th USENIX Conference on File and Storage Technologies (FAST), (2007).
[4] Pinheiro, E., Weber, W.D., and Barroso, L.A., "Failure trends in a large disk drive population," Proceedings of the 5th USENIX Conference on File and Storage Technologies (FAST), (February 2007).
[5] Sun, F. and Zhang, S., "Does hard-disk drive failure rate enter steady-state after one year?," Proceedings of the Annual Reliability and Maintainability Symposium, IEEE, (January 2007).
[6] Shah, S. and Elerath, J.G., "Disk drive vintage and its effect on reliability," Proceedings of the Annual Symposium on Reliability and Maintainability, pages 163–167, (January 2004).
[7] http://www.channelregister.co.uk/2009/01/21/seagate_firmware_fix_breaks_barracudas/
[8] Elerath, J.G., "Hard-disk drives: The good, the bad, and the ugly," Communications of the ACM, vol. 52, no. 6, (2009).
[9] Rosenthal, D.S., "Bit preservation: a problem solved?," The International Journal of Digital Curation, issue 1, vol. 5, (2010).
[10] Schwarz, T., et al., "Disk failure investigations at the Internet Archive," Work-in-Progress Session, NASA/IEEE Conference on Mass Storage Systems and Technologies, (2006).
[11] http://www.mnhs.org/preserve/records/electronicrecords/erstorage.html
