
The limiting factor of tape is access time and bandwidth relative to capacity. If data is written and read too slowly, and seek times climb too high, you start to lose the ability to usefully test, verify, and recover the archive. Looking at the chart in the article, the time to fill a cartridge has exploded recently, from under two hours in the 2000s to over nine now. Can you guarantee nine hours of operation without interruption or power loss?
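To put rough numbers on that, here's a back-of-envelope fill-time calculation in Python. The capacities and speeds are approximate published native (uncompressed) LTO figures from memory, not taken from the article's chart:

    # Rough time-to-fill for one cartridge at native (uncompressed)
    # speed. Figures are approximate published LTO specs (assumed).
    GENERATIONS = {
        "LTO-1 (2000)": (100e9, 20e6),   # 100 GB native, ~20 MB/s
        "LTO-4 (2007)": (800e9, 120e6),  # 800 GB native, ~120 MB/s
        "LTO-8 (2017)": (12e12, 360e6),  # 12 TB native, ~360 MB/s
        "LTO-9 (2021)": (18e12, 400e6),  # 18 TB native, ~400 MB/s
    }

    for gen, (capacity_bytes, speed_bps) in GENERATIONS.items():
        hours = capacity_bytes / speed_bps / 3600
        print(f"{gen}: {hours:.1f} h to fill")

With those assumed specs, the trend matches the article: roughly 1.4 hours to fill an LTO-1 cartridge versus over 9 hours for LTO-8, because capacity has grown much faster than transfer speed.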

Of course, that doesn't make huge tape storage useless, just less useful as a format for accessing a single giant archive. One answer is that not using the whole capacity is fine, and the goal should be to archive a smaller dataset more often. But that demands real engineering work, while many organizations can barely manage to plan for any backup at all.



Tape works well where storage is partitioned across many units of media, access is infrequent relative to total storage, media are well indexed (so that identification, retrieval, and access are trivial), storage is redundant (two or more copies of any given data), and there are multiple read/write heads allowing simultaneous access.
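As a toy illustration of the "well indexed, redundant" part, here's a minimal catalog sketch. The names, barcodes, and layout are invented for the example, not how any particular library software actually does it:

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class TapeLocation:
        cartridge: str   # cartridge barcode
        file_mark: int   # position on tape (e.g. tar file number)

    # Every object maps to 2+ cartridge locations, so losing any
    # single tape is recoverable and retrieval never needs a scan.
    catalog: dict[str, list[TapeLocation]] = {
        "backup-2024-01/db.tar": [
            TapeLocation("AB0001L9", 12),
            TapeLocation("AB0042L9", 7),   # second, independent copy
        ],
    }

    def locate(obj: str) -> list[TapeLocation]:
        copies = catalog[obj]
        assert len(copies) >= 2, "redundancy policy violated"
        return copies

The point of the index is that a restore starts with a dictionary lookup, not a search through the tapes themselves.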

Tape is a very large tank accessed through a very thin (and expensive) straw. But the quantities of storage offered are immense, and the medium is reliable.

To address your concern: if access is constrained, increase the number of available heads. Tape libraries provide for this.
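Here's a sketch of that sizing exercise. Every number is an illustrative assumption, and the ~70% utilization cap is a generic queueing rule of thumb, not a vendor recommendation:

    import math

    # Back-of-envelope drive sizing for a tape library. Model: each
    # restore request = robot swap + seek + sequential read.
    AVG_SWAP_S = 60          # robot mount/unmount plus load (assumed)
    AVG_SEEK_S = 50          # average locate time to mid-tape (assumed)
    READ_BYTES = 200e9       # average restore size per request (assumed)
    DRIVE_BPS = 300e6        # sustained native read speed (assumed)
    REQUESTS_PER_HOUR = 8    # assumed demand

    service_time_s = AVG_SWAP_S + AVG_SEEK_S + READ_BYTES / DRIVE_BPS
    offered_load = REQUESTS_PER_HOUR * service_time_s / 3600  # in Erlangs

    # Keep per-drive utilization under ~70% so queueing delay stays sane.
    drives = math.ceil(offered_load / 0.7)
    print(f"service time ~ {service_time_s / 60:.1f} min per request")
    print(f"drives needed ~ {drives} at {REQUESTS_PER_HOUR} requests/hour")

Under these assumptions each request ties up a drive for about 13 minutes, so sustaining 8 requests an hour takes roughly 3 drives; the same arithmetic tells you when to add heads as demand grows.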



