Journal ArticleDOI

The Future of Magnetic Data Storage Technology

David A. Thompson, J. S. Best
17 Oct 2000-ChemInform (WILEY‐VCH Verlag)-Vol. 31, Iss: 42
TL;DR: This paper reviews the evolutionary path of magnetic data storage and examines the physical phenomena that will prevent continued use of the scaling processes that have served in the past.
Abstract: In this paper, we review the evolutionary path of magnetic data storage and examine the physical phenomena that will prevent us from continuing the use of those scaling processes which have served us in the past. It is concluded that the first problem will arise from the storage medium, whose grain size cannot be scaled much below a diameter of ten nanometers without thermal self-erasure. Other problems will involve head-to-disk spacings that approach atomic dimensions, and switching-speed limitations in the head and medium. It is likely that the rate of progress in areal density will decrease substantially as we develop drives with ten to a hundred times current areal densities. Beyond that, the future of magnetic storage technology is unclear. However, there are no alternative technologies which show promise for replacing hard disk storage in the next ten years.
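The thermal self-erasure limit described above is usually framed as the ratio of a grain's anisotropy energy to thermal energy, Ku·V/(kB·T), with values of roughly 40-60 needed for decade-scale retention. The short Python sketch below illustrates how sharply that ratio collapses with grain diameter; the anisotropy constant Ku is an illustrative value chosen for the example, not a number taken from the paper.

    # Sketch: thermal stability ratio Ku*V/(kB*T) for a single magnetic grain.
    # Ku below is an illustrative anisotropy value, not a figure from the paper;
    # the >= 40-60 rule of thumb is the usual criterion for ~10-year retention.
    import math

    kB = 1.380649e-23          # Boltzmann constant, J/K
    T = 300.0                  # operating temperature, K
    Ku = 2.0e5                 # uniaxial anisotropy energy density, J/m^3 (assumed)

    def stability_ratio(diameter_nm):
        """Return Ku*V/(kB*T) for a spherical grain of the given diameter."""
        d = diameter_nm * 1e-9
        volume = math.pi / 6.0 * d**3      # sphere volume
        return Ku * volume / (kB * T)

    for d_nm in (20.0, 10.0, 7.0):
        print(f"d = {d_nm:4.1f} nm -> Ku*V/(kB*T) = {stability_ratio(d_nm):6.1f}")

With this assumed Ku, a 20 nm grain is comfortably stable while a grain near the 10 nm diameter quoted in the abstract falls below the retention threshold, which is the essence of the superparamagnetic limit.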
Citations
Posted Content
TL;DR: The authors introduce the provable data possession (PDP) model, which allows a client that has stored data at an untrusted server to verify that the server possesses the original data without retrieving it.
Abstract: We introduce a model for provable data possession (PDP) that allows a client that has stored data at an untrusted server to verify that the server possesses the original data without retrieving it. The model generates probabilistic proofs of possession by sampling random sets of blocks from the server, which drastically reduces I/O costs. The client maintains a constant amount of metadata to verify the proof. The challenge/response protocol transmits a small, constant amount of data, which minimizes network communication. Thus, the PDP model for remote data checking supports large data sets in widely-distributed storage systems. We present two provably-secure PDP schemes that are more efficient than previous solutions, even when compared with schemes that achieve weaker guarantees. In particular, the overhead at the server is low (or even constant), as opposed to linear in the size of the data. Experiments using our implementation verify the practicality of PDP and reveal that the performance of PDP is bounded by disk I/O and not by cryptographic computation.
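The toy Python sketch below illustrates only the sampling idea behind remote data checking: verifying a small random subset of blocks catches large-scale data loss with high probability, since a server missing a fraction f of the blocks survives a c-block challenge with probability about (1-f)^c. It uses simple per-block HMAC tags for readability and is not the paper's PDP construction, which relies on homomorphic verifiable tags to keep proofs constant-size without returning the blocks themselves.

    # Toy sketch of sampling-based remote data checking. NOT the PDP scheme from
    # the paper (which uses homomorphic verifiable tags and constant-size proofs);
    # it only shows why checking a small random sample detects large-scale loss.
    import hmac, hashlib, secrets

    def tag(key, index, block):
        return hmac.new(key, index.to_bytes(8, "big") + block, hashlib.sha256).digest()

    # Client side: tag blocks before outsourcing them.
    key = secrets.token_bytes(32)
    blocks = [f"block-{i}".encode() for i in range(1000)]
    tags = [tag(key, i, b) for i, b in enumerate(blocks)]      # stored with the server

    # Challenge: client samples random indices; an honest server answers with
    # the requested blocks and their tags.
    challenge = [secrets.randbelow(len(blocks)) for _ in range(30)]
    response = [(i, blocks[i], tags[i]) for i in challenge]

    # Verification: recompute each sampled tag under the secret key.
    ok = all(hmac.compare_digest(tag(key, i, b), t) for i, b, t in response)
    print("possession check passed:", ok)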

2,127 citations

Journal ArticleDOI
29 Sep 2005-Nature
TL;DR: This work presents an autonomous ordering and assembly of atoms and molecules on atomically well-defined surfaces that combines ease of fabrication with exquisite control over the shape, composition and mesoscale organization of the surface structures formed.
Abstract: The fabrication methods of the microelectronics industry have been refined to produce ever smaller devices, but will soon reach their fundamental limits. A promising alternative route to even smaller functional systems with nanometre dimensions is the autonomous ordering and assembly of atoms and molecules on atomically well-defined surfaces. This approach combines ease of fabrication with exquisite control over the shape, composition and mesoscale organization of the surface structures formed. Once the mechanisms controlling the self-ordering phenomena are fully understood, the self-assembly and growth processes can be steered to create a wide range of surface nanostructures from metallic, semiconducting and molecular materials.

2,013 citations

Proceedings Article
28 Jan 2002
TL;DR: The feasibility of the write-once model for storage is demonstrated using data from over a decade's use of two Plan 9 file systems; because the system stores archival data on magnetic disks, access times are comparable to those for non-archival data.
Abstract: This paper describes a network storage system, called Venti, intended for archival data. In this system, a unique hash of a block's contents acts as the block identifier for read and write operations. This approach enforces a write-once policy, preventing accidental or malicious destruction of data. In addition, duplicate copies of a block can be coalesced, reducing the consumption of storage and simplifying the implementation of clients. Venti is a building block for constructing a variety of storage applications such as logical backup, physical backup, and snapshot file systems. We have built a prototype of the system and present some preliminary performance results. The system uses magnetic disks as the storage technology, resulting in an access time for archival data that is comparable to non-archival data. The feasibility of the write-once model for storage is demonstrated using data from over a decade's use of two Plan 9 file systems.
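A minimal Python sketch of the content-addressed, write-once idea is given below: the block identifier is a hash of the block's contents, so duplicate writes coalesce automatically and existing data cannot be overwritten in place. SHA-256 is used purely for illustration, and Venti's actual on-disk layout and index are ignored.

    # Minimal sketch of a content-addressed, write-once block store in the
    # spirit of Venti: the identifier (score) of a block is a hash of its
    # contents, so identical blocks coalesce and data cannot be overwritten
    # in place. Illustration only; Venti's real on-disk structures differ.
    import hashlib

    class BlockStore:
        def __init__(self):
            self._blocks = {}

        def write(self, data):
            score = hashlib.sha256(data).digest()
            # Duplicate writes map to the same score and store nothing new.
            self._blocks.setdefault(score, data)
            return score

        def read(self, score):
            data = self._blocks[score]
            # Self-verifying read: contents must hash back to the identifier.
            assert hashlib.sha256(data).digest() == score
            return data

    store = BlockStore()
    s1 = store.write(b"archival data")
    s2 = store.write(b"archival data")     # coalesced with the first copy
    assert s1 == s2 and store.read(s1) == b"archival data"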

956 citations


Cites background from "The Future of Magnetic Data Storage..."

  • ...The last decade, however, has seen the capacity of magnetic disks increase at a far faster rate than optical technologies [20]....


Journal ArticleDOI
TL;DR: In this paper, a new scanning-probe-based data-storage concept called the "millipede" is presented, which combines ultrahigh density, terabit capacity, small form factor, and high data rate.
Abstract: We present a new scanning-probe-based data-storage concept called the "millipede" that combines ultrahigh density, terabit capacity, small form factor, and high data rate. Ultrahigh storage density has been demonstrated by a new thermomechanical local-probe technique to store, read back, and erase data in very thin polymer films. With this new technique, nanometer-sized bit indentations and pitch sizes have been made by a single cantilever/tip into thin polymer layers, resulting in data storage densities of up to 1 Tb/in². High data rates are achieved by parallel operation of large two-dimensional (2-D) atomic force microscope (AFM) arrays that have been batch-fabricated by silicon surface-micromachining techniques. The very large-scale integration (VLSI) of micro/nanomechanical devices (cantilevers/tips) on a single chip leads to the largest and densest 2-D array of 32×32 (1024) AFM cantilevers with integrated write/read/erase storage functionality ever built. Time-multiplexed electronics control the functional storage cycles for parallel operation of the millipede array chip. Initial areal densities of 100-200 Gb/in² have been achieved with the 32×32 array chip.
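The time-multiplexed control mentioned above can be pictured with the schematic Python sketch below, in which one row of a 32×32 array is selected per time slot and its tips are read in parallel. This is an illustration of the multiplexing idea only; the data layout and function names are invented for the example and are not taken from the millipede electronics.

    # Schematic sketch of time-multiplexed row addressing for a 2-D probe
    # array, in the spirit of the 32x32 millipede chip: one row of cantilevers
    # is selected per time slot and its 32 tips are read in parallel.
    # Illustration of the multiplexing idea only, not the actual electronics.
    ROWS, COLS = 32, 32

    def read_row(medium, row):
        """Read all tips in one selected row in parallel (here: a list slice)."""
        return medium[row][:]                 # 32 bits come back per time slot

    def read_sector(medium):
        """Scan the whole array: ROWS time slots, COLS bits per slot."""
        image = []
        for row in range(ROWS):               # time-multiplexed row select
            image.append(read_row(medium, row))
        return image

    # Example: a dummy 32x32 field of bit indentations (1 = indentation present).
    medium = [[(r ^ c) & 1 for c in range(COLS)] for r in range(ROWS)]
    bits = read_sector(medium)
    print(f"read {ROWS * COLS} bits in {ROWS} multiplexed time slots")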

800 citations


Cites methods from "The Future of Magnetic Data Storage..."

  • ...The objectives of our research activities within the Micro- and Nanomechanics project at the IBM Zurich Research Laboratory are to explore highly parallel AFM data storage with areal storage densities far beyond the expected superparamagnetic limit (100 Gb/in²) [7] and data rates comparable to those…


Journal ArticleDOI
TL;DR: In this article, a bit-patterned media (BPM) fabrication method was proposed for magnetic data recording at > 1 Tb/in 2 and circumvents many of the challenges associated with extending conventional granular media technology.
Abstract: Bit-patterned media (BPM) for magnetic recording provides a route to thermally stable data recording at >1 Tb/in² and circumvents many of the challenges associated with extending conventional granular media technology. Instead of recording a bit on an ensemble of random grains, BPM comprises a well-ordered array of lithographically patterned isolated magnetic islands, each of which stores 1 bit. Fabrication of BPM is viewed as the greatest challenge for its commercialization. In this paper, we describe a BPM fabrication method that combines rotary-stage e-beam lithography, directed self-assembly of block copolymers, self-aligned double patterning, nanoimprint lithography, and ion milling to generate BPM based on CoCrPt alloy materials at densities up to 1.6 Td/in². This combination of novel fabrication technologies achieves the required island feature sizes; recording experiments at these densities (roughly equivalent to 1.3 Tb/in²) demonstrate a raw error rate on the order of 10^-2, which is consistent with the recording system requirements of modern hard drives. Extendibility of BPM to higher densities and its eventual combination with energy-assisted recording are explored.
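As a rough geometric check on what these densities imply, the Python sketch below converts an areal dot density into the centre-to-centre island pitch of a square lattice; it is a back-of-the-envelope unit conversion, not a figure from the paper.

    # Back-of-the-envelope: island pitch implied by a given dot density on a
    # square lattice. Pure unit conversion, not figures from the paper.
    NM_PER_INCH = 2.54e7

    def pitch_nm(dots_per_sq_inch):
        """Centre-to-centre island pitch (nm) for a square lattice."""
        area_per_dot_nm2 = NM_PER_INCH**2 / dots_per_sq_inch
        return area_per_dot_nm2 ** 0.5

    for density in (1.0e12, 1.6e12):   # 1.0 and 1.6 Td/in^2
        print(f"{density / 1e12:.1f} Td/in^2 -> pitch ~ {pitch_nm(density):.1f} nm")

At 1.6 Td/in² the islands sit on roughly a 20 nm pitch, which gives a sense of why fabrication is viewed as the greatest challenge for commercialization.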

461 citations

References
Journal ArticleDOI
David A. Thompson, J. S. Best
TL;DR: The evolutionary path of magnetic data storage is reviewed and the physical phenomena that will prevent the use of those scaling processes which have served us in the past are examined, finding that the first problem will arise from the storage medium, whose grain size cannot be scaled much below a diameter of ten nanometers without thermal self-erasure.

303 citations