Talk:Hybrid drive

From Wikipedia, the free encyclopedia

This article does not address any of the possible flaws in the design of this technology. As it is an emerging technology, I don't expect all the details to be hammered out, but I see flaws at a fundamental level.

Modern operating systems and hardware organize memory hierarchically. This is usually done such that the smaller, faster tier above a given level of memory contains a subset of that level's contents. The reverse is also true: the contents of any level of the hierarchy are replicated in the level below it. If the cache below any level is too small to contain all of the data at a higher level, all accesses to it (that were not cached at the larger, higher level) will be misses if both schemes exploit temporal locality. (This can be mitigated somewhat by prefetching.) Thus, since operating systems buffer disk I/O using RAM and since most systems support large amounts of RAM, most if not all cache misses in the RAM will also miss the FLASH memory and require a magnetic disk access. Thus it does not provide as significant an advantage as it initially sounds; for the first four benefits, the disk should already be spun down.
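The inclusion argument above can be checked with a tiny simulation (my own illustration, not from the discussion; the cache sizes and access pattern are arbitrary). With LRU replacement, a smaller cache's contents are always a subset of a larger cache's contents over the same access stream (the "stack property"), so an access that misses the larger RAM buffer can never hit the smaller flash tier:

```python
# Simulate a large RAM disk buffer and a smaller flash cache seeing the
# same block-access stream, both using LRU replacement.
from collections import OrderedDict
import random

class LRUCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self.entries = OrderedDict()

    def access(self, block):
        hit = block in self.entries
        if hit:
            self.entries.move_to_end(block)       # mark most recently used
        else:
            self.entries[block] = True
            if len(self.entries) > self.capacity:
                self.entries.popitem(last=False)  # evict least recently used
        return hit

ram = LRUCache(capacity=64)    # larger, faster tier (OS disk buffer in RAM)
flash = LRUCache(capacity=16)  # smaller tier (the drive's flash)

random.seed(1)
flash_hits_after_ram_miss = 0
for _ in range(10_000):
    block = random.randrange(200)   # arbitrary 200-block working set
    ram_hit = ram.access(block)
    flash_hit = flash.access(block)  # flash sees the same stream
    if not ram_hit and flash_hit:
        flash_hits_after_ram_miss += 1

print(flash_hits_after_ram_miss)  # 0: every RAM miss is also a flash miss
```

This only models the pure-cache case the comment describes; prefetching or pinning policies (which the flash could implement and RAM typically doesn't) would break the strict inclusion.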

Furthermore, FLASH technology is slower than magnetic disk technology in terms of sustained write throughput. According to Samsung's website, OneFLASH (the technology cited in the external article) is capable of ~10 MB/s of sustained writing, whereas a typical 7200 rpm hard drive is capable of at least 40 MB/s sustained. While reading is faster at 108 MB/s, the data has to exist in the FLASH before it can gain this benefit, which goes back to the previous problem: a disk access penalty is taken.
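A quick back-of-the-envelope check of those quoted figures (the file size is an arbitrary example of mine; the throughput numbers are the ones claimed in the comment above, not measurements):

```python
# Throughput figures as quoted in the discussion (MB/s).
flash_write_mb_s = 10   # claimed OneFLASH sustained write
disk_write_mb_s = 40    # claimed 7200 rpm HDD sustained write
flash_read_mb_s = 108   # claimed OneFLASH sustained read

file_mb = 400  # illustrative file size

print(file_mb / flash_write_mb_s)        # 40.0 s to stage the file into flash
print(file_mb / disk_write_mb_s)         # 10.0 s to write it straight to disk
print(round(file_mb / flash_read_mb_s, 1))  # ~3.7 s to read back from flash
```

So at these figures the fast flash read only pays off after eating a 4x slower write to get the data there in the first place, which is the penalty the comment describes.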

You're forgetting that the FLASH inside the drive doesn't have to be a single bank; it could be a rack of sorts where, e.g., 5 FLASH banks read and write concurrently, effectively multiplying the read and write speed. All it needs is multiple FLASH banks and separate caches for them. Expensive? Probably. Reliable? Probably not. Possible? Yes. - G3, 07:56, 10 November 2006 (UTC)
In addition, while a magnetic disk is capable of 40 MB/s sustained writes, it is much slower for random access reads. This is where FLASH is much faster, as its random-access seek times are essentially zero. The question is: are sequential or random reads more common? It depends on the task. Video editing -> sequential; most other things -> random.
No, this previous paragraph completely misses the point. If it's random access, either it's already in the RAM (which is *far*, *far* faster than any current Flash drive) and doesn't need to be read from the hard disk, or, if it's not, then it's probably not in the Flash storage either. For sequential reads, caching in RAM will take care of the vast majority of reads, and the Flash part of the hard drive does not really aid much. The only case where it is indeed very useful is for machines with very limited amounts of RAM.
I agree. Think of it as a drive with a huge cache, write back enabled, and set to spin down very quickly. The main reason behind it is, I think, not so much the unique features of Flash, but the fact that disk drives still grow very quickly in capacity, and a commensurate amount of buffer RAM is expensive and needs a lot of power. I don't see the issue with write data rates - the Flash buffer would only have to sink the data until the disk has spun up, then it could act as an ordinary write buffer (when the RAM write-back buffer overflows). So, bottom line, it can't get any slower than current drives (as soon as the platter has spun up) because the data doesn't *have* to go into the Flash. As for read caching, a GB is probably more than your usual Windows-based laptop uses for caching, so it should give a better hit rate. My machine has 2 GB and Windows uses "only" 600-odd MB for caching right now. Ralf-Peter 01:18, 6 December 2006 (UTC)

Regarding instant boot: if the flash is used as a cache, which the wiki article suggests, the OS would need to be preloaded into the flash at every shutdown, unless it had not yet been evicted. This simply moves the delay; it does not remove it.

You could always keep the first X blocks in Flash. But yeah, all those system DLLs will probably get displaced sooner or later. Unless you have special commands... Send "LOCK EVERYTHING FROM NOW ON IN FLASH" from the BIOS, then turn it off once the boot phase is over. Ralf-Peter 01:18, 6 December 2006 (UTC)

tw 20:08, 15 June 2006 (UTC)

That's valid material, and it sounds like you've done your research thoroughly. I suppose we could add another section detailing those concerns, or possibly a pro/con comparison with regular disk drives. But I suppose we really won't know too much until the drives actually come out in a year or so.

Still on the subject of disadvantages: what would happen to the hard drive when the flash memory wears out? --Pinnecco 07:53, 4 August 2006 (UTC)

Whether intentionally or not, parts of this article read like a promotion, so I changed the language a little. Also, put things in present tense, as these drives have been developed, even though they haven't been marketed (I understand my reasoning may be a little shaky, feel free to disagree). Last, the "Advantages" section is now an ordered list. Sloverlord 18:45, 23 September 2006 (UTC)

I don't see how this article is controversial / not neutral. It's new technology, so perhaps some of the benefits will not be realized, but this can be fixed by changing statements about the present to the future. (For example, change "This offers benefits" to "This promises benefits", and adding a disclaimer at the top of the "Benefits" section that "Because this technology has yet to ship, it remains to be seen which of these benefits materialize." or something like that.) The primary benefits of flash for this application are fast random writes (for enterprise workloads), less power/heat/noise (for laptops and similar), and fast booting for consumers (hard drives limit boot time to spinup time -- see LinuxBIOS). It's also cheaper than RAM -- I could imagine copying movies to flash and playing with the drive spun-down. Given that random reads require knowing what will be read in advance (unlike writes), I don't see a big benefit. Also, it's premature to claim better reliability. - James, 26 November


DeFrag

The article does not talk about disk defragmentation. I could see that with this technology the hard drive could automatically defragment as you work. Is there any work in this area? Zginder 22:33, 3 November 2006 (UTC)

Defragmentation is a file system thing. Nothing the disk drive can do by itself. Unless you want to implement NTFS on the drive. Ralf-Peter 00:55, 6 December 2006 (UTC)

Flash Write/Rewrite limit?

Isn't there a limit to the number of times you can write and rewrite data to a flash drive? Pdinc 13:43, 10 November 2006 (UTC)

Yes there is; read this: http://ask-leo.com/can_a_usb_thumbdrive_wear_out.html There must exist some solution to this, otherwise hybrid drives will only last a few years (and gradually become slower). --Sire404 08:38, 22 November 2006 (UTC)

Not completely sure, but I got this from the Flash Drive article - "Modern NAND-based flash drives often last for 500,000 or more erase/write cycles." Paddyman1989 12:50, 16 November 2006 (UTC)
Even if true, 500k is not as much as it seems for cache-like usage patterns, which are the touted benefit of these drives in the first place. I would be wary of buying such a drive until third-party tests under typical (and atypical) usage reveal their true lifespans. Eriol Ancalagon 05:44, 2 December 2006 (UTC)
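For a rough sense of scale, here is a lifetime estimate under the figures discussed above. All inputs are illustrative assumptions of mine: 1 GB of flash, perfect wear levelling, the quoted 500,000 erase/write cycles, and two guessed average write rates:

```python
# Total data writable before hitting the erase-cycle limit, assuming writes
# are spread perfectly evenly across the whole flash (ideal wear levelling).
flash_gb = 1
cycles = 500_000
total_writable_mb = flash_gb * 1024 * cycles  # 512,000,000 MB

def lifetime_years(avg_write_mb_s):
    seconds = total_writable_mb / avg_write_mb_s
    return seconds / (3600 * 24 * 365)

print(round(lifetime_years(1), 1))   # ~16.2 years at a light 1 MB/s average
print(round(lifetime_years(10), 1))  # ~1.6 years at the full 10 MB/s sustained
```

So the "only last a few years" worry above holds if the flash is written hard continuously, while light cache duty stretches the endurance well past the drive's mechanical lifetime; real usage sits somewhere in between.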

Commercial article?

This article reads more like an ad than an encyclopedic article. Where are the possible disadvantages of this type of drive (including data corruption on power loss, rather limited write/erase cycles, higher cost)? I would also like to see few words written about compatibility with different OSes and older systems. I NPOVed this because of the lack of critique or, indeed, neutrality. - G3, 07:45, 10 November 2006 (UTC)

Flash Memory

We could simply provide a link to the flash memory wiki so people can see all the limitations of the technology.

Disadvantages

While this article discusses many of the "advantages" of using flash media as an intermediary, no attention has been paid to the service life of the media itself. I wouldn't want a new hard drive that I would have to overhaul every year. The longevity of flash memory is only about 1,000,000 writes per block on the best media, and far less on others. Whereas hard drives are not intended to be thrown out every few years like a common thumb drive, the use of flash media, even with a wear-levelling component, may significantly reduce the useful life of these hard drives. I think the advantages section needs a counterpoint of disadvantages...
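A minimal sketch (invented for illustration, with toy numbers) of why the wear-levelling component mentioned above matters: without it, a hot logical block burns out one physical block; with even round-robin remapping, the same writes are spread across every block:

```python
NUM_BLOCKS = 8
WRITES = 8_000  # repeated writes to one hot logical block

# Without wear levelling: the hot logical block always maps to physical block 0.
naive_wear = [0] * NUM_BLOCKS
for _ in range(WRITES):
    naive_wear[0] += 1

# With round-robin wear levelling: each write lands on the next physical block.
levelled_wear = [0] * NUM_BLOCKS
cursor = 0
for _ in range(WRITES):
    levelled_wear[cursor] += 1
    cursor = (cursor + 1) % NUM_BLOCKS

print(max(naive_wear))     # 8000 erase cycles concentrated on one block
print(max(levelled_wear))  # 1000: worst-case wear cut by a factor of 8
```

Real controllers use more elaborate remapping than round-robin, but the effect is the same: per-block endurance is multiplied by roughly the number of blocks the writes can be spread over.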

This still reads as though it is a promotion of the technology rather than a factual accounting of it.


What about the number of start/stop cycles? Yeah, I know these days we have ramp load in most notebook drives, rated for several hundred thousand cycles, but still. Ralf-Peter 00:58, 6 December 2006 (UTC)


Microsoft has so far failed to separate read-only components such as the bootloader and kernel (and possibly kernel modules) from the regular filesystem, AFAICT. Unices can boot from flash, and appliances with other OSes (think of MP3 players, routers, etc.) also often boot from flash. The advantage they exploit is that such mostly read-only data is not updated every day, hence the flash media is not worn out too much. Someone having an idea for a hybrid drive is just driving another marketing hype. I mean, what would the implementation look like? (1) Does a hybrid drive show me two block devices, e.g. /dev/sda and /dev/sdb -- one for the flash area, one for the platter area? Then this "hybrid drive" would just be an enclosure around two of today's drives (CF+disk), to put it bluntly. (2) Does a hybrid drive appear as one, /dev/sda? If so, how do you control what goes into the flash area and what does not? By adding more ATA commands cluttering the standard? No thanks; then I'd rather go with (1). Oh, and BTW, if booting is really _that_ slow that we need flash-driven hard disks, it is time to change OSes or remove some crap. j.engelh 00:45, 10 December 2006 (UTC)

Is this right?

The second instance is when the user must access a new file from the hard drive that is not already stored in the buffer. In this case, the platters must spin up to access the file and place it onto the buffer, whereupon the platters will once again return to an off state.

I don't know anything about this technology, but logically this seems wrong. When a file is read off disk, normally it ends up in the hard disk's (volatile) cache, as well as in the computer's main (volatile) memory cache. There's no particular advantage in filling up the (non-volatile) "buffer" with read only data. Can someone verify this? Stevage 02:18, 22 November 2006 (UTC)

I think the advantage is that the buffer is much larger than traditional disk caches, and that it can be used for writes as well as reads, since it's non-volatile. David McCabe 00:36, 25 November 2006 (UTC)
For a computer that is up 24/7, there is almost no advantage to a flash-based cache over RAM at similar sizes, save cost. At similar price points, a flash-based cache should allow for larger cache sizes than adding system memory would, which can significantly improve performance if the larger cache eliminates disk access for the cached items entirely. The major advantage seems to be for laptop users who do complete shutdowns frequently rather than using a hibernate feature. For those usage patterns, preserving temporal locality across multiple boots can be a significant advantage.

Biased toward Windows

computers using hybrid drives may be able to achieve extremely fast (under 10 seconds) or even near-instant boot up times.

Microsoft Windows is one of the few beasts that needs more than 10 seconds to boot up. Under 10 seconds is in no way _extremely_ fast, given the technology and computing power available today. Don't let computer users think Windows' slowness is the standard.

If you make a modification that lets your VW Beetle do 0-100 in 5 seconds, I would call that "extremely fast". It doesn't matter what a Ferrari or McLaren F1 would do... Stevage 23:22, 22 November 2006 (UTC)
Arguing semantics, I know, but I would also reason that it is not `extremely fast': by your analogy the McLaren F1 is extremely fast (I'm sure 0-100 in 5 is no easy feat - but what about corner speeds?), while the VW has merely received a significant performance boost, or any other language that describes a relative rather than absolute speed differential. -- —The preceding unsigned comment was added by 24.221.12.89 (talk • contribs) November 26, 2006 @ 17:14 (UTC).
Hm, my Fedora 6 linux laptop takes 1 min 29 seconds to load (from the moment when I select linux in GRUB to the login screen). -- Convex hull 02:48, 27 November 2006 (UTC)

Perhaps another distro would be better? Arch Linux with all sorts of services boots in about 30 seconds on an old 2.4 GHz laptop.

When writing the initial statement, I was thinking about my QNX 6 (www.qnx.com), which takes between 10 and 15 seconds to boot up on a 400 MHz Celeron, from the moment I press the switch to the login screen - half this time for the BIOS, the other half for the OS. The OS is complete with GUI, network, browser, etc.: everything a basic user would need to use a computer. Another example: a very lightweight DOS install on an 8 MHz 80286 takes 5 seconds to boot, but in that case you don't have anything fancy (you don't have anything at all, in fact)...
OK, I'm sure there are lightweight OSes which boot in under 10 secs from a regular drive. However, stating that "Microsoft Windows is one of the few beasts that need more than 10 seconds to boot up" is just silly. -- Convex hull 07:47, 2 December 2006 (UTC)
QNX is far from being 'lightweight'... just more efficient, and if they can be that efficient, every other OS should be too. As a side note, I just realized: QNX and every other OS I saw booting in less than 10 seconds used microkernels, while Windows, Linux, and the other OSes I saw booting in more than 10 seconds are MACROkernels. Anyway, we are getting far off-topic. —The preceding unsigned comment was added by 81.240.46.231 (talk • contribs) 10:18, 3 December 2006 (UTC).

Improved Reliability?

I think the whole Improved Reliability claim is complete bunk, at least on desktop computers. The most damaging thing a hard drive normally has to do is spin up the platters, which is effectively equivalent to a cold boot. For example (data taken from a random Samsung laptop HDD page):

  • Linear shock (1/2 sine pulse): operating, 2 ms 325 G; non-operating, 1 ms 1000 G

2 ms x 325 G worth of acceleration is roughly equal to dropping the HDD from a height of 2 meters, and that's without any case absorption/deformation, while 1 ms x 1000 G means 4.9 meters. For some reason, Samsung doesn't report start/stop cycles for laptop HDDs, but browsing through various desktop HDDs shows the following data:

  • Start/stop cycles (ambient): 50,000

50,000 start/stop cycles will normally cover 50,000 boots = a lot, but if the HDD has to spin up, say, every 20 minutes, then that's only about 16,700 hours of operating time, which roughly equals two years - in optimal starting conditions. Not to mention that while spinning up, the HDD draws much more power than in normal running mode, putting extra stress on other components (including a laptop battery), and it will stall all other activity in Windows XP unless you have a dual-thread (core/HT/2-processor) capable computer. - G3, 04:30, 13 December 2006 (UTC)
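The arithmetic in the two comments above checks out under a simple rectangular-pulse approximation (a real half-sine pulse transfers less velocity, so the drop heights are upper bounds; this is my own sanity check, not from the spec sheet):

```python
G = 9.81  # standard gravity, m/s^2

def drop_height_m(g_level, pulse_s):
    # Velocity change over a rectangular pulse of the given G level and
    # duration, converted to the free-fall height that produces that speed.
    dv = g_level * G * pulse_s
    return dv ** 2 / (2 * G)

print(round(drop_height_m(325, 0.002), 1))   # ~2.1 m  (operating shock)
print(round(drop_height_m(1000, 0.001), 1))  # ~4.9 m  (non-operating shock)

# Start/stop cycle lifetime: 50,000 cycles, one spin-up every 20 minutes.
hours = 50_000 * 20 / 60
print(round(hours))                  # ~16667 hours of operating time
print(round(hours / (24 * 365), 1))  # ~1.9 years, matching the "two years"
```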

In fact, the only place where a hybrid drive improves reliability is on a mobile platform with sporadic need of HDD storage (e.g. a car computer, but not one which needs constant HDD access, like data logging). In all other cases a hybrid drive deteriorates reliability (well, except if you move your desktop computer around a lot, or use a laptop while, e.g., bungee jumping). - G3, 04:52, 13 December 2006 (UTC)