Talk:Solid-state drive
This article:
- Does not conform to WP:MOS and therefore needs to be wikified, hence {{wikify}}
- Has no categories, hence {{uncat}}
- Does not give sources, hence {{sources}}, and
- Has been proposed for merging. The merge should be discussed; the tag cannot be removed until this has been discussed. Rich257 09:11, 11 October 2006 (UTC)
This article now has Category:Solid-state computer storage media. It still needs the other tags. Athaenara 01:51, 10 November 2006 (UTC)
Merge completed
The merge was advertised on one talk page for 3 months (Oct 2006) with no objections, and agreed 5-0 on the other. I have therefore merged them fully; see Talk:Solid state disk. FT2 (Talk | email) 01:34, 18 January 2007 (UTC)
The decision to move this article based on the opinions of five people, without grounding it in actual research, was wrong. It violates the no-original-research principle. Since you have already achieved this thetan level, why don't you move RAM disk too? While you're at it, pay a visit to logical disk as well, since logical disks are not actual circular objects. SSG (talk) 00:30, 11 March 2008 (UTC)
Reference
"Subsequent investigations into this field, however, have found that data can be recovered from SSD memory." I think it's appropiate to put a reference for that statement. —Preceding unsigned comment added by 72.50.39.149 (talk) 23:29, 11 November 2007 (UTC)
Giga versus gibi?
Hard drives are measured in gigabytes and memory is measured in gibibytes; what are SSDs measured in? —Preceding unsigned comment added by 62.95.76.2 (talk) 15:01, 18 January 2008 (UTC)
- I second that question. There seems to be evidence that SSD capacity is measured using decimal notation like HDDs, but I'm unsure whether formatting an SSD involves additional overhead that would account for this variance. pattersonc (talk) 01:56, 2 March 2008 (UTC)
- The answer is: both SSD and HDD manufacturers use decimal notation, where 1 MB = 1 million bytes. (SOURCE) pattersonc (talk) 02:04, 2 March 2008 (UTC)
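To make the two conventions concrete, a short illustrative sketch (the 64 GB size is just an example, not a claim about any particular drive):

```python
# Decimal (SI) vs. binary (IEC) capacity units for a nominal "64 GB" drive.
GB = 10**9        # gigabyte, as used by SSD and HDD manufacturers
GiB = 2**30       # gibibyte, as typically reported by operating systems

nominal_bytes = 64 * GB
print(nominal_bytes / GiB)   # ~59.6 GiB -- why a "64 GB" drive shows less in the OS
```

The ~7% gap between the two units is often mistaken for formatting overhead, which is why the question above keeps coming up.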
[edit] "For example, some x86 architectures have a 4 GB limit"
I thought ALL x86 architectures had a 4 GB limit, because that's the limit of combinations of a 32-bit memory address; wasn't that one of the prime reasons for switching to the 64-bit standard? --KX36 14:57, 8 February 2007 (UTC)
- I think not: modern x86 CPUs support PAE (although 32-bit Windows generally doesn't, except with some rare drivers), so they can address more than 4 GB at the cost of lower performance.
However, you CANNOT extend the 4 GB limit with a swap file. Pages allocated in a swap file are no different from pages allocated in RAM: their addresses must still fall within the same 4 GB total. Once you have 4 GB of RAM you don't even need a swap file; it will not be used.
Agreed. Operating systems don't just re-address memory pell-mell when they swap it to disk. Here's a good article on the 4GB limit and memory addressing in general: Understanding Address Spaces and the 4GB Limit ◗●◖ falkreon (talk) 05:30, 10 December 2007 (UTC)
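To make the numbers in this thread concrete, a short illustrative sketch (the 36-bit figure is PAE's standard physical address width):

```python
# A 32-bit address can name 2**32 distinct bytes -- the classic 4 GiB limit.
flat_32bit = 2**32
print(flat_32bit)                  # 4294967296 bytes = 4 GiB

# PAE widens the *physical* address to 36 bits, so the machine can hold
# more RAM, but each process still sees at most a 4 GiB *virtual* space.
pae_physical = 2**36
print(pae_physical // 2**30)       # 64 GiB of addressable physical memory

# Swap does not widen the address: a swapped-out page occupies a slot in
# the same virtual address space, merely backed by disk instead of RAM.
```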
[edit] "First company"
The history section was inaccurate and remains full of holes.
The first company to launch a flash-based solid-state drive certainly did not do so as late as 1995, since Psion PLC was already selling its "SSD" units from 1989 onwards. See, for example, http://3lib.ukonline.co.uk/historyofpsion.htm
I have no idea which company was first, but Psion sold "solid state" drives from 1984. The earlier ones were UV EPROMs or battery-backed static RAM, with flash models introduced later. —The preceding unsigned comment was added by CecilWard (talk • contribs) 21:19, 24 February 2007 (UTC).
Cambridge Computer had one in its Z88 portable computers (EEPROM-based) as early as 1988, and it was viewed at the time as one of the most innovative products on the market. - John Hancock
Read/Write Cycles
I'm not sure whether this is marketing talk, but since no source is cited in the disadvantages section, I think this is apt:
"Q: Is there currently some sort of technical limitation on the creation of SSDs other than cost, and what about the reliability of flash media?
A: Historically SSDs were limited in the number of R/W cycles. However, with modern flash technology and error correction, the reliability of the flash in a PC exceeds 10 years. " [1]
The Compact_Flash article states a read/write cycle endurance of up to 300,000.
The Read-only_memory#EPROM.2FEEPROM.2FEAROM_lifetime article states up to 100,000.
Read/Write Cycles
The claim formerly on this page that endurance is not a problem, with the reference to storagesearch, is incorrect. It is true that if the drive were overwritten as a whole, over and over again, it would last a very long time. The problem is that this is not a common access pattern. Under GNU/Linux, if you're running a web server, /var/log/apache/access.log gets written with each access. At one access per second, you're overwriting the same spot on the drive 86,400 times per day, and your SSD fails after 2-3 days at most (real-world flash typically endures 100,000 write cycles, or 300,000 at the high end; 1-5 million are slightly exaggerated marketing figures, and at least the high end of that range is not actually achieved with today's technology). On a desktop GNU/Linux box, there are log files that get written many times per day, and access times get marked on common files every couple of minutes at most. Similar issues exist with Windows. Flash drives used naively will fail within at most a few months of desktop use. Many embedded network devices come with flash for log files, but the flash is a replaceable part that typically wears out after some use and needs to be replaced.
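The failure-time arithmetic above can be checked directly; a minimal back-of-the-envelope sketch, using only the cycle ratings quoted in the comment:

```python
# Lifetime of a single flash block rewritten once per second with no wear
# leveling, using the cycle ratings quoted in the comment above.
writes_per_day = 24 * 60 * 60          # one log append per second = 86,400/day

for endurance in (100_000, 300_000):   # typical and high-end cycle ratings
    print(endurance, "cycles ->", round(endurance / writes_per_day, 1), "days")
# 100,000 cycles -> ~1.2 days; 300,000 cycles -> ~3.5 days
```

That is on the order of one to a few days, in line with the comment's "2-3 days" estimate.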
I've seen both the desktop and the embedded failures occur (on the desktop with a naive user using a CF-IDE converter; in the embedded case, replacing flash was just standard maintenance). I haven't seen the server case occur, because all the sysadmins I know using SSDs are intelligent enough to manage the endurance issues.
The failures can be mitigated through intelligent software. OLPC spurred the rapid development of flash-optimized file systems for GNU/Linux. These intentionally stagger writes over the whole drive, so that no single block gets worn down. Hybrid flash/non-flash drives use the flash as a cache and, again, can intelligently manage which part of the flash gets used with each write. All-flash drives have their place and can be managed so as not to fail, but the endurance issue does occur and does need to be managed. Many SSDs have firmware to manage this, but many of the SSDs I have dealt with do not. It is an issue the user needs to be aware of. I have corrected the page to reflect that. 68.160.152.190 21:32, 4 June 2007 (UTC)
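A minimal sketch of the "stagger writes over the whole drive" idea, assuming a hypothetical 8-block device (deliberately simplified: real wear leveling also involves logical-to-physical remapping, static-data rotation, and garbage collection):

```python
# Naive dynamic wear leveling: always write to the least-worn block,
# so no single block absorbs all the erase/program cycles.
class WearLeveler:
    def __init__(self, num_blocks):
        self.erase_counts = [0] * num_blocks   # wear per physical block

    def pick_block(self):
        # Choose the physical block with the fewest erases so far.
        block = min(range(len(self.erase_counts)),
                    key=self.erase_counts.__getitem__)
        self.erase_counts[block] += 1
        return block

leveler = WearLeveler(num_blocks=8)
for _ in range(16):                      # 16 writes spread over 8 blocks...
    leveler.pick_block()
print(leveler.erase_counts)              # ...each block erased only twice
```

Under this policy, the earlier example's 86,400 daily log writes are spread across every block of the drive instead of hammering one spot, multiplying the lifetime by roughly the number of blocks.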
Is there any citation for the "1-5 million are slightly exaggerated" value? Disclaimer: I am NOT working for any company, just curious.
Can you list models or names of SSDs with and without "wear-leveling" ? —Preceding unsigned comment added by Whyagainwiki (talk • contribs) 18:52, 27 March 2008 (UTC)
Merging RAM disk article
I am against it, as RAM disks are different: SSDs use non-volatile memory, while RAM is volatile. --soum talk 16:35, 7 June 2007 (UTC)
- I'm against it too; they're completely different things. - 83.116.205.167 07:52, 17 June 2007 (UTC)
- Against it. I second the above, and add that RAM disks are virtual while SSDs are physical. Arosa 21:43, 18 June 2007 (UTC)
- Same here - not the same thing at all (although the RAM disk article contradicts what Arosa said...)! I'll remove the tag. Spamsara 22:07, 24 June 2007 (UTC)
- Against it here, for Soumyasch's reasons. It makes no sense to combine the two when they are inherently different technologies, even if they share some common applications. It would be like combining 'car' and 'bike' because both can be used to get to work and have wheels. - 203.206.177.188
- Same here - they are completely different. SSDs are dedicated hardware, whereas RAM disks can be created from system RAM or dedicated hardware. SSDs use non-volatile memory; RAM disks use volatile memory.
I am in favor of merging RAM disk and Solid-state drive. They are essentially the same devices: both use RAM to act like a disk drive. The fact that one is volatile and the other is not does not seem a significant difference. In which section would you put a RAM disk with a battery backup? --FromageDroit 13:59, 28 August 2007 (UTC)
RAM disks and solid-state drives are not the same, as RAM is volatile and SSDs are not, so merging is not needed; maybe a link to RAM disk at the bottom of the page, if it's not already there. Leexgx 15:53, 24 October 2007 (UTC)
We're arguing over semantics. The "RAM disk" page that's marked for possible merging itself states that it can be one of two things, so we're talking about two different things here. I suggest that a disambiguation page be created. It would point either to this SSD page, which could be merged with half of the RAM disk article, or to the RAM disk article's "software abstraction that treats a segment of random access memory (RAM) as secondary storage". I have used the latter extensively and can attest to its being a different beast entirely. And yes, SSDs that use volatile memory as a hardware component are indeed the same thing, just volatile. ◗●◖ falkreon (talk) 05:49, 10 December 2007 (UTC)
Read/Write Cycles (another one)
When calculating the endurance of the hardware, the article claims that "blocks are typically on the order of 1kb and an 8 GB disk will have 8,192 blocks", which, unless I'm very much mistaken, is off by a factor of 2^10: 8,192 × 1 KB = 8 MB, not 8 GB.
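A quick, illustrative check of that arithmetic:

```python
# Checking the article's claim: 8,192 blocks of 1 KiB is only 8 MiB, not 8 GiB.
KiB, GiB = 2**10, 2**30
print(8_192 * KiB)          # 8,388,608 bytes = 8 MiB
print(8 * GiB // KiB)       # an 8 GiB disk actually has 8,388,608 1-KiB blocks
```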
Read/Write Cycles (yet another one)
As the previous comment indicated, the description of wear leveling in the text is not only very naive but also very wrong.
The nature of NAND flash is such that one typically must erase a whole 128 KB block and then write its 64 2 KB pages consecutively. There are no in-place rewrites, so in order to rewrite a single 2 KB page, the entire 128 KB block must be erased. With a naive implementation, the endurance of a single block therefore drops by a factor of 64 (to about 160 rewrites?).
The main problem is not wear leveling itself, but how to avoid the need to rewrite existing data, that is, how to avoid fragmentation. This problem is not 100% solvable in general unless one can predict the future. One hardware solution is to cache some data in battery-backed RAM to avoid immediately rewriting it.
24.4.151.152 17:24, 29 September 2007 (UTC)
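The read-modify-write cost described above can be sketched as a toy model. This is hypothetical and simplified: a real controller would typically copy the still-valid pages to a spare, pre-erased block and update a mapping table rather than erase in place, but an erase cycle is still eventually consumed per block rewrite.

```python
# Model of the NAND geometry from the comment: 128 KiB erase blocks
# holding 64 pages of 2 KiB each. Rewriting one page costs a block erase.
PAGE, PAGES_PER_BLOCK = 2 * 2**10, 64
BLOCK = PAGE * PAGES_PER_BLOCK            # 131,072 bytes = 128 KiB

def rewrite_page(block_data, page_index, new_page):
    # NAND cannot overwrite in place: read out the block, erase it, then
    # write back all 64 pages sequentially -- one erase cycle consumed
    # just to change a single 2 KiB page.
    pages = [block_data[i*PAGE:(i+1)*PAGE] for i in range(PAGES_PER_BLOCK)]
    pages[page_index] = new_page
    return b"".join(pages)                # written after a full block erase

block = bytes(BLOCK)                      # a freshly erased block
block = rewrite_page(block, 3, b"\xff" * PAGE)
```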
The problem can be solved by more innovative file system designs, such as log-structured file systems. Fragmentation is a non-problem for such file systems: you can always predict with 100% certainty where you're going to write next, although you still can't predict when you're going to do so.
202.71.231.218 (talk) 2008-02-21T06:54 —Preceding comment was added at 06:57, 21 January 2008 (UTC)
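A toy illustration of the log-structured idea (a sketch only, not the on-disk format of any real file system such as JFFS2): because every update is appended at the head of the log rather than rewritten in place, the next write location is always known in advance, which pairs naturally with wear leveling.

```python
# Toy log-structured store: every update appends at the head of the log,
# and an index maps each key to its latest position. Old versions become
# garbage to reclaim later; nothing is ever rewritten in place.
log, index = [], {}

def write(key, value):
    index[key] = len(log)      # the next write location is always the log head
    log.append((key, value))

def read(key):
    return log[index[key]][1]

write("access.log", "entry 1")
write("access.log", "entry 2")   # appended, not overwritten in place
print(read("access.log"))        # "entry 2"
```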
Server mentions
There is insufficient mention of SSD usage in servers. Because the primary bottleneck on many types of servers is I/O from many users (and thus random I/O), SSDs are often considered superior to RAID HDD arrays, assuming one is willing to pay the price.
128.113.167.175 17:15, 1 October 2007 (UTC)
- Unfortunately, word of mouth is not enough for Wikipedia. We need published sources to cite. Can you help?--soum talk 17:41, 1 October 2007 (UTC)
- A Google search for "random I/O bottleneck" shows plenty of sources for this. I'm not experienced enough to edit this; please be my guest and help with it :)
Talrinys (talk) 23:50, 2 January 2008 (UTC)
MacBook Air
The high-end version of the MacBook Air uses a 64 GB solid-state drive. Some mention of this application might be warranted. —Preceding unsigned comment added by 67.116.239.156 (talk) 19:03, 15 January 2008 (UTC)
- The mention of the Air under Availability states that the 64 GB SSD "Boasts better reliability.... 80GB PATA drive." A couple of problems here: I haven't looked into it yet, but I seriously doubt that they aren't using SATA in new MacBooks. Secondly, and more importantly, where is it said that the SSD offers better reliability? I don't necessarily doubt it, but is there a source for this statement? Ferrariman60 (talk) 22:44, 15 January 2008 (UTC)
- http://store.apple.com/Apple/WebObjects/dkstore?node=home/shop_mac/family/macbook_air&cid=OAS-EMEA-KWG-DK_LAPTOP-DK&aosid=p202&esvt=GODKE&esvadt=999999-1200820-1079263-1&esvid=100504 says it's PATA. No hard drives even need SATA's speed yet, so it doesn't really matter, apart from the older standard, cables, etc. SSDs are simply more reliable; it's inherent in the technology itself. Unless it has bad chips, it simply can't crash randomly the way a mechanical hard drive can. However, it will die after a specific amount of transfers, which will be a lot easier to account for than the random failures we have now. Talrinys (talk) 12:06, 26 January 2008 (UTC)
Lots of other laptops have 64 GB SSDs, yet this article is now sprinkled with references specifically to the MacBook Air. Most of these references look rather redundant to me. --Romanski (talk) 22:05, 29 January 2008 (UTC)
Solid State and Tubes
Please take out the line in the intro referring to vacuum tubes. The term "solid state" has always referred to semiconductors and only semiconductors, not tubes. It has to do with the fact that dopants are diffused into the silicon while in a solid state, in a way that mimics diffusion in a liquid. 75.55.39.21 (talk) 21:25, 15 January 2008 (UTC) Sandy
Disadvantage? - "Vulnerability to certain types of effects,...magnetic fields..."
Working with some of the largest magnetic fields found in industry, SSDs are the only hard-drive replacement allowed in our production environment. Mechanical hard drives die almost instantly or are rendered unusable/erased.
The maximum field is ~2,000 gauss, with an average of 40-80 gauss in normal walkway areas. Some equipment sits in fields of ~80-400 gauss, all with flash-based storage. The magnetic field source is a DC current of ~200-350 kA.
The main issues under such conditions are with DC-DC converters, so it is the power supply in the SSD that is likely to fail rather than the flash itself.
Hard disks will NOT survive this environment. Flash-based SSDs are the only storage devices able to withstand these conditions.
Should this "disadvantage" be moved to an "advantage"?
Badbiki (talk)
The mentioned disadvantage is ridiculous: "Compared to normal HDDs (which store the data inside a Faraday cage)." Is there anything preventing SSDs from being put in the same cage? Are there any references saying that no SSDs are? I hate seeing such nonsense creep in and, more importantly, stay. SSG (talk) 00:02, 11 March 2008 (UTC)
Vandalism
A semi-protection vandalism lock needs to be placed. --Kozuch (talk) 21:13, 5 February 2008 (UTC)
Overall quality
Hi!
I apologize for the critical tone of this post, and I respect the people who have contributed to this article. However, I feel that it does not meet the minimal criteria for a respected Wikipedia article.
First of all, the article seems to lack focus. When we say SSD we mean permanent solid-state storage, not battery-backed DRAM; calling the latter non-volatile is just like saying DRAM is non-volatile because you own a UPS. Flash sticks and other solid-state media (SD, CF, MS), even though they are technically solid-state storage, are not solid-state DRIVES -- they're memory cards. So, first of all, FOCUS on the subject.
Secondly, the article lacks scientific information, or any data that a technically curious reader, computer science student, or researcher might find of value.
Thirdly, I believe *links* to benchmarks should be included (not actual figures, since we're not a hardware review site, unless the numbers are solid and confirmed); many potential SSD buyers may consult this page.
Fourth: the sections are fragmented, and the article lacks overall coherence. It seems to have been written by an uncoordinated team, with separate people's ideas popping up here and there. Sorry to say, but the article is messy.
Regrettably, I believe it should be marked for quality review in order to meet minimal quality standards.
Thank you, and I apologize for criticizing others' hard work.
Galanom (talk) 07:36, 12 February 2008 (UTC)
- Thank you for your suggestion. When you feel an article needs improvement, please feel free to make those changes. Wikipedia is a wiki, so anyone can edit almost any article by simply following the Edit this page link at the top. The Wikipedia community encourages you to be bold in updating pages. Don't worry too much about making honest mistakes — they're likely to be found and corrected quickly. If you're not sure how editing works, check out how to edit a page, or use the sandbox to try out your editing skills. New contributors are always welcome. You don't even need to log in (although there are many reasons why you might want to). -- ShinmaWa(talk) 18:38, 18 February 2008 (UTC)
Market analysis
Some "price per MB/time" or other graph would be nice...--Kozuch (talk) 00:12, 24 March 2008 (UTC)
Plagiarism
Most of the History section of this article seems to be lifted verbatim from the first cited source. See http://www.storagesearch.com/chartingtheriseofssds.html Notably, certain phrases have been removed, such as "In Q1 2002 - SSDs were 4th most popular subject with our readers." —Preceding unsigned comment added by 71.198.65.9 (talk) 14:51, 23 April 2008 (UTC)