Talk:Defragmentation
== To Defrag or not to Defrag? ==
Dear Contributors!
Since posting a PDF here is not really feasible, I recommend searching for "A Fast File System for UNIX".
The current file name is most probably "ffs.ps"; the older file name "05fastfs.ps" from ten years ago no longer exists.
GhostView will display it, and you can also convert it to PDF with that tool if you prefer Adobe Acrobat Reader etc.
I'll refer to this paper in discussing defragging.
Back in the old DOS/Win16 days, there may have been an advantage to contiguous files.
As on the PDP-11, binaries were slurped into RAM in one chunk; OS/2's and Windows' DLLs already posed a problem because a single binary was no longer all that got loaded.
See page 3 for why defragging the "old" 7th Edition file system was a non-issue under *NIX back then: it involved a dump, rebuild, and restore.
That page also mentions an idea published in 1976 which suggested regularly reorganising the disk to restore locality, something that could be viewed as defragging.
The VAX introduced a new virtual memory concept, demand paging. Prior to this, only swapping of segments, mostly 64 KB in size, was common.
Since then, binaries are read only far enough to set up the process; the "call" to main(argc,argv) (note that an OS *returns* to a process to facilitate multitasking) involves a page fault.
With some luck that page is in the buffer cache, but the first call to a function will surely result in another page fault, where the chance of finding the page in the buffer cache is greatly diminished and the disk block will surely have rotated away under the head.
Page 7 of the FFS paper mentions a rotationally optimal layout. In DOS days there were tuning programs to change the interleave factor; these became obsolete when CPUs got fast enough and DMA disk access (the paper calls this an I/O channel) became common, and an interleave factor of 1 became standard.
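To make the interleave idea concrete, here is a minimal sketch (my own toy illustration, not taken from the FFS paper; the sector counts are made up):

    # Hypothetical illustration: map logical sector numbers to physical slots
    # on one track using an interleave factor. With interleave 1 the mapping
    # is the identity; larger factors leave gaps between consecutive logical
    # sectors so that a slow CPU or controller can keep up.
    def interleave_layout(sectors_per_track, factor):
        layout = [None] * sectors_per_track
        pos = 0
        for logical in range(sectors_per_track):
            while layout[pos] is not None:            # skip slots already taken
                pos = (pos + 1) % sectors_per_track
            layout[pos] = logical
            pos = (pos + factor) % sectors_per_track
        return layout

    print(interleave_layout(8, 1))   # [0, 1, 2, 3, 4, 5, 6, 7]
    print(interleave_layout(8, 3))   # [0, 3, 6, 1, 4, 7, 2, 5]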
Also, booting becomes less sequential if you extend the term beyond loading and starting the kernel to hardware detection, and especially to loading and starting background processes and initialising the GUI and its processes.
Linux is still sequential up to the GUI start, which is parallelised everywhere, but some of the BSDs try to go parallel right after hardware detection, albeit with some provision for interdependencies.
OS/2, Windows, and Mac OS X switch early to a parallelised GUI mode; Mac OS <= 9 never showed a text screen, and I don't know whether Mac OS X ever shows one.
Then quite a bazillion processes contend for the disk arm; you may separate some of the *NIX subtrees onto different SCSI disks to limit this, albeit not by too much, and IDE disks have only recently become capable of detaching after a command to enable parallelism.
Partitions on the same disk may aggravate the problem because they force a long seek when the elevator algorithm has to switch partitions.
DLLs especially, due to their shared nature (shared libraries under *NIX are not that numerous and pervasive), are never in the vicinity of the binary calling them.
Thus defragmenting becomes practically irrelevant, at least for executables.
Buffer underrun protection is now common in any CD/DVD burner due to their high speeds, but the source of buffer underruns is more likely a process madly accessing the disk and/or the GUI than a fragmented disk, which is usually still faster than any high-speed CD/DVD.
So defragmenting becomes irrelevant for normal files as well.
Traditional defraggers run in batch mode, which may be tolerable on a workstation after business hours, but intolerable on an Internet server which is accessed 24/7.
Also, batch defraggers that don't require unmounting the disk and thus can run in the background have the problem that their analysis is likely to be obsolete by the time it finishes, so the defrag is suboptimal.
This is especially true for mail and/or news servers where bazillions of mostly small files are created and deleted in quick succession.
There would be the option of an incremental defragger which moves any file, once it is closed after writing, to the first contiguous free space beyond a boundary and fills the resulting gap with files from below that boundary.
Over time, file shuffling decreases as static files tend to land at the beginning of the disk and the dynamic ones behind them.
An initial batch defrag sorted in ascending order of modification date may shorten this settling process significantly.
However, this scheme also gets overwhelmed on mail and/or news servers.
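Purely as an illustration of the incremental scheme sketched above (the block layout, file table, and boundary handling here are invented for the example, and the gap-filling step is omitted):

    # Toy model: every file is a list of (name, start, length) extents on a
    # block device. A file that has just been closed after writing is moved
    # to the first contiguous free run at or past a "settled" boundary.
    def first_free_run(used_extents, disk_size, boundary, need):
        """Return the start of the first gap of >= need blocks past boundary."""
        taken = sorted((s, s + l) for _, s, l in used_extents)
        pos = boundary
        for start, end in taken:
            if end <= pos:
                continue
            if start - pos >= need:
                break
            pos = max(pos, end)
        return pos if disk_size - pos >= need else None

    def incremental_defrag(closed_file, layout, disk_size, boundary):
        """Rewrite one just-closed file as a single contiguous extent."""
        name, extents = closed_file                    # [(start, length), ...]
        total = sum(length for _, length in extents)
        dest = first_free_run(layout, disk_size, boundary, total)
        if dest is None:
            return layout                              # no room; leave it fragmented
        layout = [e for e in layout if e[0] != name]   # drop the old fragments
        layout.append((name, dest, total))             # one contiguous extent
        # Filling the vacated gap with files from below the boundary, as
        # described above, is left out to keep the sketch short.
        return layout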
As mentioned on page 3 of the FFS paper, defragging was too costly back then, so they decided to implement a controlled-fragmentation scheme, described mostly on page 8, with cylinder groups and heuristics to place files in them, large files being deliberately split up.
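Roughly, the placement heuristic can be pictured like this (a toy sketch of the idea only, not the actual BSD code; the group sizes and split threshold are invented):

    # Toy FFS-style placement: small files go into the cylinder group of
    # their directory, large files are deliberately split across groups so
    # that no single group fills up. All numbers are invented.
    GROUPS = 8
    GROUP_BLOCKS = 1024
    SPLIT_CHUNK = 256                     # blocks per group for a large file

    free = [GROUP_BLOCKS] * GROUPS

    def place(file_blocks, dir_group):
        if file_blocks <= SPLIT_CHUNK:    # small file: keep it near its directory
            g = dir_group if free[dir_group] >= file_blocks else \
                max(range(GROUPS), key=lambda i: free[i])
            free[g] -= file_blocks
            return [(g, file_blocks)]
        placement, remaining, g = [], file_blocks, dir_group
        while remaining > 0:              # large file: spread it in chunks
            if sum(free) == 0:
                raise RuntimeError("disk full")
            chunk = min(SPLIT_CHUNK, remaining, free[g])
            if chunk > 0:
                free[g] -= chunk
                placement.append((g, chunk))
                remaining -= chunk
            g = (g + 1) % GROUPS          # move on to the next cylinder group
        return placement

    print(place(100, 2))                  # [(2, 100)]
    print(place(600, 0))                  # [(0, 256), (1, 256), (2, 88)]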
OS/2's HPFS is definitely modelled after BFFS; Microsoft tries to hide that the same holds for NTFS.
I verified this on both NTFS 4 and 5.1 by loading a bazillion files, including large ones, onto the NTFS drive and firing up a defragger with a fine-grained block display.
A checkerboard pattern will show up, revealing BFFS-like strategies.
Defragging this spoils the scheme and only creates the need for regular defrag runs.
Thus even under NTFS defragging becomes a non-issue; this may be different for FAT.
Note that NTFS is still difficult to read with a dead Windows, and practically impossible to repair.
Bad idea for production systems.
The successor to NTFS has yet to be published, so no information about it is available; the only thing that is certain is that your precious data will again be practically lost with a dead Windows.
So it is reasonable to keep your precious data on FAT, or better on a Samba server.
They will be accessible to Windows malware anyway; that is the design fault of this OS.
Even Vista will not help; the "security" measures are reported to be such a nuisance that users will switch them off.
And malware will find its way in even with full "security" enabled.
However, XP runs the built-in defragger during idle time and places the files recorded in %windir%\Prefetch\ in the middle of free space and leaves enough gaps for new files.
Boot time is marginally affected by this.
To get rid of this, you must disable the Windows equivalent of the cron daemon, which may be undesirable.
You can disable the use of %windir%\Prefetch\ with X-Setup; then those files aren't moved, but the defragmentation will still take place.
Thus it is a better idea to leave these settings as they are; the file shuffling settles down reasonably quickly.
Defragging thus becomes an old DOS/Win16 legacy that is still demanded by users.
This demand is artificially kept up by defrag software providers who want to secure their income; even new companies jump on the bandwagon.
Back in DOS times, Heise's c't magazine closed its conclusion with the acid comment that defragging is mostly for messy people who like to watch their disks being tidied up, though only their disks, not their room or house.
Debian Sarge comes with an ext2fs defragger, unusable with ext3fs and requiring the disk to be unmounted, thus practically useless.
The mail address was dead, so no discussion was possible.
However, ext2fs already follows the ideas of BFFS, so defrag should be a non-issue there, too.
ReiserFS has somewhat fallen out of focus, since the fate of Hans Reiser is quite uncertain with that trial for the murder of his wife.
Also, tests by Heise's iX magazine revealed that balancing its trees creates an intolerable load on mail and/or news servers.
Rumour had it that a defragger was being considered.
Note also that the CHS scheme is internally broken by some disk vendors: Heise's c't magazine once found an IBM drive that went over one surface from rim to spindle and then over the next surface from spindle to rim, effectively creating an HCS scheme.
Also, disk platters are now down to a few, or even one, to cope with low-height profiles; even beyond laptops, disks are now 3.5" and 2.5" with heights below a third of the full-height 5.25" form factor (CD/DVD drives are half-height), and ten-platter drives such as Maxtor once built are unlikely to reappear.
Sector zoning also breaks the CHS scheme internally, but BFFS' cylinder groups are still beneficial in all these cases, since they spread disk access time and speed evenly anyway.
Conclusion: defraggers are obsolete now, an issue only for some software providers, and probably for hard-disk vendors.
Kind regards
Norbert Grün (gnor.gpl@googlemail.com) Gnor.gpl 12:05, 1 December 2007 (UTC)
== OS Centric ==
Fragmentation is a general challenge in the field of file system design. Some filesystems are more prone to fragmentation than others, and some feature integrated background defragmentation. It would be useful to expand this article to cover the subject of defragmentation in all of its forms. Gmaxwell 23:22, 27 Dec 2004 (UTC)
== Free space question ==
"A defragmentation program must move files around within the free space available in order to undo fragmentation. This is a memory intensive operation and cannot be performed on a file system with no free space."
I'd like to ask: why does there have to be free space on the volume being defragmented? What if there isn't? Can't the defragmenter move files around using free memory or free space on other volumes? Of course, if the system crashes during the defragmentation process, the file system is easier to recover when files are moved only on the volume being defragmented. But why must it be done that way?
- Defragmenting all the files is a risky process. The normal process is AFAIK to pick where the next block of the file should go, copy what is already there to free space on the drive, verify the copy, then change the file table to reflect the change. Then it copies the block of the file to the now-free spot, verifies it, and changes the file table. Doing it this way ensures that no matter what stage it crashes at, the file is still accessible, and at worst you have an extra copy of the data that needs to be removed. If you had no free hard drive space and stored the data in memory, you would have to _move_ the information to memory, meaning you can't verify it (memory corruption does happen from time to time), and if the system crashes you lose the data. As for copying to another partition/drive, it's possible, but then you are relying on two hard drives working, possibly with different partition types, and you can't keep the file always accessible during the defrag or in the event of a failure, because you can't have parts of the file spanning two file systems. 65.93.15.119
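A minimal sketch of that copy-verify-then-update sequence, for illustration only (the disk and file_table interfaces here are hypothetical, not any real defragmenter's API):

    # Illustrative crash-safe block relocation, as described above: copy the
    # block into free space, verify the copy, and only then update the file
    # table; the old block is reclaimed last.
    def relocate_block(disk, file_table, file_id, block_index, free_block):
        old_block = file_table.block_of(file_id, block_index)
        data = disk.read(old_block)
        disk.write(free_block, data)                  # 1. copy into free space
        if disk.read(free_block) != data:             # 2. verify the copy
            raise IOError("verify failed; file table left untouched")
        file_table.remap(file_id, block_index, free_block)  # 3. commit the move
        file_table.mark_free(old_block)               # 4. reclaim the old block
        # A crash before step 3 leaves the original mapping intact; a crash
        # between steps 3 and 4 merely leaves a stale copy to clean up later.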
== OS X and auto-defragmentation ==
OS X and its built-in defragmentation deserve some sort of mention or expansion in this article. I still think it would be worthwhile to go into detail on how it works. —Rob (talk) 15:13, 4 April 2006 (UTC)
FFS ?
OS X is Unix-like. Mike92591 01:44, 31 August 2006 (UTC)
== Defragmentation software for Windows ==
Perhaps it's worth mentioning O&O Defrag, which greatly improves upon the standard defragmentation software included. You can read more about it here: http://www.oo-software.com/en/products/oodefrag/info/
Please note: "O&O Defrag V8 Professional Edition is compatible with Windows XP, Windows 2000 Professional, and Windows NT 4.0 Workstation. The Professional Edition cannot be used on Windows 2003/2000/NT servers.
O&O Defrag V8 Professional Edition and O&O Defrag V8 Server Edition cannot be installed on computers running Windows 95/98/ME."
I will not be editing the article, do as you please. boaub
I tried adding a stub article on O&O Defrag, but it got deleted, citing "notability". I wasn't about to explain that O&O software is a European company and is therefore not that well known in the USA, where the deleter seemed to come from. Donn Edwards 15:32, 7 June 2007 (UTC)
- Please see the notability guideline on how to establish notability. -- intgr #%@! 18:50, 7 June 2007 (UTC)
== Windows XP Defrag vs. Other Tools ==
Is there a big advantage to using a dedicated defragmentation tool rather than the built-in one in Windows XP? 172.174.23.201 22:56, 30 November 2006 (UTC)
- The internal defragmenter can have problems on severely fragmented partitions which contain mostly large files. During defragmentation it requires that a block of contiguous free space be available as large as the file it is trying to defragment. However, sometimes the free space on a drive can be so fragmented that it is impossible for the defragmenter to defragment even a single file. This only occurs on partitions that hold many large files which have been growing (in parallel) for long periods of time (like a download partition) and which are then subsequently removed to be stored elsewhere.--John Hendrikx 09:44, 28 May 2007 (UTC)
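As a toy illustration of why that requirement can become impossible to satisfy (the free-space bitmap here is invented):

    # Toy free-space bitmap: True = free block, False = used. A defragmenter
    # that needs one contiguous free run at least as large as the file fails
    # when the largest free run is smaller than the file, no matter how much
    # free space exists in total.
    def largest_free_run(bitmap):
        best = run = 0
        for free in bitmap:
            run = run + 1 if free else 0
            best = max(best, run)
        return best

    bitmap = [i % 2 == 0 for i in range(100)]   # 50% free, but in 1-block gaps
    print(sum(bitmap))                          # 50 free blocks in total
    print(largest_free_run(bitmap))             # 1 -> even a 2-block file is stuck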
== New article titled "file system fragmentation" ==
I was somewhat dissatisfied with this article in its current state, so I decided to approach the problem from another angle in the new article "file system fragmentation". More details about my motivation at Talk:File system fragmentation#Reasons for duplicating "defragmentation" article. Please don't suggest a merge just yet. I would like to hear anyone's thoughts, comments, and criticisms though. -- intgr 03:58, 14 December 2006 (UTC)
== Common myth and unreliable sources ==
I am fairly confident that the sections claiming that Unix file systems (and/or ext2+) don't fragment when 20% of the space is kept free are nothing more than a myth and wishful thinking. The sources cited by this article do not appear to be written by people particularly competent in the field of file systems, and thus do not qualify as reliable sources per WP:RS. I have read quite a few papers for the article I mentioned above, and I can promise to eat my shorts the day someone cites a real file system designer claiming this, as it will be a huge breakthrough in file system research. :)
Does anyone disagree, or am I free to remove the offending claims? -- intgr 12:37, 14 December 2006 (UTC)
- Done, removed -- intgr 02:14, 24 December 2006 (UTC)
- I can confirm it is a myth (disclaimer: I wrote Smart Filesystem). When using ext in a certain way it can fragment just as badly as most other filesystems. The usage pattern that is very hard to handle for most filesystems is that of many files growing slowly in parallel over the course of weeks. These files will be severely fragmented, as they tend to weave patterns like ABCBAABCABBCCA when stored on disk due to their slowly growing nature. The files can end up being several dozen megabytes in size, so even if a filesystem tries to pre-allocate space for slowly growing files, the fragmentation can get very bad. From there it only gets worse, because when such a file is removed, it will leave many gaps in the free space, which will compound the fragmentation when that space needs to be reused. Keeping a certain amount of space always free can help to reduce fragmentation but will not prevent this usage pattern from eventually degenerating.--John Hendrikx 09:55, 28 May 2007 (UTC)
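A tiny simulation of that usage pattern, just to make the interleaving concrete (the file names, growth order, and append-only allocator are all made up):

    # Toy append-only allocator: each write takes the next free block, so a
    # few files growing slowly in parallel end up interleaved on "disk".
    import random

    random.seed(1)
    disk = []                             # disk[i] holds the owning file's name
    files = ["A", "B", "C"]
    for _ in range(24):                   # 24 small appends arriving in mixed order
        disk.append(random.choice(files))

    def fragments(layout, name):
        """Count runs of consecutive blocks belonging to one file."""
        return sum(1 for i, f in enumerate(layout)
                   if f == name and (i == 0 or layout[i - 1] != name))

    print("".join(disk))                  # an interleaved pattern like ABCBAABC...
    for f in files:
        print(f, "is stored in", fragments(disk, f), "fragments")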
Fragmentation is very rarely an issue on *nix filesystems. It's not that they don't fragment, but rather that a combination of allocation algorithms and reordering of requests to optimise head movements effectively negates the issue. This is effective enough that the standard procedure amongst Solaris admins to defragment a volume is to back up and restore. Why this is still an issue with Windows/NTFS, I have no clue; it obviously shouldn't be. A good explanation here: http://www.salmar.com/pipermail/wftl-lug/2002-March/000603.html Lnott 14:03, 7 February 2007 (UTC)
- Note a few things:
- The mailing list post compares ext2 to the FAT implementation of MS-DOS.
- All modern operating systems use quite similar approaches to readahead, page cache, elevator algorithms, etc.
- Neither is fragmentation a big issue on non-Unix file systems. (Do you have a reliable source supporting your claim of NTFS fragmenting more than Unix file systems?)
- How well a file system performs depends primarily on access patterns. Under certain loads, fragmentation can become a big issue with any file system, hence defragmenters are necessary.
- I cannot offer a valid counterargument about NTFS allocation algorithms, as very little is known about its implementation. The article file system fragmentation documents proactive fragmentation reduction techniques (though cylinder groups are still on the TODO list). But in short, it's all about access patterns, not fragmentation.
- -- intgr 19:20, 18 February 2007 (UTC)
== Myths ==
The article is not bad as it stands -- but it avoids mentioning the most culturally important aspects of defragging.
As the article states, fragmentation is properly entirely a filesystem speed/performance issue. As the article does not mention, the performance impact of using a fragmented system may actually be minor. There is very little credible objective real-world information available about this. The article does not mention that many Windows users believe that it is very important to defrag very frequently. The article does not mention that defragmentation is risky, since it involves moving all the files around.
The article suggests that newer larger hard drives have more of a problem with fragmentation. The opposite may be true: fragmentation may be less of a problem when volumes have lots of free space, and new hard drives are so large that many people are only using a very small percentage of the space.
Most Windows users imagine that defragging is necessary to keep their systems from crashing. Vendors and magazine article writers encourage this delusion. But no properly functioning OS will crash because files are fragmented -- computers are designed to function this way. If they couldn't, they would not allow files to be fragmented to begin with!--69.87.193.53 18:50, 18 February 2007 (UTC)
- "The article does not mention that defragmentation is risky, since it involves moving all the files around."
- Because it's not. At least NTFS logs all block moves to the journal, so even if your computer loses power during defragmentation, it can restore a consistent file system state after booting.
- It depends on the defragmenter used and what filesystem you are defragmenting. For example, ReOrg 3.11 for Amiga systems used to scan the entire filesystem, calculate the optimal block layout in memory (for every block, including metadata), and then make passes over the disk (using an algorithm that moved like an elevator over the disk), caching as much as possible in memory on each pass and writing out the data to the new locations as the "elevator" passed over them. It was a very satisfying process to see in action, and it was very fast due to the large caches used, but also very risky, since a crash during defragmentation would leave the filesystem in a completely garbled state, not to mention losing everything that was cached in memory at the time.--John Hendrikx 10:07, 28 May 2007 (UTC)
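A rough sketch of that plan-in-memory-then-sweep approach, purely illustrative (ReOrg's actual algorithm and data structures are not public as far as I know, and the ordering problems that made it risky are glossed over here):

    # The target layout is computed up front as {source_block: destination};
    # each pass then writes blocks in ascending destination order, so the
    # head sweeps the disk like an elevator. Nothing is journalled, which is
    # why a crash mid-pass can garble the filesystem.
    def elevator_pass(disk, moves, cache_limit):
        """disk: list of block contents; moves: {source: destination}."""
        pending = sorted(moves.items(), key=lambda kv: kv[1])   # by destination
        batch = pending[:cache_limit]
        cache = {src: disk[src] for src, _ in batch}            # read ahead into RAM
        for src, dst in batch:
            disk[dst] = cache[src]          # write as the "elevator" reaches dst
        return dict(pending[cache_limit:])  # whatever is left waits for the next pass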
- "The article suggests that newer larger hard drives have more of a problem with fragmentation. The opposite may be true"
- Fair point, though that doesn't apply to enterprise use (e.g., large file server clusters). -- intgr 19:29, 18 February 2007 (UTC)
- "If your disks use NTFS then you're even safe when the computer crashes in the middle of defragging. Nevertheless, it's still a good idea to backup before defragmenting, just like with other defragmenters, because the heavy use of the harddisk may trigger a hardware fault."[1]
- It is just plain stupid to take the giant risk of moving around all of the files on your disk unless you have a full independent backup. And unless you have a damn good reason. And since most users will never understand what is involved, it seems irresponsible to encourage them to get involved with defragging on a regular basis.--69.87.194.65 01:25, 28 February 2007 (UTC)
- "If your disks use NTFS then you're even safe when the computer crashes in the middle of defragging. Nevertheless, it's still a good idea to backup before defragmenting, just like with other defragmenters, because the heavy use of the harddisk may trigger a hardware fault."[1]
- Yes, use of the hard disk may trigger a hardware fault whether you're defragmenting or using the disk for other purposes, so you should have a backup anyway. Even if you are not using your disk and it's collecting dust on a shelf, you'd still better have a backup, since your house can burn down.
- The majority of premature hard disk failures are caused by manufacturing errors and mechanical impacts. Manufacturing errors mean that the disk will fail sooner or later anyway. The most prevalent kind of hard disk failures, plain simple media failures, are not dependent on the use of the hard disk at all. Defragmentation does not incur any "giant risk", merely a slightly higher chance of spotting an error sooner rather than later. Also note that decent defragmentation software will minimize the amount of files that would actually need to be relocated, and will not do anything if there is nothing to defragment. (while indeed some inadequate commercial defragmentation software will relocate all files on the disk, which is obviously redundant and unnecessary). -- intgr 10:05, 28 February 2007 (UTC)
I say toss the entire section; there may have been no use for defrag back when hard drives were only 50 MB, but I just bought some 750 GB hard drives before the holiday last year and even the manuals tell me to defrag my drives, as it increases the life of the disk. Concerns over moving files around are unwarranted. Simply put, in a Windows environment, every time you load a file there is a slight change to it. That's even more the case with Microsoft-specific files, such as Office documents and files loaded in Media Player, where Microsoft's software makes a tiny note in the file each time it is loaded. Beyond that, Windows by its very nature constantly moves files around the disk. The MSDN forum has an entire spread dedicated to discussing this fact. The statements themselves are wholly POV, as what one person does or does not notice depends on what they use the drive for, the size of the drive, their perceptive abilities, and their habits. If the statements are not tossed, then they should be moved to another section, listing the varying degrees of perceived performance gain as the single verified con of defragmenting, against the world of good it does. —The preceding unsigned comment was added by Lostinlodos (talk • contribs).
== Defrag and performance improvements ==
I reverted this recently-added statement from the article since it did not tie into the text, although it does point out that the performance results/improvements are not as black and white as the article makes it sound.
- Although it may produce substantial filesystem speed improvements in some cases, for the typical Windows user the overall performance improvement may be minor or unnoticeable( this information IS NOT CORRECT (http://findarticles.com/p/articles/mi_m0FOX/is_13_4/ai_55349694).
Although I do realize that even though benchmarks may point out an X% increase in performance, the user might not notice it, since the performance was never a problem to begin with. But anyway, if anyone can find the time, some of this should find its way to the article. And note that the current POV in the particular section is unsourced as well. -- intgr 22:14, 6 March 2007 (UTC)
There is no doubt that in some circumstances defragging may result in giant improvements in some measured performance figures. That has almost no bearing on which real-world users, in which real-world circumstances, will actually experience noticeable improvements from defragging, and how often they should do it. Companies that sell defragging programs are quite biased sources, and companies that are paid to advertise such software are also suspect. (In the world of technology, differences of a few percent are often considered important. In the world of humans, differences of less than 10% often are not noticed, and it may take a difference of about a factor of two to get our attention. An order of magnitude -- a factor of ten -- now there is a real difference!)-69.87.200.164 21:23, 5 April 2007 (UTC)
Defragmentation on NTFS volumes is only an issue when the OS must make many small writes over a period of time, e.g. a busy Exchange or database server. The Wikipedia article should not give the average user the impression that defragmentation will usually result in performance gains. The edge cases where defragging can be useful might be worth a brief mention. Many people falsely believe that every so often they need to defrag their drive, when actually that work is done automatically by the OS, and even then the effect will not usually be noticeable. The "placebo effect" of defragging your computer "manually" by watching a multi-colored representation of the hard drive layout as it slowly rebuilds into a human-recognizable pattern probably accounts for why people persist in believing the defragging myth. BTW, the article cited above purporting to prove the need for defragmentation was from 1999, which is too old. 72.24.227.120 19:52, 10 April 2007 (UTC)
Defragmentation alone will do very little. Proper partitioning is just as important, and I agree that defragging one huge C: drive will not help much if you have too little memory (constant swapping) or if you are using software that uses a lot of temporary disk space, like Photoshop, with no dedicated partitions defined for it. As for the noticeability, I just came from a computer that has not been defragged for two years and has 80% of its disk full. It loads XP in about 2 minutes, whereas this one loads it in almost no time. The same goes for saving data to the disk - it takes ages to save a Photoshop file on the heavily fragmented one, whereas this one saves it in a couple of seconds. It seems to me that the critics have only been playing with almost-empty file systems and simple utilities and NOT the real world of A3-sized Photoshop graphics, for instance. But as said, you need proper partitioning there as well.
== Free space defragmentation ==
What I didn't see in the article is that there are really two goals in defragmentation: to defragment files and to defragment the remaining free space. The latter is by far the more important one, to prevent new fragmentation from occurring. If free space is severely fragmented to begin with, the filesystem will have no choice but to fragment newly added files. Free space fragmentation is the biggest cause of file fragmentation for almost any filesystem. It is the reason why experts as a general rule say you should always keep a certain amount of free space available to prevent fragmentation. Letting disks fill up, then removing files and letting them fill up again, will result in bad fragmentation, as the filesystem will have no choice but to fill the remaining gaps to store data instead of making a more educated guess among other free areas. Note that even this rule will not remove the need for defragmentation, it just delays it.--John Hendrikx 10:22, 28 May 2007 (UTC)
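As an illustration of how fragmented free space forces file fragmentation (the gap list and the first-fit allocator are invented for the example):

    # Toy first-fit allocation over a free-space map full of small gaps: a
    # new file larger than any single gap *must* be split into fragments.
    free_gaps = [(0, 2), (10, 3), (20, 2), (30, 4)]    # (start, length) gaps

    def allocate(free_gaps, size):
        """One-shot allocation; returns the extents the new file ends up in."""
        extents, remaining = [], size
        for start, length in free_gaps:
            if remaining == 0:
                break
            take = min(length, remaining)
            extents.append((start, take))
            remaining -= take
        if remaining:
            raise IOError("not enough free space")
        return extents

    print(allocate(free_gaps, 8))
    # [(0, 2), (10, 3), (20, 2), (30, 1)] -- one 8-block file in four fragments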
Please do add this to the article - and cite some sources. Tempshill 20:44, 21 August 2007 (UTC)
== Windows Defrag Utilities ==
The list of utilities grew to the following, which is fairly comprehensive:
Commercial (Windows):
- Abexo Defragmenter Pro Plus [2]
- Ashampoo Magical Defrag [3]
- Buzzsaw (on the fly defragmenter) and DIRMS (removes tiny spaces) [4]
- DefragMentor Premium [5]
- Diskeeper [6]
- hsDefragSaver [7]
- Mindsoft Defrag [8]
- mst Defrag [9]
- O&O Defrag [10]
- Paragon Total Defrag 2007 [11]
- Power Defrag [12]
- PerfectDisk [13]
- PerfectDisk Rx Suite [14]
- Rapid File Defragmentor [15]
- UltimateDefrag [16]
- Vopt [17]
Freeware (Windows):
- Auslogics Disk Defrag: A free defragmentation program for NTFS.[18]
- Contig: A command-line based defragmentation utility.[19]
- DefragMentor Lite CL: a command line utility.[20]
- IOBit SmartDefrag [21] (Beta software)
- JkDefrag: A free (GPLed) disk defragment and optimize utility for Windows 2000/XP/2003/Vista.[22]
- Microsoft's Windows Disk Defragmenter (already included in most versions of Windows)
- PageDefrag: Runs at startup and attempts to defragment system files that cannot be defragmented while they are in use.[23]
- Power Defragmenter GUI [24]
- Rapid File Defragmentor Lite: command line utility.[25]
- SpeeDefrag [26]
- SpeedItUp FREE [27]
So which ones to mention, and which ones to ignore? The most commonly reviewed products are:
But articles on the second two are regularly deleted on Wikipedia because of "notability", even though the computer press regularly reviews them. I agree that the others are "also-rans", but surely an encyclopedia is supposed to be thorough rather than vague. If the question of links is a problem, then delete the links, not the list of products.
The most notable freeware products are JkDefrag, Contig and PageDefrag, but why should the other products be ignored simply because they can be ignored?
The deletions in this section are particularly draconian and heavy-handed, IMHO. Donn Edwards 20:57, 9 June 2007 (UTC)
- "But articles on the second two are regularly deleted on Wikipedia, because of "notability""
- The speedy deletion criterion states "article about a person, group, company, or web content that does not assert the importance of the subject" — not that the subject is non-notable. Please read the notability guideline on how to establish notability. The notability criterion is often used as an "excuse" for deleting articles that have other problems as well, such as WP:NPOV or WP:V, since the notability criterion is much easier to assess objectively than other qualities of an article.
- As for the removal of the list, my edit comment said it all: Wikipedia's purpose is not to be a directory of products, links, etc. If there is no further information about the given products on Wikipedia, then the list is not much use (per WP:NOT). This is not the kind of "thoroughness" Wikipedia is after — I'd much rather see people working on the substance of the article (which is not very good at all) than on lists. Such indiscriminate lists are also often a target for spammers and advertising for products that in fact are not notable — I intensely dislike them.
- I do not understand what you mean with "draconian and heavy handed" though; I thought the list was bad and boldly removed it; it was not a "punishment" for anything. If there's a good reason to restore the list then reverting the edit is trivial. -- intgr #%@! 23:39, 9 June 2007 (UTC)
What's so "notable" about Diskeeper? The Scientology uproar? The problem with the rapid deletion of other articles is that even if there is a stub in place, the article gets deleted, so no one ever gets a chance to contribute or discuss. It just gets nuked. Some of the entries I made didn't even last a week. That's draconian. Donn Edwards 17:11, 11 June 2007 (UTC)
- Then write a stub that asserts the notability of its subject, and avoid getting speedily deleted for non-notability. It also helps if you cite a couple of sources in the article. Tempshill 20:44, 21 August 2007 (UTC)
== defrag meta-data ==
FAT file systems suffer badly from fragmentation of the directory structure. Defrag tools that defragmented the directory structure made a dramatic difference; cut-down defrag tools that did not made very little difference except to people intensively using very large files. NTFS has a different and more complex meta-data system, and NTFS directories are less susceptible to fragmentation. File fragmentation on NTFS, as on a FAT or Unix file system, normally makes very little difference, and defrag tools that only defrag the files normally make very little difference either. You need a third-party defrag utility to defrag the meta-data on an NTFS partition, and if the meta-data has become badly fragmented, defragmenting it makes a dramatic difference. If you optimise a file to one location on disk but the meta-data is spread out, you've gained nothing, and a defrag that only defrags the files does nothing for you.
PerfectDisk claims to be the only defrag utility that successfully defrags metadata. RitaSkeeter 17:00, 19 September 2007 (UTC)
== Risks for defragmentation? ==
Aren't there risks associated with overusing defragmentation on a hard drive? WinterSpw 17:24, 20 July 2007 (UTC)
- I suppose that excessive churning increases the wear on the parts, but can't cite a source. Tempshill 20:44, 21 August 2007 (UTC)
- Defragmentation software vendors claim that less seeking with a defragmented drive more than compensates for the wear done during the defragmentation process; I don't think any studies have been done to test this. However, Google's study on hard disk reliability indicates that there is very little correlation between hard disk load and failure rate. -- intgr #%@! 22:33, 22 August 2007 (UTC)
- Steve Gibson, author of Spinrite, claimed on the Security Now podcast that "excessive" defragmentation would only reduce the lifetime of the drive to the extent that the drive was being used during the defragmentation process. HTH! RitaSkeeter 17:03, 19 September 2007 (UTC)
== Apple II ==
The Apple II had a couple of utilities that would defrag its 5.25" floppy disks. I was glad to see the attempts in this article to approach this as a general issue for all operating systems and disk types. I think the article might benefit from brief mentions of non-hard-disk defragmenting: the lack of a benefit for flash drives would be useful to mention. Tempshill 20:44, 21 August 2007 (UTC)
- Agreed; conceptually, defragmentation (and fragmentation) relates to file systems, not storage media (e.g. floppies or disks). This article should be more careful with its use of the term "disk". -- intgr #%@! 16:26, 28 August 2007 (UTC)
== History of the Defragmenter ==
Does anyone know about the history of the defragger? I tried looking it up, but don't care enough to look further. However, a mentor of mine told me that the defrag program was designed for a contest that Intel threw a decade or three ago (not sure at all)...
Intel: We have a super good processor! No one can make it max out! No one! In fact, we will give prize money to whoever CAN max it out, because we just KNOW no one can! Some guy: Hah! I made a program that defragments your hard disk! It's useless, but it's so energy consuming that it maxes out your processor! Intel: Dangit, there goes our money! Some guy: Bwahahahah!
Well, I'll assume people saw a big use in it, but obviously it was too slow at the time.
Anyway, I'd research it myself, but I can't find anything and don't care to delve further. I'd put it up, but I don't remember the whole deal, don't have sources, and don't know if this story is even right!
DEMONIIIK (talk) 03:58, 5 December 2007 (UTC)
The earliest defragmenter I know of was made for the DEC PDP-11 (RSX or RSTS/E OS) by Software Techniques in the early 1980s. They then created the first disk defragmenter for the DEC VAX running VMS in 1985. talk 12:56, 3 February 2008 (UTC)
== Misleading explanation, lack of discussion of alternate strategies ==
I just skimmed through this article; it seems to have several fairly serious problems. The foremost is that the explanation of fragmentation describes how the old, deprecated MS-DOS file system (FAT, FAT16, FAT32) works. This filesystem was so badly prone to fragmentation that even Microsoft abandoned it ... what ... over a decade ago now? ... by introducing NTFS and slowly incorporating it into its Windows product line.
I'd like to see a discussion of fragmentation-avoidance strategies. I think many of these have now been developed; I was hoping to see some discussion of their merits and relative performance. A discussion of things like wear-levelling for USB sticks (flash RAM) would also be appropriate. (Think of wear-levelling as fragmentation avoidance in time rather than in space; the goal is to use sectors evenly, so that one part of the flash doesn't wear out prematurely.) In general, flash drives don't have the seek-time problem (or have less of one), so fragmentation is not the problem it is for hard drives; rather, it's the wear that becomes more of a consideration. 67.100.217.180 (talk) 19:19, 5 January 2008 (UTC)
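For illustration only, a toy sketch of the wear-levelling idea (the mapping table and erase-count bookkeeping are invented and far simpler than any real flash translation layer):

    # Toy wear-levelling: each rewrite of a logical block is redirected to
    # the least-worn unused physical block, spreading erases evenly.
    PHYS_BLOCKS = 8
    erase_count = [0] * PHYS_BLOCKS
    mapping = {}                          # logical block -> physical block

    def write_block(logical):
        mapping.pop(logical, None)        # release the previously used block
        used = set(mapping.values())
        candidates = [p for p in range(PHYS_BLOCKS) if p not in used]
        target = min(candidates, key=lambda p: erase_count[p])
        erase_count[target] += 1
        mapping[logical] = target

    for _ in range(32):                   # rewrite logical block 0 repeatedly
        write_block(0)
    print(erase_count)                    # [4, 4, 4, 4, 4, 4, 4, 4]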
== NTFS disadvantage: smaller default cluster size ==
NTFS has smaller default cluster sizes than FAT, which increases the probability of fragmentation. --Qaywsxedc (talk) 06:03, 29 February 2008 (UTC)