In the informal (i.e. journalistic) technology press, and in online technology blogs and discussion forums, one commonly encounters anecdotal advice to leave some amount of space free on hard disk drives or solid state drives. Various reasons for this are given, or sometimes no reason at all. As such, these claims, while perhaps reasonable in practice, have a mythical air about them. For instance:
Once your disk(s) are 80% full, you should consider them full, and you should immediately be either deleting things or upgrading. If they hit 90% full, you should consider your own personal pants to be on actual fire, and react with an appropriate amount of immediacy to remedy that. (Source.)
To keep the garbage collection going at peak efficiency, traditional advice is to aim to keep 20 to 30 percent of your drive empty. (Source.)
I've been told I should leave about 20% free on a HD for better performance, that a HD really slows down when it's close to full. (Source.)
You should leave room for the swap files and temporary files. I currently leave 33% free and vow to not get below 10GB free HDD space. (Source.)
I would say typically 15%, however with how large hard drives are nowadays, as long as you have enough for your temp files and swap file, technically you are safe. (Source.)
I would recommend 10% plus on Windows because defrag won't run if there is not about that much free on the drive when you run it. (Source.)
You generally want to leave about 10% free to avoid fragmentation. (Source.)
If your drive is consistently more than 75 or 80 percent full, upgrading to a larger SSD is worth considering. (Source.)
Has there been any research, preferably published in a peer-reviewed journal, into either the percentage or absolute amount of free space required by specific combinations of operating systems, filesystem, and storage technology (e.g. magnetic platter vs. solid state)? (Ideally, such research would also explain the reason to not exceed the specific amount of used space, e.g. in order to prevent the system running out of swap space, or to avoid performance loss.)
If you know of any such research, I would be grateful if you could answer with a link to it plus a short summary of the findings. Thanks!
Answer
Has there been any research, preferably published in a peer-reviewed journal […]?
One has to go back a lot further than 20 years for this. This was a hot topic, at least in the world of personal computer and workstation operating systems, over 30 years ago: the time when the BSD people were developing the Berkeley Fast File System and Microsoft and IBM were developing the High Performance File System.
The literature on both, written by their creators, discusses the ways that these filesystems were organized so that the block allocation policy yielded better performance by trying to make consecutive file blocks contiguous. You can find discussions of this, and of the fact that the amount and location of the free space left for allocating blocks affect block placement and thus performance, in the contemporary articles on the subject.
It should be fairly obvious, for example, from the description of the block allocation algorithm of the Berkeley FFS that, if there is no free space in the current and secondary cylinder groups and the algorithm thus reaches the fourth-level fallback ("apply an exhaustive search to all cylinder groups"), the performance of allocating disc blocks will suffer, as will fragmentation of the file (and hence read performance).
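To make the shape of that fallback concrete, here is a minimal, self-contained C sketch of a four-level allocator in the style that the paper describes. It is emphatically not the actual BSD code: cylinder groups are reduced to bare free-block counters, the rotational-layout detail of level 1 is elided, and all of the names and numbers are illustrative.

```c
#include <stdio.h>

#define NCG      8      /* cylinder groups on this toy volume */
#define NO_BLOCK (-1)

static int freecount[NCG];          /* free blocks left in each group */

/* Take one block from a group; the group number stands in for a real
   block address in this sketch. */
static int cg_alloc(int cg)
{
    if (freecount[cg] > 0) {
        freecount[cg]--;
        return cg;
    }
    return NO_BLOCK;
}

static int ffs_style_alloc(int preferred)
{
    int blk, i, cg;

    /* Levels 1 and 2: try the preferred cylinder group.  (The real
       level 1 picks a rotationally optimal block within it.) */
    if ((blk = cg_alloc(preferred)) != NO_BLOCK)
        return blk;

    /* Level 3: quadratically rehash through the other cylinder groups. */
    for (i = 1; i < NCG; i *= 2)
        if ((blk = cg_alloc((preferred + i) % NCG)) != NO_BLOCK)
            return blk;

    /* Level 4: exhaustive search of every cylinder group.  A nearly
       full volume forces the allocator down here, which is the
       performance cliff that the free-space reserve exists to avoid. */
    for (cg = 0; cg < NCG; cg++)
        if ((blk = cg_alloc(cg)) != NO_BLOCK)
            return blk;

    return NO_BLOCK;                /* volume genuinely full */
}

int main(void)
{
    int n;
    for (n = 0; n < NCG; n++)
        freecount[n] = 2;           /* a tiny, nearly full toy volume */
    for (n = 0; n < 20; n++)
        printf("allocation %2d -> group %2d\n", n, ffs_style_alloc(0));
    return 0;
}
```

Run against a nearly full toy volume like this, the output shows allocations spilling out of the preferred group into rehashed and then exhaustively searched groups: exactly the allocation-time cost, and the scattering of a file's blocks, that the analyses above describe.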
It is these and similar analyses (these being far from the only filesystem designs that aimed to improve on the layout policies of the filesystem designs of the time) that the received wisdom of the past 30 years has built upon.
For example: The dictum in the original paper that FFS volumes be kept less than 90% full, lest performance suffer, which was based upon experiments made by the creators, can be found uncritically repeated even in books on Unix filesystems published this century (e.g., Pate2003 p. 216). Few people question this, although Amir H. Majidimehr actually did so the century before, saying that xe had in practice not observed a noticeable effect; not least because of the customary Unix mechanism that reserves that final 10% for superuser use, meaning that a 90% full disc is effectively 100% full for non-superusers anyway (Majidimehr1996 p. 68). So did Bill Calkins, who suggests that in practice one can fill up to 99%, with 21st-century disc sizes, before observing the performance effects of low free space, because even 1% of a modern-sized disc still provides plenty of unfragmented free space to play with (Calkins2002 p. 450).
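The reserve mechanism that Majidimehr points to is simple to picture. Here is a sketch of the gist, using illustrative names and the classic 10% figure; it is not lifted from any particular kernel.

```c
#include <stdbool.h>
#include <stdio.h>

#define MINFREE_PERCENT 10   /* the classic FFS default */

/* Illustrative only: decide whether an allocation may proceed. */
static bool may_allocate(long total_blocks, long free_blocks,
                         bool is_superuser)
{
    long reserve = total_blocks * MINFREE_PERCENT / 100;

    if (is_superuser)
        return free_blocks > 0;    /* root may dip into the reserve */
    return free_blocks > reserve;  /* others stop once the volume is
                                      90% full */
}

int main(void)
{
    /* A 1000-block volume with 80 blocks free, i.e. 92% full. */
    printf("ordinary user may allocate: %s\n",
           may_allocate(1000, 80, false) ? "yes" : "no");
    printf("superuser may allocate:     %s\n",
           may_allocate(1000, 80, true) ? "yes" : "no");
    return 0;
}
```

With such a rule in force, an ordinary user on a 90% full volume is already told that the disc is full, which is why the 90% dictum is largely moot for non-superusers in practice.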
Calkins's point is an example of how received wisdom can become wrong. There are other examples of this. Just as the SCSI and ATA worlds of logical block addressing and zoned bit recording rather threw out of the window all of the careful calculations of rotational latency in the BSD filesystem design, so the physical mechanics of SSDs rather throw out of the window the free-space received wisdom that applies to Winchester discs.
With SSDs, the amount of free space on the device as a whole, i.e., across all volumes on the disc and in between them, has an effect both upon performance and upon lifetime. And the very basis for the idea that a file needs to be stored in blocks with contiguous logical block addresses is undercut by the fact that SSDs do not have platters to rotate and heads to seek. The rules change again.
With SSDs, the recommended minimum amount of free space is actually more than the traditional 10% that comes from experiments with Winchester discs and Berkeley FFS 33 years ago. Anand Lal Shimpi gives 25%, for example. This difference is compounded by the fact that this has to be free space across the entire device, whereas the 10% figure is within each single FFS volume, and thus is affected by whether one's partitioning program knows to TRIM all of the space that is not allocated to a valid disc volume by the partition table.
It is also compounded by complexities such as TRIM-aware filesystem drivers that can TRIM free space within disc volumes, and the fact that SSD manufacturers themselves also already allocate varying degrees of reserved space that is not even visible outwith the device (i.e., to the host) for various uses such as garbage collection and wear levelling.
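To see how these several pools of spare area combine, here is a back-of-the-envelope sketch. Every figure in it is hypothetical, chosen only to show the arithmetic of adding up the factory reserve, unpartitioned (and TRIMmed) space, and TRIMmed free space within volumes.

```c
#include <stdio.h>

int main(void)
{
    /* All figures hypothetical; real devices vary. */
    double raw_nand_gib      = 256.0;  /* physical flash            */
    double exported_gib      = 240.0;  /* what the host sees        */
    double unpartitioned_gib =  24.0;  /* outside any volume, TRIMmed
                                          by a partitioner that
                                          knows to                  */
    double trimmed_free_gib  =  40.0;  /* free space inside volumes
                                          that a TRIM-aware
                                          filesystem reports        */

    double spare = (raw_nand_gib - exported_gib)  /* factory reserve */
                 + unpartitioned_gib
                 + trimmed_free_gib;

    printf("effective spare area: %.1f GiB (%.0f%% of raw NAND)\n",
           spare, 100.0 * spare / raw_nand_gib);
    return 0;
}
```

On those made-up numbers the effective spare area comes to roughly 31% of the raw NAND, which shows how a device can meet a 25%-style recommendation even when its visible volumes look fairly full.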
- Marshall K. McKusick, William N. Joy, Samuel J. Leffler, and Robert S. Fabry (1984-08). A Fast File System for UNIX. ACM Transactions on Computer Systems. Volume 2 issue 3. pp.181–197. Archived at cornell.edu.
- Ray Duncan (1989-09). Design goals and implementation of the new High Performance File System. Microsoft Systems Journal. Volume 4 issue 5. pp. 1–13. Archived at wisc.edu.
- Marshall Kirk McKusick, Keith Bostic, Michael J. Karels, and John S. Quarterman (1996-04-30). "The Berkeley Fast Filesystem". The Design and Implementation of the 4.4 BSD Operating System. Addison-Wesley Professional. ISBN 0201549794.
- Dan Bridges (1996-05). Inside the High Performance File System — Part 4: Fragmentation, Diskspace Bitmaps and Code Pages. Significant Bits. Archived at Electronic Developer Magazine for OS/2.
- Keith A. Smith and Margo Seltzer (1996). A Comparison of FFS Disk Allocation Policies. Proceedings of the USENIX Annual Technical Conference. Archived at harvard.edu.
- Steve D. Pate (2003). "Performance analysis of the FFS". UNIX Filesystems: Evolution, Design, and Implementation. John Wiley & Sons. ISBN 9780471456759.
- Amir H. Majidimehr (1996). Optimizing UNIX for Performance. Prentice Hall. ISBN 9780131115514.
- Bill Calkins (2002). "Managing File Systems". Inside Solaris 9. Que Publishing. ISBN 9780735711013.
- Anand Lal Shimpi (2012-10-04). Exploring the Relationship Between Spare Area and Performance Consistency in Modern SSDs. AnandTech.
- Henry Cook, Jonathan Ellithorpe, Laura Keys, and Andrew Waterman (2010). IotaFS: Exploring File System Optimizations for SSDs. IEEE Transactions on Consumer Electronics. Archived at stanford.edu.
- https://superuser.com/a/1081730/38062
- Accela Zhao (2017-04-10). A Summary on SSD & FTL. github.io.
- Does Windows trim unpartitioned (unformatted) space on an SSD?