Sometimes the numbers a hard drive reports for size, used space, and available space don’t add up. The operating system reports something different from what we would expect. For example:
:~# df -h | grep data
Filesystem      Size  Used Avail Use% Mounted on
/dev/sdb1       788G  726G   23G  98% /var/data
If you do the math it should be 788 GB – 726 GB = 62 GB… but df (that is, Linux) reports only 23 GB available. Where are the remaining 39 GB?
The issue is the meaning of “available”. Available for whom? It turns out that on ext2, ext3 and ext4 filesystems, 5% of the space is reserved by default for the root user. That way, if the filesystem fills up (because of runaway logs or databases, for example) the system doesn’t grind to a halt: root still has room to work and rescue it. Checking the size of this partition, 5% of 788 GB is about 39.4 GB. There is the difference.
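The arithmetic is quick to verify. A minimal sketch, using the figures from the df output above and the ext2/3/4 default reservation:

```shell
# Reserved space = total size x reserved percentage.
# 788 GB comes from the df output above; 5% is the ext2/3/4 default.
total_gb=788
reserved_pct=5
echo "$(( total_gb * reserved_pct / 100 )) GB reserved"   # prints "39 GB reserved"
```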
Now, how can we reclaim those extra GBs? If a partition holds nothing but data, reserving 5% for the operating system makes no sense (unlike the / or /var partitions, where it does). There are two ways:
When the partition’s filesystem is created, you can use the -m option of the mkfs.ext[2,3,4] command. Example:
:~# mkfs.ext3 -m 0 /dev/sdb1
If the filesystem was already created, you can use the -m option with the tune2fs command. Here it’s important to unmount the partition first to avoid data loss. Example:
:~# umount /var/data
:~# tune2fs -m 0 /dev/sdb1
tune2fs 1.41.12 (17-May-2010)
Setting reserved blocks percentage to 0% (0 blocks)
:~# mount /var/data
Then check the available space one more time:
:~# df -h | grep data
Filesystem      Size  Used Avail Use% Mounted on
/dev/sdb1       788G  726G   63G  93% /var/data
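As a sanity check: with the reserve at 0%, the available space should be roughly size minus used. A quick sketch with the figures from the df output above (df’s rounding of the raw block counts to human-readable units explains why it prints 63G rather than exactly 62G):

```shell
# With no reserved blocks, Avail ~= Size - Used.
# 788 GB and 726 GB come from the df output above.
size_gb=788
used_gb=726
echo "$(( size_gb - used_gb )) GB expected available"   # prints "62 GB expected available"
```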
Now the available space corresponds to what we logically expected. By the way, how can we check the reserved space without using df? There are two ways:
:~# dumpe2fs -h /dev/sdb1 | grep Reserved
dumpe2fs 1.41.12 (17-May-2010)
Reserved block count:     0
:~# tune2fs -l /dev/sdb1 | grep Reserved
Reserved block count:     0
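Note that both tools report a block count, not a size. To turn it into something human-readable, multiply by the filesystem block size, which tune2fs -l also reports (as “Block size”). A sketch with illustrative values — the block count and 4 KiB block size below are hypothetical, roughly what a 5% reserve on a 788 GB partition would look like; on a real system, read both numbers from tune2fs -l /dev/sdb1:

```shell
# Reserved space in bytes = reserved block count x block size.
# Both values below are illustrative; on a real system take them
# from the "Reserved block count" and "Block size" lines of tune2fs -l.
reserved_blocks=10321920
block_size=4096
echo "$(( reserved_blocks * block_size / 1024 / 1024 / 1024 )) GiB reserved"   # prints "39 GiB reserved"
```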