Jay Taylor's notes
Does LVM eat my disk space or does df lie?
Let us do some research. I have noticed that difference before, but never checked in detail what to attribute the losses to. Have a look at my scenario for comparison: fdisk shows the following partition:
/dev/sda3 35657728 1000214527 964556800 460G 83 Linux
There will be some losses as my filesystem lives in a LUKS container, but that should only be a few MiB. df shows:
Filesystem Size Used Avail Use% Mounted on
/dev/dm-1 453G 373G 58G 87% /
(The LUKS container is also why /dev/sda3 does not match /dev/dm-1, but they are really the same device, with encryption in between, no LVM. This also shows that LVM is not responsible for your losses; I have them too.)
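As an aside, you can confirm that stacking yourself by listing the block-device tree. A minimal sketch, assuming the device names from above (the mapping name under /dev/mapper differs per system):

# The crypt mapping should appear nested directly under the
# partition, with no LVM layer in between:
lsblk /dev/sda3
# Inspect the mapping itself ("root" is a hypothetical mapping name;
# use whatever is listed under /dev/mapper on your system):
cryptsetup status root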
Now let's ask the filesystem itself on that matter. Calling tune2fs -l, which outputs a lot of interesting information about ext-family filesystems, we get:
root@altair ~ › tune2fs -l /dev/dm-1
tune2fs 1.42.12 (29-Aug-2014)
Filesystem volume name: <none>
Last mounted on: /
Filesystem UUID: 0de04278-5eb0-44b1-9258-e4d7cd978768
Filesystem magic number: 0xEF53
Filesystem revision #: 1 (dynamic)
Filesystem features: has_journal ext_attr resize_inode dir_index filetype needs_recovery extent flex_bg sparse_super large_file huge_file uninit_bg dir_nlink extra_isize
Filesystem flags: signed_directory_hash
Default mount options: user_xattr acl
Filesystem state: clean
Errors behavior: Continue
Filesystem OS type: Linux
Inode count: 30146560
Block count: 120569088
Reserved block count: 6028454
Free blocks: 23349192
Free inodes: 28532579
First block: 0
Block size: 4096
Fragment size: 4096
Reserved GDT blocks: 995
Blocks per group: 32768
Fragments per group: 32768
Inodes per group: 8192
Inode blocks per group: 512
Flex block group size: 16
Filesystem created: Wed Oct 14 09:27:52 2015
Last mount time: Sun Mar 13 12:25:50 2016
Last write time: Sun Mar 13 12:25:48 2016
Mount count: 23
Maximum mount count: -1
Last checked: Wed Oct 14 09:27:52 2015
Check interval: 0 (<none>)
Lifetime writes: 1426 GB
Reserved blocks uid: 0 (user root)
Reserved blocks gid: 0 (group root)
First inode: 11
Inode size: 256
Required extra isize: 28
Desired extra isize: 28
Journal inode: 8
First orphan inode: 26747912
Default directory hash: half_md4
Directory Hash Seed: 4723240b-9056-4f5f-8de2-d8536e35d183
Journal backup: inode blocks
Glancing over it, the first thing that should catch your eye is the Reserved block count. Multiplying that by the Block size (also from the output), we get the difference between df's Used+Avail and Size:
453 GiB - (373 GiB + 58 GiB) = 22 GiB
6028454*4096 Bytes = 24692547584 Bytes ~= 23 GiB
Close enough, especially considering that df rounds (using df without -h and repeating the calculation leaves only 16 MiB of the difference between Used+Avail and Size unexplained). For whom the blocks are reserved is also shown in the tune2fs output: root (Reserved blocks uid). This is a safety net to ensure that non-root users cannot make the system entirely unusable by filling the disk, and keeping a few percent of the disk unused also helps against fragmentation.
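Incidentally, the reservation can be tuned after the fact. A hedged sketch using tune2fs's -m flag, which sets the reserved percentage (mkfs.ext4 defaults to 5%); the device is the one from above:

# Show the current reservation, then shrink it to 1% of the blocks.
tune2fs -l /dev/dm-1 | grep -i reserved
tune2fs -m 1 /dev/dm-1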
Now for the difference between the size reported by df and the size of the partition. This can be explained by taking a look at the inodes. ext4 preallocates inodes, so that space is unusable for file data. Multiply the Inode count by the Inode size, and you get:
30146560*256 Bytes = 7717519360 Bytes ~= 7 GiB
453 GiB + 7 GiB = 460 GiB
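To redo this calculation on your own system without copying numbers by hand, a quick sketch that pulls both fields out of the tune2fs output shown above:

tune2fs -l /dev/dm-1 | awk '
    /^Inode count:/ {count = $3}   # total preallocated inodes
    /^Inode size:/  {size = $3}    # bytes per inode
    END {printf "%.1f GiB\n", count * size / 2^30}'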
Inodes hold each file's metadata; a directory entry simply maps a name to an inode. Let us ask mkfs.ext4 for the details (from man mkfs.ext4):
-i bytes-per-inode
    Specify the bytes/inode ratio. mke2fs creates an inode for every
    bytes-per-inode bytes of space on the disk. The larger the
    bytes-per-inode ratio, the fewer inodes will be created. This value
    generally shouldn't be smaller than the blocksize of the filesystem,
    since in that case more inodes would be made than can ever be used.
    Be warned that it is not possible to change this ratio on a
    filesystem after it is created, so be careful deciding the correct
    value for this parameter. Note that resizing a filesystem changes
    the number of inodes to maintain this ratio.
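To make that concrete, a hypothetical invocation (the device name is a placeholder, and mkfs of course destroys any data on it):

# One inode per 64 KiB of space instead of the default one per 16 KiB:
# a quarter of the inodes, and correspondingly more room for file data.
mkfs.ext4 -i 65536 /dev/sdX1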
There are different presets to use for different scenarios. On a file server with lots of Linux distribution images, it makes sense to pass e.g. -T largefile or even -T largefile4. What -T means is defined in /etc/mke2fs.conf, in those examples and on my system:
largefile = {
inode_ratio = 1048576
}
largefile4 = {
inode_ratio = 4194304
}
So with -T largefile4, the number of inodes is much lower than the default (the default ratio is 16384 in my /etc/mke2fs.conf). This means less space reserved for inodes and more space for data. When you run out of inodes, you cannot create new files. Increasing the number of inodes in an existing filesystem does not seem to be possible. Thus, the default number of inodes is rather conservatively chosen to ensure that the average user does not run out of inodes prematurely.
I just figured that out by poking at my numbers, let me know if it (does not) work for you ☺.
…tune2fs info). Reserved blocks: 2897612 * 4096 = ~11G (expected 218-(66+142)=10G) - OK. Inodes: 14491648 * 256 = ~3.5G (expected 232-218=14G) - NOT OK, there are still 10.5G missing. But I'm sure the tune2fs output has information explaining that. I'll try to analyze it more closely later.
– gumkins, Mar 16 '16 at 11:16
…resize2fs /dev/mapper/mint--vg-root; it will detect the size of the volume and act accordingly (i.e. if you did that in the past, it will just tell you "Nothing to do", otherwise it will grow the ext4 to the volume's size). Growing an ext4 filesystem works in place and online.
– Jonas Schäfer, Mar 16 '16 at 11:40
An easy place to check would be the logical volume (which does not have to be as large as the physical volume). Use lvdisplay to see the size. If it does not show a difference there, the usual explanation is that there is space reserved for use by root, which does not show in a df by a normal user.
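For example (a sketch; the volume group and logical volume names are taken from the resize2fs comment above and will likely differ on your machine):

# Compare the logical volume's size against the volume group's total;
# unallocated VG space would explain a "missing" chunk of the disk.
lvdisplay /dev/mint-vg/root
vgdisplay mint-vg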
…lvdisplay info, but it hasn't helped me to understand why df reports much less free space than there should be (-24GB).
– gumkins, Mar 16 '16 at 1:35