Jay Taylor's notes


Does LVM eat my disk space or does df lie?

Original source (unix.stackexchange.com)
Tags: linux disk-space high-disk-usage lvm linux-volume-manager unix.stackexchange.com
Clipped on: 2019-06-04

My guess would be swap space: grep swap /etc/fstab. Can you also paste the output of lvdisplay? – Jarrod Mar 16 '16 at 1:21
  • Hello Jarrod, there is no swap partition. Please see "UPDATE" section of the question. – gumkins Mar 16 '16 at 1:32
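
A quick way to run the check Jarrod suggests; swapon --show and free -h are not mentioned in the thread, but they are standard ways to confirm whether any swap is active at all:

    grep swap /etc/fstab   # any swap configured in fstab?
    swapon --show          # prints nothing if no swap is in use
    free -h                # the Swap: line should read 0B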

    Let us do some research. I have noticed that difference before, but never checked in detail what to attribute the losses to. Have a look at my scenario for comparison: fdisk shows the following partition:

    /dev/sda3       35657728 1000214527 964556800  460G 83 Linux
    

    There will be some losses as my filesystem lives in a LUKS container, but that should only be a few MiB. df shows:

    Filesystem      Size  Used Avail Use% Mounted on
    /dev/dm-1       453G  373G   58G  87% /
    

    (The LUKS container is also why /dev/sda3 does not match /dev/dm-1, but they are really the same device, with encryption in between and no LVM. This also shows that LVM is not responsible for your losses; I have them too.)
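
    (To see that stacking for yourself, lsblk prints the whole chain from the partition down to the dm device; the column selection here is just one convenient choice:)

    lsblk -o NAME,TYPE,SIZE,MOUNTPOINT /dev/sda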

    Now let's ask the filesystem itself about the matter. Calling tune2fs -l, which outputs a lot of interesting information about ext-family filesystems, we get:

    root@altair ~ › tune2fs -l /dev/dm-1
    tune2fs 1.42.12 (29-Aug-2014)
    Filesystem volume name:   <none>
    Last mounted on:          /
    Filesystem UUID:          0de04278-5eb0-44b1-9258-e4d7cd978768
    Filesystem magic number:  0xEF53
    Filesystem revision #:    1 (dynamic)
    Filesystem features:      has_journal ext_attr resize_inode dir_index filetype needs_recovery extent flex_bg sparse_super large_file huge_file uninit_bg dir_nlink extra_isize
    Filesystem flags:         signed_directory_hash 
    Default mount options:    user_xattr acl
    Filesystem state:         clean
    Errors behavior:          Continue
    Filesystem OS type:       Linux
    Inode count:              30146560
    Block count:              120569088
    Reserved block count:     6028454
    Free blocks:              23349192
    Free inodes:              28532579
    First block:              0
    Block size:               4096
    Fragment size:            4096
    Reserved GDT blocks:      995
    Blocks per group:         32768
    Fragments per group:      32768
    Inodes per group:         8192
    Inode blocks per group:   512
    Flex block group size:    16
    Filesystem created:       Wed Oct 14 09:27:52 2015
    Last mount time:          Sun Mar 13 12:25:50 2016
    Last write time:          Sun Mar 13 12:25:48 2016
    Mount count:              23
    Maximum mount count:      -1
    Last checked:             Wed Oct 14 09:27:52 2015
    Check interval:           0 (<none>)
    Lifetime writes:          1426 GB
    Reserved blocks uid:      0 (user root)
    Reserved blocks gid:      0 (group root)
    First inode:              11
    Inode size:           256
    Required extra isize:     28
    Desired extra isize:      28
    Journal inode:            8
    First orphan inode:       26747912
    Default directory hash:   half_md4
    Directory Hash Seed:      4723240b-9056-4f5f-8de2-d8536e35d183
    Journal backup:           inode blocks
    

    Glancing over it, the first thing that should catch your eye is the reserved block count. Multiplying it by the block size (also from the output), we get the difference between the Used+Avail and Size columns of df:

    453GiB - (373GiB+58GiB) = 22 GiB
    6028454*4096 Bytes = 24692547584 Bytes ~= 23 GiB
    

    Close enough, especially considering that df rounds (using df without -h and repeating the calculation leaves only 16 MiB of the difference between Used+Avail and Size unexplained). Whom the blocks are reserved for is also written in the tune2fs output: root. This is a safety net ensuring that non-root users cannot make the system entirely unusable by filling the disk, and keeping a few percent of disk space unused also helps against fragmentation.
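
    The same arithmetic can be scripted directly against the tune2fs output; a small sketch (the device is the /dev/dm-1 from above, substitute your own):

    tune2fs -l /dev/dm-1 | awk -F: '
        /^Reserved block count/ {r=$2}
        /^Block size/           {b=$2}
        END {printf "reserved for root: %.1f GiB\n", r*b/2^30}'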

    Now for the difference between the size reported by df and the size of the partition. This can be explained by taking a look at the inodes. ext4 preallocates inodes, so that space is unusable for file data. Multiply the Inode count by the Inode size, and you get:

    30146560*256 Bytes = 7717519360 Bytes ~= 7 GiB
    453 GiB + 7 GiB = 460 GiB
    
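    To check that figure on your own filesystem, the same kind of one-liner works (again, /dev/dm-1 is just my device):

    tune2fs -l /dev/dm-1 | awk -F: '
        /^Inode count/ {n=$2}
        /^Inode size/  {s=$2}
        END {printf "space taken by inode tables: %.1f GiB\n", n*s/2^30}'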

    Inodes hold the per-file metadata that directory entries point to, and ext4 sets a fixed number of them aside when the filesystem is created. Let us ask mkfs.ext4 about the details (from man mkfs.ext4):

    -i bytes-per-inode

    Specify the bytes/inode ratio. mke2fs creates an inode for every bytes-per-inode bytes of space on the disk. The larger the bytes-per-inode ratio, the fewer inodes will be created. This value generally shouldn't be smaller than the blocksize of the filesystem, since in that case more inodes would be made than can ever be used. Be warned that it is not possible to change this ratio on a filesystem after it is created, so be careful deciding the correct value for this parameter. Note that resizing a filesystem changes the number of inodes to maintain this ratio.

    There are different presets for different scenarios. On a file server with lots of Linux distribution images, it makes sense to pass e.g. -T largefile or even -T largefile4. What -T means is defined in /etc/mke2fs.conf; for those examples, on my system, it contains:

    largefile = {
        inode_ratio = 1048576
    }
    largefile4 = {
        inode_ratio = 4194304
    }
    

    So with -T largefile4, the number of inodes is much smaller than with the default (the default ratio is 16384 in my /etc/mke2fs.conf). This means less space reserved for inodes and more space for data. When you run out of inodes, you cannot create new files. Increasing the number of inodes in an existing filesystem does not seem to be possible. Thus, the default number of inodes is chosen rather conservatively to ensure that the average user does not run out of inodes prematurely.
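
    You can verify the ratio your filesystem was created with from the tune2fs numbers above, and watch how close you are to running out of inodes with df -i; a sketch, again using my device and mount point:

    # bytes of filesystem per inode; should come out near the configured ratio
    tune2fs -l /dev/dm-1 | awk -F: '
        /^Block count/ {bc=$2}
        /^Block size/  {bs=$2}
        /^Inode count/ {ic=$2}
        END {printf "bytes per inode: %.0f\n", bc*bs/ic}'

    # inode usage and headroom of the mounted filesystem
    df -i /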

    I just figured this out by poking at my numbers; let me know whether it works for you ☺.

    answered Mar 16 '16 at 9:08
    Hello, Jonas! First of all, thank you for your efforts in doing this research! Though I've accepted your answer, there is still some gap (I've updated my question with tune2fs info). Reserved blocks: 2897612 * 4096 = ~11G (expected 218-(66+142)=10G) - OK. Inodes: 14491648 * 256 = ~3.5G (expected 232-218=14G) - NOT OK, 10.5G is still unaccounted for. But I'm sure the tune2fs output has information explaining that. I'll try to analyze it more closely later. – gumkins Mar 16 '16 at 11:16
  • @gumkins You mentioned that you resized the root LVM volume. Did you also run resize2fs? It is safe to run resize2fs /dev/mapper/mint--vg-root; it will detect the size of the volume and act accordingly (i.e. if you already did that in the past, it will just tell you "Nothing to do", otherwise it will grow the ext4 to the volume's size). Growing an ext4 filesystem works in place and online. – Jonas Schäfer Mar 16 '16 at 11:40
  • @gumkins Or see whether your Block Count times Block Size is approximately equal to the size of the logical volume you are using. Here, it is equal up to 4 kiB (which I’d attribute to LUKS and which matches the Payload Offset value in the LUKS header). If the Block Count times Block Size is not equal and resize2fs does not do anything, I’m really out of ideas, because I’d assume that the Block Count would cover everything the ext4 knows about. – Jonas Schäfer Mar 16 '16 at 11:53
  • @gumkins (Sorry for the comment spam) I only just now realised you now have your tune2fs output in your question. The block count really indicates that ~10 GiB (it evaluates to ~221 GiB) are missing. Definitely try resize2fs. – Jonas Schäfer Mar 16 '16 at 11:55
  • You were right! Please see UPDATE3. Thank you very much! – gumkins Mar 16 '16 at 13:49
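
    For readers in the same situation (logical volume already grown, ext4 not yet), the fix confirmed in UPDATE3 boils down to a single command; the device name is the one from the comments above:

    # safe to run online; prints "Nothing to do" if the filesystem already fills the LV
    resize2fs /dev/mapper/mint--vg-root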

    An easy thing to check is the logical volume (which does not have to be as large as the physical volume); use lvdisplay to see its size.
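
    A concrete way to make that comparison is to put the lvdisplay size next to what ext4 itself believes it has; a sketch using the device name from the question:

    lvdisplay --units b /dev/mapper/mint--vg-root | grep 'LV Size'
    tune2fs -l /dev/mapper/mint--vg-root | awk -F: '
        /^Block count/ {bc=$2}
        /^Block size/  {bs=$2}
        END {printf "ext4 thinks it has: %.0f bytes\n", bc*bs}'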

    If it does not show a difference there, the usual explanation is that there is space reserved for use by root, which does not show up in the df output of a normal user.
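
    If that root reservation turns out to be the difference and you want part of it back (say, on a data-only filesystem), tune2fs can lower the percentage; a sketch, with 1% as an arbitrary example value:

    tune2fs -l /dev/mapper/mint--vg-root | grep -i reserved   # current reservation
    tune2fs -m 1 /dev/mapper/mint--vg-root                    # keep 1% instead of the default 5%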


    Hello, Thomas, I've updated my question with lvdisplay info, but it hasn't helped me to understand why df reports much less free space than there should be (-24GB). – gumkins Mar 16 '16 at 1:35
