How to Shrink an LVM Volume Safely

Logical Volume Management is a vast improvement over standard partitioning schemes. Among many other things, it allows you to decrease the size of a volume without recreating it completely. Here’s how.

First, as is always the case when you’re modifying disk volumes, partitions, or file systems, you should have a recent backup. A typo in one of the following commands could easily destroy data. You have been warned!

All of the required steps must be performed on an unmounted volume. If you want to reduce the size of a non-root volume, simply unmount it. For a root volume, you’ll have to boot from a CD. Any modern live or rescue CD should work fine; I prefer SystemRescueCD, which includes almost every disk management program you might need. After booting from a CD, you may have to issue:

# vgchange -a y

This makes all logical volumes available to Linux. Most boot CDs do this automatically at some point during the boot process, but repeating the command won’t hurt. Next, force a file system check on the volume in question:

# e2fsck -f /dev/polar/root

Device names for LVM volumes follow the convention: /dev/<volume group>/<logical volume>. In this case, my volume group is named polar and the volume I’m going to shrink is named root. This is a critical step; resizing a file system in an inconsistent state could have disastrous consequences. Next, resize the actual file system:

# resize2fs /dev/polar/root 180G

Replace 180G with about 90% of the size you want the final volume to be. For example, in this case I want the final volume to be 200 gigabytes, so I reduce the file system to 180 gigabytes first. Why is this necessary? When we reduce the size of the actual volume in the next step, it’s critical that the new size is greater than or equal to the size of the file system. After reading the documentation for both resize2fs and lvreduce, I still haven’t been able to find out whether they use standard computer gigabytes (1024^3 bytes) or drive manufacturer gigabytes (1000^3 bytes), and in this case the difference is very important. To be on the safe side, we’ll just shrink the file system a bit more than necessary and expand it to use the full available space later. Next, reduce the size of the logical volume:

# lvreduce -L 200G /dev/polar/root

In this case, use the actual size you want the volume to be. Finally, grow the file system so that it uses all available space on the logical volume:

# resize2fs /dev/polar/root

That’s it. Enjoy your newly acquired free space.
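For reference, the whole sequence can be sketched as a dry-run shell script. It only echoes the commands instead of running them; the volume group (polar), logical volume (root), and 200G target are the example values from this post, and the 90% safety margin is applied to the intermediate resize2fs step. Remove the echoes, and take a backup, before running anything for real.

```shell
# Dry run: print the shrink sequence for /dev/polar/root with a 200G target.
VG=polar
LV=root
TARGET_GIB=200
DEV="/dev/$VG/$LV"

echo "vgchange -a y"                                  # make LVs available
echo "e2fsck -f $DEV"                                 # force a fs check first
echo "resize2fs $DEV $(( TARGET_GIB * 90 / 100 ))G"   # shrink fs with a margin
echo "lvreduce -L ${TARGET_GIB}G $DEV"                # shrink the LV itself
echo "resize2fs $DEV"                                 # grow fs to fill the LV
```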

52 thoughts on “How to Shrink an LVM Volume Safely”

  1. Great tutorial!
    Be sure to unmount the volume and check that each command completes without errors; otherwise you can destroy the file system. I speak from experience. :-(

  2. If you are not careful, you can shrink the LV to a size less than what is required by the file system. If the LV is resized smaller than what the file system has been resized to, things will go very badly for you. Did I mention you should back up your data beforehand?

  3. # lvreduce -L 200G /dev/polar/root
    Under RHEL 5.6 this command is a wrong turn: instead of reducing the size by 200G, it sets the LV size to 200G. Dangerous in a real scenario.
    This is the command that reduces the size by 200G:

    # lvreduce -L -200G /dev/polar/root

      1. # lvreduce -L 200G /dev/polar/root <- makes the volume size equal to 200G

        # lvreduce -L -200G /dev/polar/root <- reduces the volume size by 200G

  4. Great article, thanks. The only thing missing is a statement on how long the shrink might take. I’m shrinking a 1T filesystem down to 128G right now and the resize2fs took about 15 minutes.

    The lvreduce after that was instantaneous (which I expected it to be). That was to bring it down to 256G.

    Then I expanded the filesystem back up to fill out the logical volume (256G) and that command took only a few seconds.

  5. After reading the documentation for both resizefs and lvreduce, I still haven’t been able to find out whether they’re using standard computer gigabytes (1024^3 bytes) or drive manufacturer gigabytes (1000^3 bytes)

    Well, neither of those.
    The filesystem works in blocks of a given block size, while LVM works in physical extents (PEs) of a given PE size.

    Once you’ve shrunk the filesystem, run e.g. tune2fs to see the filesystem size:
    block count * block size (bytes);
    that result, divided by the PE size (and perhaps rounded up), gives the number of PEs that the logical volume can be reduced to.

    An example:
    my fs size after shrinking: 2.5T, residing on a logical volume of 6.86T
    tune2fs shows:
    Block count: 671 088 640
    Block size: 4096
    That means I have 671 088 640 * 4096 = 2 748 779 069 440 bytes,
    which is 2 684 354 560 KiB (2 748 779 069 440 / 1024).
    pvdisplay shows for the PV where my LV is located:
    PE Size (KByte): 4096
    which means I have to reduce the LV size down to
    fs size / PE size
    which is
    2 684 354 560 / 4096 = 655360
    thus I run
    lvreduce --extents 655360

    anyway, lvreduce -r is much easier :-)
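The arithmetic in the comment above can be reproduced with plain shell integer math (the numbers are copied from that example; the device argument to lvreduce is omitted here, as it was in the comment):

```shell
# From tune2fs -l: block count and block size of the shrunken filesystem.
block_count=671088640
block_size=4096        # bytes
# From pvdisplay: PE Size (KByte).
pe_size_kib=4096

fs_kib=$(( block_count * block_size / 1024 ))   # filesystem size in KiB
extents=$(( fs_kib / pe_size_kib ))             # PEs the LV must keep
echo "lvreduce --extents $extents"              # prints: lvreduce --extents 655360
```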

  6. Thank you for this great guide.

    I’ll just update it (it is 5 years old).
    Some distributions (I know of Ubuntu and probably Debian) don’t have-

  7. Hi,

    I had an LVM volume of 450 GB and was able to reduce it to 200 GB with the above commands. When I rebooted the system I could see the change. Now my question is: what happens to the remaining 250 GB of space, and how do I get it into a different file system?

    I wasn’t able to mount a FS (/dev/appsvg/apps). I got the following error:

    EXT4-fs (dm-2): bad geometry: block count 9171968 exceeds size of device (9018368 blocks)

    So, I tried to reduce the logical volume size. This is how it looked before any changes made.

    LV   VG     Attr     LSize  Pool Origin Data% Move Log Copy% Convert
    apps appsvg -wi-a--- 34.40g

    So, I did the following:
    1. vgchange -a y (worked)
    2. e2fsck -f /dev/appsvg/apps (no issues)
    3. resize2fs /dev/appsvg/apps 34.2G (did not work, gave an error)
    4. lvreduce -L34.3G /dev/appsvg/apps (no issues)
    5. resize2fs /dev/appsvg/apps (no issues)

    Then I was able to mount /apps without any trouble.

    What was the problem with step (3)? Any ideas?

    Thanks !

    1. Most likely there’s a mismatch between power of 2 gigabytes and SI style gigabytes. resize2fs uses power of 2 gigabytes. It might be easier to work in terms of the number of blocks. You can get block count and size via tune2fs -l /dev/appsvg/apps
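A small sketch of that block-based approach, using the block counts from the mount error above (9171968 vs. 9018368 blocks of 4k); converting both to MiB shows exactly how far apart the filesystem and the device were:

```shell
# Convert block counts from the EXT4 error message into MiB, so a resize
# target can be chosen without guessing at G vs. GiB suffixes.
block_size=4096                 # bytes per block
fs_blocks=9171968               # what the filesystem thinks it has
device_blocks=9018368           # what the device actually has

fs_mib=$(( fs_blocks * block_size / 1024 / 1024 ))
device_mib=$(( device_blocks * block_size / 1024 / 1024 ))
echo "fs: $fs_mib MiB, device: $device_mib MiB"   # prints: fs: 35828 MiB, device: 35228 MiB
```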

  9. Having just googled around, it seems that the LVM tools use 1024 as their base unit, not 1000:

    When specifying units in a command line argument, LVM is case-insensitive; specifying M or m is equivalent, for example, and powers of 2 (multiples of 1024) are used. However, when specifying the --units argument in a command, lower-case indicates that units are in multiples of 1024 while upper-case indicates that units are in multiples of 1000.

    1. Thanks. Good to know. Although I think it’s probably still safer to just reduce the size of the file system further than you need and grow it at the end as that’s the one step where you can actually lose data.

  10. Thanks for this posting.

    resize2fs will take a *very* long time (20+ hours) for a filesystem with many files (millions).

    I strongly suggest using the ‘-p’ flag of resize2fs; then you’ll see a progress bar during the shrinking process. Otherwise you’ll get *no* feedback during the whole process.

    Maybe you want to update this posting to use the ‘-p’ flag.

  11. Just look at man lvreduce and lvextend.

    Those do (online) filesystem resizing when using the -r flag, and they also run a filesystem check if necessary. It was never so easy.

    Also, when creating logical volumes you should keep a little space unassigned so you can easily extend LVs when needed.

  12. Hi, I reduced the logical volume, and the LV size is smaller, but I can’t see the change at my mount point.
    I did the steps this way:
    2. tune2fs and fsck
    but df -h still showed the old size for the mount when I checked it.
    Can you please help me with this?

  13. Here is a suggestion that I haven’t tried out. Looking at the man page for resize2fs, it lists a “-M” option. If this works, then instead of specifying a size, resize2fs would automatically shrink the file system as small as possible.

    Thanks for the tutorial, BTW. I’m in the middle of my first resize2fs step right now. I’m shrinking a 12 TB filesystem to 7.5 TB, so it is taking a while.

  14. For lvresize and lvreduce, size may be specified in either 1024-based (iB) or 1000-based (B) units.
    Upper case (K,M,G,T) means 1000-based, whereas lower case (k,m,g,t) means 1024-based.

    Example: lvresize -L -100G /dev/mapper/… means GB (1 000 000 000 bytes), while lvresize -L -100g /dev/mapper/… means GiB (1 073 741 824 bytes).

    Other utilities (resize2fs, df) use GiB-based units regardless of letter case.

    Therefore, an exact match would be:
    resize2fs /dev/mapper/… 100G
    lvresize -L 100g /dev/mapper/…

  15. I found it useful to run lvdisplay --units g (lowercase g), as that apparently matches what is meant when resizing volumes by amounts given with an uppercase G.

    My volume initially said 2.00TiB; when I used --units G it said 2011.02 GB, and when I used --units g it said 2048GiB. When I tried to reduce my partition to “2100G”, it told me that was bigger than the space available, so by process of elimination, resizing to 2100G must mean 2100GiB, meaning I should have instead specified something more like “1900G”.


    1. Correction: when I said “2011.02 GB” I meant to say 2199.02 GB.

  16. >> 14. MJ says: December 9, 2013 at 10:50 AM

    Use resize4fs for ext4 on RHEL 5 (a kernel thing); on RHEL 6, resize2fs and resize4fs are the same.

  17. I can confirm for Kubuntu 14.04 that the G suffix for gigabytes works the same way for resize2fs and lvreduce:

    Proof (output translated from German):
    root@host:/dev/mapper# resize2fs /dev/kubuntu-vg/root 25G
    resize2fs 1.42.9 (4-Feb-2014)
    Resizing the filesystem on /dev/kubuntu-vg/root to 6553600 (4k) blocks.

    The filesystem on /dev/kubuntu-vg/root is now 6553600 blocks long.

    root@host:/dev/mapper# resize2fs /dev/kubuntu-vg/root 20G
    resize2fs 1.42.9 (4-Feb-2014)
    Resizing the filesystem on /dev/kubuntu-vg/root to 5242880 (4k) blocks.

    The filesystem on /dev/kubuntu-vg/root is now 5242880 blocks long.

    root@host:/dev/mapper# lvreduce -L 25G /dev/kubuntu-vg/root
    WARNING: Reducing active logical volume to 25.00 GiB
    THIS MAY DESTROY YOUR DATA (filesystem etc.)
    Do you really want to reduce root? [y/n]: y
    Reducing logical volume root to 25.00 GiB
    Logical volume root successfully resized
    root@host:/dev/mapper# resize2fs /dev/kubuntu-vg/root
    resize2fs 1.42.9 (4-Feb-2014)
    Resizing the filesystem on /dev/kubuntu-vg/root to 6553600 (4k) blocks.

    The filesystem on /dev/kubuntu-vg/root is now 6553600 blocks long.
    root@host:/dev/mapper# lvreduce --version
    LVM version: 2.02.98(2) (2012-10-15)
    Library version: 1.02.77 (2012-10-15)
    Driver version: 4.27.0

  18. Hello, an alternate method to shrink a logical volume and its file system, based on the example you gave, is as follows:

    lvreduce -L200G -r /dev/polar/root

    The ‘-r’ flag initiates the resize2fs command for you.

  19. This also works on the “restore” command line of the Ubuntu stable image (14.04.3). Most people probably wouldn’t know how to access it, but Ctrl-Alt-F2 from any screen will get you there. Very safe and very efficient. Thanks!

  20. I love this guide. I was able to reduce the LVM partition on my Fedora 25 workstation. Now I’m going to install Windows 10 on the free space just to mess around a bit.

  21. Thx Man!

    I have been looking for this information for 2 days! All the other guides I tried screwed up my disk. I wanted to shrink a Ubuntu Server 16.04 LVM root partition. I stupidly first created the partition as 60GB. Then, when I wanted to import it into my VMware ESXi server, I found out that less than 6GB was really used. I thought importing it with thin provisioning would shrink the partition, but it stayed 60GB even though it only used 6GB. I used your guide and was able to bring it back down to 10GB. Thanks again!

  22. df -h doesn’t give me the right used/free sizes for a mounted LVM volume. Luckily, resize2fs doesn’t cause data loss; it reports that the requested size is smaller than the existing data, or similar. In the end, after some tries, I had to use 1.08 times the size df reported to satisfy resize2fs, not the 0.9 factor you describe. So how do I get the volume usage the proper way?

  23. After resizing, I still cannot find any free PEs, although the partition size has come down. How can I see the freed space and use it?

  24. Probably worth adding the -p (progress) option, as resizing several TB can take a while, and it’s good to know whether you can have a kip in the meanwhile…
