How to Shrink an LVM Volume Safely

Logical Volume Management is a vast improvement over standard partitioning schemes. Among many other things, it allows you to decrease the size of a volume without recreating it completely. Here’s how.

First, as is always the case when you’re modifying disk volumes, partitions, or file systems, you should have a recent backup. A typo in one of the following commands could easily destroy data. You have been warned!

All of the required steps must be performed on an unmounted volume. If you want to reduce the size of a non-root volume, simply unmount it. For a root volume, you’ll have to boot from a CD. Any modern live or rescue CD should work fine. I prefer SystemRescueCD, which includes almost any disk management program you might need. After booting from a CD, you may have to issue:

# vgchange -a y

This makes any logical volumes available to Linux. Most boot CDs will do this automatically at some point during the boot process, but repeating the command won’t hurt. Next, force a file system check on the volume in question:

# e2fsck -f /dev/polar/root

Device names for LVM volumes follow the convention /dev/&lt;volume group&gt;/&lt;logical volume&gt;. In this case, my volume group is named polar and the volume I’m going to shrink is named root. This is a critical step; resizing a file system in an inconsistent state could have disastrous consequences. Next, resize the actual file system:

# resize2fs /dev/polar/root 180G

Replace 180G with about 90% of the size you want the final volume to be. For example, in this case I want the final volume to be 200 gigabytes, so I’ll reduce the file system to 180 gigabytes. Why is this necessary? When we reduce the size of the actual volume in the next step, it’s critical that the new size is greater than or equal to the size of the file system. After reading the documentation for both resize2fs and lvreduce, I still haven’t been able to find out whether they’re using standard computer gigabytes (1024^3 bytes) or drive manufacturer gigabytes (1000^3 bytes). In this case, the difference is very important. To be on the safe side, we’ll shrink the file system a bit more than necessary now and expand it to use the full available space later. Next, reduce the size of the logical volume:

# lvreduce -L 200G /dev/polar/root

In this case, use the actual size you want the volume to be. Finally, grow the file system so that it uses all available space on the logical volume:

# resize2fs /dev/polar/root

That’s it. Enjoy your newly acquired free space.
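The size arithmetic in the walkthrough above can be sketched in shell. This is only an illustration using the example’s numbers; the 90% figure is a rule-of-thumb safety margin, not something the tools require:

```shell
TARGET=200                          # final LV size from the example, in gigabytes
FS=$(( TARGET * 90 / 100 ))         # shrink the filesystem to ~90% of that first
echo "$FS"                          # 180 -> resize2fs /dev/polar/root 180G

# Why leave a margin: a binary gigabyte (1024^3 bytes) is about 7% larger
# than a decimal one (1000^3 bytes), so a unit mix-up costs several gigabytes.
GB=$(( 1000 * 1000 * 1000 ))
GiB=$(( 1024 * 1024 * 1024 ))
echo $(( (GiB - GB) * 100 / GB ))   # 7 (percent)
```

Once lvreduce has run with the intended final size, the closing resize2fs with no size argument grows the file system back to fill the volume, so the margin costs nothing in the end.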

54 replies on “How to Shrink an LVM Volume Safely”

Great tutorial!
Be aware to unmount the volume and be certain that the commands don’t give you errors, otherwise you’ll destroy the filesystem! From experience :.(

If you are not careful, you can shrink the LV to a size less than what is required by the file system. If the LV is resized smaller than what the file system has been resized to, things will go very badly for you. Did I mention you should backup your data before hand?

# lvreduce -L 200G /dev/polar/root
Under RHEL 5.6 this command is like a wrong turn. Instead of reducing by that amount, it sets the LV size to 200G outright, which is dangerous on a real system.
This is the right command to reduce the size by 200G:

# lvreduce -L -200G /dev/polar/root

What is the difference between:

# lvreduce -L 200G /dev/polar/root and
# lvreduce -L -200G /dev/polar/root ??

# lvreduce -L 200G /dev/polar/root <- makes the partition size equals to 200G

# lvreduce -L -200G /dev/polar/root <- reduces the partition size by 200G
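The two forms can be illustrated with plain arithmetic; the 450G starting size here is hypothetical:

```shell
CURRENT=450                     # hypothetical current LV size, in gigabytes
ABSOLUTE=200                    # lvreduce -L 200G  -> LV becomes exactly 200G
RELATIVE=$(( CURRENT - 200 ))   # lvreduce -L -200G -> LV shrinks BY 200G
echo "$ABSOLUTE $RELATIVE"      # 200 250
```

The sign prefix is easy to miss, which is exactly how the accident described above happens.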

hello,
lvreduce or lvextend can automatically resize the fs with the -r flag

lvreduce -r -L 200G /dev/polar/root

have fun

Perhaps in an ideal world, but everyone seems to just use the term gigabyte, sometimes referring to a base-10 size and sometimes to a base-2 size.

Great article, thanks. The only thing missing is a statement on how long the shrink might take. I’m shrinking a 1T filesystem down to 128G right now and the resize2fs took about 15 minutes.

The lvreduce after that was instantaneous (which I expected it to be). That was to bring it down to 256G.

Then I expanded the filesystem back up to fill out the logical volume (256G) and that command took only a few seconds.

I was shrinking an ext3 fs from 6.9T down to 2.5T with 1.9T of data on it, and resize2fs took about 20 hours.

After reading the documentation for both resizefs and lvreduce, I still haven’t been able to find out whether they’re using standard computer gigabytes (1024^3 bytes) or drive manufacturer gigabytes (1000^3 bytes)

Well, neither of those.
The filesystem uses blocks and a block size, while LVM uses Physical Extents and a PE size.

Once you’ve shrunk the filesystem, run e.g. tune2fs to see the filesystem size:
block count * block size (bytes)
That result divided by the PE size (and perhaps rounded up) gives you the number of PEs that the logical volume can be reduced to.

An example:
my fs size after shrinking: 2.5T, residing on a Logical Volume of 6.86T
tune2fs shows:
Block count 671088640
Block size 4096
That means I have 671088640 * 4096 = 2748779069440 bytes,
which is 2684354560 KBytes (2748779069440 / 1024).
pvdisplay shows for the PV where my LV is located:
PE Size (KByte) 4096
which means I have to reduce the LV size down to
fs size / PE size
which is
2684354560 / 4096 = 655360
thus I run
lvreduce --extents 655360

anyway, lvreduce -r is much easier :-)
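The calculation above can be scripted. A sketch using the commenter’s numbers (4 KiB filesystem blocks, 4096 KiB physical extents); on a real system the three inputs come from tune2fs -l and pvdisplay:

```shell
BLOCK_COUNT=671088640   # "Block count" from: tune2fs -l <device>
BLOCK_SIZE=4096         # "Block size"  from: tune2fs -l <device>
PE_SIZE_KIB=4096        # "PE Size (KByte)" from: pvdisplay

FS_BYTES=$(( BLOCK_COUNT * BLOCK_SIZE ))
FS_KIB=$(( FS_BYTES / 1024 ))
# Round up so the LV is never smaller than the filesystem:
EXTENTS=$(( (FS_KIB + PE_SIZE_KIB - 1) / PE_SIZE_KIB ))
echo "$EXTENTS"         # 655360 -> lvreduce --extents 655360 <device>
```

Rounding up matters: truncating a fractional extent count would cut the LV below the filesystem size, which is the one unrecoverable mistake in this whole procedure.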

Thank you for this great guide.

I’ll just update it (it is 5 years old).
Some distributions (I know of Ubuntu and probably Debian) don’t have
/dev/&lt;volume group&gt;/&lt;logical volume&gt;
but
/dev/mapper/&lt;volume group&gt;-&lt;logical volume&gt;

Hi,

I had an LVM volume of 450 GB. I was able to reduce it to 200 with the commands above. When I rebooted the system I could see the change. Now my question is: what happens to the remaining 250 GB of space, and how do I get it into a different file system?

I wasn’t able to mount a FS (/dev/appsvg/apps). I got the following error:

EXT4-fs (dm-2): bad geometry: block count 9171968 exceeds size of device (9018368 blocks)

So, I tried to reduce the logical volume size. This is how it looked before any changes made.

LV VG Attr LSize Pool Origin Data% Move Log Copy% Convert
apps appsvg -wi-a----- 34.40g

So, I did the following:
1. vgchange -a y (worked)
2. e2fsck -f /dev/appsvg/apps (no issues)
3. resize2fs /dev/appsvg/apps 34.2G (did not work, gave an error)
4. lvreduce -L34.3G /dev/appsvg/apps (no issues)
5. resize2fs /dev/appsvg/apps (no issues)

Then I was able to mount /apps without any trouble.

What was the problem with step (3)? Any ideas?

Thanks !

Most likely there’s a mismatch between power of 2 gigabytes and SI style gigabytes. resize2fs uses power of 2 gigabytes. It might be easier to work in terms of the number of blocks. You can get block count and size via tune2fs -l /dev/appsvg/apps
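Working in blocks can be sketched like this. The 34 GiB target here is hypothetical, and the block size should be read from the actual tune2fs -l output:

```shell
TARGET_GIB=34       # desired filesystem size (hypothetical)
BLOCK_SIZE=4096     # "Block size" from: tune2fs -l /dev/appsvg/apps
BLOCKS=$(( TARGET_GIB * 1024 * 1024 * 1024 / BLOCK_SIZE ))
echo "$BLOCKS"      # 8912896
```

A bare number passed to resize2fs (e.g. resize2fs /dev/appsvg/apps 8912896) is interpreted in filesystem blocks, which sidesteps the G-vs-GiB ambiguity entirely.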

Having just googled around, https://access.redhat.com/site/documentation/en-US/Red_Hat_Enterprise_Linux/4/html/Cluster_Logical_Volume_Manager/LVM_CLI.html#CLI_usage suggests that the LVM tools use 1024 as their base unit, not 1000:

When specifying units in a command line argument, LVM is case-insensitive; specifying M or m is equivalent, for example, and powers of 2 (multiples of 1024) are used. However, when specifying the --units argument in a command, lower-case indicates that units are in multiples of 1024 while upper-case indicates that units are in multiples of 1000.

Thanks. Good to know. Although I think it’s probably still safer to reduce the size of the file system further than you need and grow it at the end, as that’s the one step where you can actually lose data.

Thanks for this posting.

resize2fs will take a *very* long time (20+ hours) for a filesystem with many files (millions).

I strongly suggest to use the ‘-p’ flag of resize2fs, then you’ll see a progress bar during the shrinking process. Otherwise you’ll get *no* feedback during the whole process.

Maybe you want to update this posting to use the ‘-p’ flag.

Just look at man lvreduce and lvextend.

Those do (online) filesystem resizing when using the -r flag.
They also do a filesystem check if necessary. It was never so easy.

Also, when creating logical volumes you should keep a little space unassigned so you can easily extend LVs when needed.

Hi, I reduced the logical volume but I am unable to see the change at my mount point, even though the LV size is reduced.
I did the steps in this way:
1. umount
2. tune2fs and fsck
3. lvreduce
4. remount
But it was still showing the old value for the mount when I checked with df -h.
Can you please help me with this?

Here is a suggestion that I haven’t tried out. Looking at the man page for resize2fs, it lists a “-M” option. If this works, then instead of specifying a size, resize2fs would automatically shrink the filesystem as small as possible.

Thanks for the tutorial BTW. I’m in the process of doing my first resize2fs step right now. I’m shrinking a 12 TB filesystem to 7.5 TB, so it is taking a while.

For lvresize and lvreduce, size may be specified either in 1024-based (iB) or 1000-based (B) units.
Upper case (K,M,G,T) means 1000-based, whereas lower case (k,m,g,t) means 1024-based.

Example: lvresize -L -100G /dev/mapper/… means GB units (1 000 000 000 bytes each), while lvresize -L -100g /dev/mapper/… means GiB units (1 073 741 824 bytes each).

Other utilities (resize2fs, df) use GiB based units, same for lower/upper case letters.

Therefore, exact match will be:
resize2fs /dev/mapper/… 100G
lvresize -L 100g /dev/mapper/…

I found that it was useful to use lvdisplay --units g (lowercase g), as that apparently matches what is meant when resizing volumes by amounts given with uppercase G.

My volume initially said 2.00 TiB; when I used --units G it said 2011.02 GB, then when I used --units g it said 2048 GiB. When I tried to reduce my partition to “2100G”, it told me that was bigger than the space available, so by process of elimination, resizing to 2100G must mean 2100 GiB, meaning I should have instead specified something more like “1900G”.

Confusing!

Correction: when I said “2011.02 GB” I meant to say 2199.02 GB, and the dashes are supposed to be double dashes, but this blog apparently strips them (undoubtedly as a crude attempt at stopping SQL injection).

>>14.MJ says: December 9, 2013 at 10:50 AM

Use resize4fs for ext4 on RHEL 5 (a kernel thing); on RHEL 6, “resize2fs” and “resize4fs” are the same.

I can confirm for (K)Ubuntu 14.04 that the G suffix for gigabytes works the same way for resize2fs and lvreduce:

Proof (terminal output translated from German):
root@Tigersclaw:/dev/mapper# resize2fs /dev/kubuntu-vg/root 25G
resize2fs 1.42.9 (4-Feb-2014)
Resizing the filesystem on /dev/kubuntu-vg/root to 6553600 (4k) blocks.

The filesystem on /dev/kubuntu-vg/root is now 6553600 blocks long.

root@Tigersclaw:/dev/mapper# resize2fs /dev/kubuntu-vg/root 20G
resize2fs 1.42.9 (4-Feb-2014)
Resizing the filesystem on /dev/kubuntu-vg/root to 5242880 (4k) blocks.

The filesystem on /dev/kubuntu-vg/root is now 5242880 blocks long.

root@Tigersclaw:/dev/mapper# lvreduce -L 25G /dev/kubuntu-vg/root
WARNING: Reducing active logical volume to 25.00 GiB
THIS MAY DESTROY YOUR DATA (filesystem etc.)
Do you really want to reduce root? [y/n]: y
Reducing logical volume root to 25.00 GiB
Logical volume root successfully resized
root@Tigersclaw:/dev/mapper# resize2fs /dev/kubuntu-vg/root
resize2fs 1.42.9 (4-Feb-2014)
Resizing the filesystem on /dev/kubuntu-vg/root to 6553600 (4k) blocks.

The filesystem on /dev/kubuntu-vg/root is now 6553600 blocks long.
root@Tigersclaw:/dev/mapper# lvreduce --version
LVM version: 2.02.98(2) (2012-10-15)
Library version: 1.02.77 (2012-10-15)
Driver version: 4.27.0

Hello, an alternate method to shrink a logical volume filesystem based on the example you gave is as follows:

lvreduce -L200G -r /dev/polar/root

The ‘-r’ flag initiates the resize2fs command.

This also works on the “restore” command line of the Ubuntu stable image (14.04.3). Most people probably wouldn’t know how to access it, but Ctrl-Alt-F2 from any screen will get you there. Very safe and very efficient. Thanks!

I love this guide. I was able to reduce the LVM partition on my Fedora 25 workstation. Now I’m going to install Windows 10 on the free space just to mess around a bit.

Thx Man!

I have been looking for this information for 2 days! All the other guides I tried screwed up my disk. I wanted to shrink an Ubuntu Server 16.04 LVM root partition. I stupidly first created the partition to be 60 GB. Then, when I wanted to import it into my VMware ESXi server, I found out that less than 6 GB was really used. I thought importing it with thin provisioning would shrink the partition, but it stayed 60 GB even though it only used 6 GB. I used your guide and was able to bring it back to 10 GB. Thanks again!

df -h doesn’t give me the right used and free sizes for a mounted LVM volume. Luckily resize2fs doesn’t cause data loss; it reports that the requested size is less than the existing data, or similar. After some tries I had to use 1.08 times the df output to satisfy resize2fs, not the 0.9 times you suggest. So how do I get the volume usage the proper way?

After resizing, I still cannot find any free PEs, although the partition size has come down. How can I see the freed space and use it?

Probably worth adding the -p for progress option, as resizing several TB can take a while, and it’s good to know if you can have a kip in the meanwhile…

My shrink failed. (I am learning Linux, so I’m new to the platform….)
I followed most instructions to shrink the LV and file system, but now my drive won’t boot.
It was a test install which used the whole drive as one large 128 GB LV.
…I booted with a Live Ubuntu 18.04.1 DVD, as that is what I used to install the OS.

It was a new install of Ubuntu 18.04.1 which I am testing on my drive(test machine) and the default took the whole drive as one LVM.
Yes …I know … I was naive to accept the default install options for Linux.
However, I did a similar install with Ubuntu version 16, and that sliced the disk up into a 19 GB root partition and an extended partition with the remainder of the disk.

So there was a change in default install behaviour for Ubuntu.

No big issue here …time to learn more Linux….right!
So nice to have it foobar on the test machine!

My mission was to resize/shrink the Ubuntu v 18.04.1 volume.
It created a 119 GB volume and filesystem. Sheesh! lol
…No data to backup!
(it is good to have curve balls when testing! )

I ran the commands mentioned and It did succeed in reducing the Logical volume and File system without errors when I ran the commands..
I did have to umount first, as the volumes were mounted in Ubuntu.
I used the Ubuntu 18.04.1 Live CD. (I thought it should work!) (Didn’t use the recommended SystemRescueCD!!!!)

The commands executed and seemed to work flawlessly.
I have to say thanks for the author. It was a good guide which stepped me through the process.
No flaws to the Author and warnings were noted.
Kudos to the Author. …sincerely….as it was a good walkthrough.

Perhaps I did not shrink the volume enough?
Anyway, my install errors with some strange stuff… it boots but fails to load up.
It’s a reinstall for me and retest the shrink with recommended LIVE CD.
…the shrink failed for me on ubuntu.

I must try this shrink again with the recommended LiveCD.
They are not all equal I know!
Obviously, I want to be able to shrink/grow/resize volumes without issue.

It should’ve worked!
More testing of this feature.
