Hello

I had an interesting experience today where another user deployed a server using static partitions instead of making use of LVM.

So my question to you as a survey is whether to or not to LVM a host/server.

Do share your perspective, experience and comprehension of LVM.
BashLogic wrote:
Hello

I had an interesting experience today where another user deployed a server using static partitions instead of making use of LVM.

So my question to you as a survey is whether to or not to LVM a host/server.

Do share your perspective, experience and comprehension of LVM.
I've been away for a while, sorry for the late reply but hopefully it'll still help someone searching through the archive or help you make a point.

You have to at least ask yourself the following questions:
- Do you need data protection? If yes, do you have hardware RAID?
- Do you have enough disk space for the rest of your server's life / your project's needs? Traditional partitions won't let you grow space; of course you can move data around, but we don't want to go that route if we have a choice.
- If it's a hosting server and you need multiple filesystems, a disk device (/dev/*) can only hold a limited number of primary and extended partitions, while LVM can create many logical volumes.

I would always suggest the use of LVM on production systems.
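For anyone who has not set one up before, the basic LVM stack (physical volume → volume group → logical volume) only takes a handful of commands. A minimal sketch, assuming a spare disk at /dev/sdb (device, names and sizes are illustrative):

```shell
# Initialize the disk as an LVM physical volume
pvcreate /dev/sdb

# Create a volume group named "vg_data" on it
vgcreate vg_data /dev/sdb

# Carve out a 10G logical volume named "lv_app"
lvcreate -L 10G -n lv_app vg_data

# Put a filesystem on it and mount it
mkfs.xfs /dev/vg_data/lv_app
mount /dev/vg_data/lv_app /srv/app
```

You can leave the rest of the VG unallocated and grow lv_app (or add more LVs) later as real usage becomes clear.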
Well, since no one else has taken an interest in the topic, here is my pitch.

The recommendation is always to use LVM unless you have an explicit need for "raw devices". Raw devices are often required when assembling a cluster or the equivalent. Nowadays those requirements are quite rare; let's say, even though it's an exaggerated figure, 20% of environments. So in the other 80% of your environments you can fully deploy LVM without issues; on the contrary, you will only benefit.

Here is a case study as an example.

I've been setting up a new VPS to migrate to. This VPS has an 80 GB disk. Why use it all if you do not need it all? Better to support organic growth in the right places and not the wrong ones. So I partitioned my Linux install into a multitude of filesystems, including /var, /var/log, /var/log/audit, /var/spool, /home, /tmp, etc. What is the benefit of this? When you have capacity, you want to manage it wisely, and it is always better to be preemptive than reactive. If you design and allocate properly at the beginning, you save yourself the headache and hassle later on, when you suddenly notice that a filesystem is full, start searching for what is consuming the capacity, start wondering what you can safely delete, and end up purchasing a larger disk and maybe reinstalling your host. In my case, I just allocated the minimum. Here is an example:

Filesystem Size Used Avail Use% Mounted on
/dev/mapper/vg_sysroot-lv_sysroot
9.9G 3.1G 6.3G 33% /
tmpfs 1004M 100K 1004M 1% /dev/shm
/dev/sda1 485M 73M 387M 16% /boot
/dev/mapper/vg_sysroot-lv_data
1008M 34M 924M 4% /data
/dev/mapper/vg_sysroot-lv_home
5.0G 149M 4.6G 4% /home
/dev/mapper/vg_sysroot-lv_opt
2.0G 67M 1.9G 4% /opt
/dev/mapper/vg_sysroot-lv_tmp
9.9G 151M 9.2G 2% /tmp
/dev/mapper/vg_sysroot-lv_var
3.0G 263M 2.6G 10% /var
/dev/mapper/vg_sysroot-lv_var_log
2.0G 122M 1.8G 7% /var/log
/dev/mapper/vg_sysroot-lv_var_log_audit
2.0G 92M 1.8G 5% /var/log/audit
/dev/mapper/vg_sysroot-lv_var_tmp
1008M 34M 924M 4% /var/tmp
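For reference, a layout like the one above could be built at install time with commands along these lines (volume group and LV names taken from the df output; sizes approximate):

```shell
# Carve the volume group into small, purpose-specific LVs,
# leaving the rest of the 80 GB disk unallocated for later growth
lvcreate -L 10G -n lv_sysroot vg_sysroot
lvcreate -L 1G  -n lv_data    vg_sysroot
lvcreate -L 5G  -n lv_home    vg_sysroot
lvcreate -L 2G  -n lv_opt     vg_sysroot
lvcreate -L 10G -n lv_tmp     vg_sysroot
lvcreate -L 3G  -n lv_var     vg_sysroot
lvcreate -L 2G  -n lv_var_log vg_sysroot
```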


What the hell? I have an 80 GB disk and I've allocated capacity in single gigabytes? You may say that I am nuts! Well, yes, I am nuts for peace of mind. I do not want a situation where an application fills up my logs and the whole filesystem fills and the system crashes, simply because I have excess logs. No thank you, I don't want that. I have often seen production servers crash, with loss of data, because of such negligence.

But what does that mean? I've only given a few gigabytes per "partition" (actually a logical volume: an LVM LV). Well, I can extend an LV at any time and make it grow from 2 GB to whatever (depending on distro version limitations). I can do that online (I do not need to shut down my server or use fancy apps); I just issue an lvextend command, such as lvextend -L 30G against the LV, and presto, it's ready! But hold on: I only extended the LV, I didn't extend the filesystem, so the OS still sees the previous size. No need to panic; I issue the grow command for my filesystem (it varies depending on filesystem choice; I use XFS for its reliability and heavy-duty performance, so for me that's xfs_growfs). And there it is: the filesystem has been extended online and everything keeps working smoothly.
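The grow sequence described above looks roughly like this, assuming an LV called lv_var in volume group vg_sysroot with XFS on it, mounted at /var (names illustrative):

```shell
# Grow the logical volume to 30G (use -L +10G to grow by a delta instead)
lvextend -L 30G /dev/vg_sysroot/lv_var

# Grow the XFS filesystem to fill the LV; takes the mount point, runs online
xfs_growfs /var
```

On current lvm2 versions, lvextend -r (--resizefs) performs both steps in one command.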

Now what if I want to make it smaller? Well, LVM itself does not have an issue with that: you can shrink an LV online and LVM will not complain. Unfortunately, the same cannot be said of most Linux/UNIX filesystems: ext4, for example, can only be shrunk while unmounted, and XFS cannot be shrunk at all. So until this is developed into the existing filesystems, this is the situation we have to live with. In practice, the only downtime required is when you want to shrink, which is seldom the case (maybe 1-10% of the time); most of the time you just want to grow.
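For completeness, a shrink on ext4 looks roughly like this, assuming an LV lv_home in vg_sysroot mounted at /home (names and sizes illustrative; always back up first, and shrink the filesystem before the LV):

```shell
# Unmount: ext4 can only shrink offline
umount /home

# Check the filesystem, then shrink it down to the target size
e2fsck -f /dev/vg_sysroot/lv_home
resize2fs /dev/vg_sysroot/lv_home 4G

# Shrink the LV to match, then remount
lvreduce -L 4G /dev/vg_sysroot/lv_home
mount /dev/vg_sysroot/lv_home /home
```

Recent lvreduce versions also accept -r (--resizefs) to handle the filesystem step for you.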

Well, that is one thing in regards to growing, but what about point-in-time copies, mobility and performance?
Point-in-time copies
With LVM you can take snapshots. Anyone familiar with a snapshot? Simply explained: at the issue of the command you get a point-in-time copy of the data, which you can either roll back to, mount for backup operations, or use for whatever your imagination can come up with when you want to multipurpose your data.
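A snapshot workflow might look like this, assuming an LV lv_var in vg_sysroot (names and sizes illustrative; the snapshot only needs enough space to hold the changes written while it exists):

```shell
# Take a 1G copy-on-write snapshot of lv_var
lvcreate -s -L 1G -n lv_var_snap /dev/vg_sysroot/lv_var

# Mount it read-only somewhere and run a consistent backup from it
mount -o ro /dev/vg_sysroot/lv_var_snap /mnt/snap

# When done, unmount and drop the snapshot
umount /mnt/snap
lvremove /dev/vg_sysroot/lv_var_snap
```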
Mobility
What the hell do I mean by mobility? Have you ever wanted to swap a disk online? Plug in a USB disk and transfer your LV (and later split the VG)? You simply issue a pvmove: move my stuff from disk A to disk B, while the server is in production use. Now I can change my disks on the fly (provided you have a good RAID card and are able to add new disks online). I can move my LVs across online without any hassle (very useful when working with terabytes of data in a production environment). There are other benefits to the mobility feature. For example, for whatever your reason is, you can split your VG and run a separate set of operations against each VG. This is very useful in a cluster environment.
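The pvmove workflow sketched above, assuming a new disk at /dev/sdc joining vg_sysroot and an old one at /dev/sdb leaving it (devices illustrative):

```shell
# Add the new disk to the volume group
pvcreate /dev/sdc
vgextend vg_sysroot /dev/sdc

# Migrate all extents off the old disk, online, while the server runs
pvmove /dev/sdb

# Remove the now-empty disk from the VG and release it
vgreduce vg_sysroot /dev/sdb
pvremove /dev/sdb
```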

Performance
Can LVM affect performance? Well, yes it can, depending on your requirements and resources. Nowadays the line that distinguishes hardware RAID from software RAID has become very thin. It has long been established that software RAID runs at close to nearline speed, and with today's hardware, a 1-3% difference from nearline speed is not the bottleneck. So it is safe to conclude that software RAID and software solutions are not only becoming more reliable and feasible to use, but are also more feature-rich and flexible, giving you a plethora of options to work with and resolve issues.

So back to the story: with LVM (not mdadm), you can create a striped LV (RAID 0-style), a mirrored LV (RAID 1-style), and so on. This provides you with options for performance as you require.
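Both the striped and the mirrored variants are plain lvcreate flags. A sketch, assuming a volume group vg_data with at least two physical volumes (names and sizes illustrative):

```shell
# Striped LV across 2 PVs with a 256k stripe size (RAID 0-style)
lvcreate -L 20G -i 2 -I 256k -n lv_fast vg_data

# Mirrored LV with one mirror copy (RAID 1-style)
lvcreate -L 20G --type raid1 -m 1 -n lv_safe vg_data
```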

There are other things you can do with LVM, but this thread would never end if I went into each of them.

As an example, this week I built a striped LVM volume for performance.

I had 48 disks, each a 1 TB disk. I built 3 separate VGs with 1 LV on each of them. So do the maths: 48/3 = 16. Sixteen disks of 1 TB each; I would say that is big, not the largest I have worked with, but fair.
So why sixteen 1 TB disks and not eight 2 TB disks? Well, the more disks (spindles) you have, the better your I/O and filesystem performance. So back to the story: each VG has 16 disks and a total of 16 TB in one LV. Now, what filesystem to use? I still consider ext3/4 not ripe for real production operations at this scale; I have faced problems with them quite often. XFS was my preference, as it scales well and performs well. But hold on a second: I created an LV with 16 stripes! Why and how does XFS outperform other filesystems here? XFS is one of the few filesystems you can fine-tune as much as you can put your mind to. The magic was to align the filesystem with the stripes of the LVM volume: since I knew the LV uses 16 disks and I knew the stripe size, it was easy to create the filesystem with a 256k stripe unit and a stripe width extending across the 16 disks. The performance result was outrageous!
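The stripe alignment described above maps to the su/sw options of mkfs.xfs. A sketch, assuming a 16-way striped LV with a 256k stripe size in a VG vg_big (names illustrative):

```shell
# Create the 16-way striped LV with a 256k stripe size,
# using all free space in the volume group
lvcreate -l 100%FREE -i 16 -I 256k -n lv_big vg_big

# Make the XFS stripe unit (su) match the LVM stripe size and
# the stripe width (sw) match the number of stripes
mkfs.xfs -d su=256k,sw=16 /dev/vg_big/lv_big
```

Recent mkfs.xfs versions usually detect LVM stripe geometry automatically, but stating it explicitly does no harm.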

As a comparison, I had a 1-disk VG with no optimization options next to the 16-disk setup. The single disk gave an average throughput of 160 MB/s; the 16-disk setup gave a staggering 970 MB/s. Do you see the difference now? Do you see the benefit of combining LVM and filesystem performance tuning?

Well, I will leave the rest for you to imagine and google-fu across the internet.

All in all: use LVM when possible and select the appropriate FS for your usage. Get the job done at the beginning, not later, after the headache.
Interesting breakdown BashLogic, you're bang on with the proactive thinking about FS sizes and growth, but your case study has a few debatable points:

Given the fact that you're analyzing the benefits or setbacks of LVM on a VPS, here's my take on that:

Performance: you have a VPS; I would be more interested in knowing your hypervisor's overhead compared to the LVM overhead, or LVM's overhead on top of your hypervisor's. Is it running on Xen? OpenVZ? VMware?

DRP/Backup: You talk about snapshots, but you're running a VM/VPS; do you have the option to take a full snapshot of your VPS? I would lean more toward that type of solution instead of an LVM snapshot.

You mention swapping disks or moving data across devices; that doesn't apply in a VPS setup, or does it in your case?

Proper filesystem partitioning should be a given, i.e. /var, /tmp, /home, /boot, etc., for the exact reasons you gave in your analysis. However, the decision to use LVM or not doesn't hinge on that specific point IMHO, as you can already do that with regular partitioning, without a volume manager.

In my opinion your 80G VPS setup does not require LVM, because I would go back to choosing a VPS provider who already mirrors/RAID-protects my VPS image and allows me to take snapshots of my full VM/VPS.
If you're concerned about extending a filesystem, some of the well-known VPS providers can add space to your mount points or filesystems, and you can grow those with ext4 the same way you would with LVM.
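Growing a plain (non-LVM) ext4 partition after the provider adds space can be sketched like this, assuming the filesystem lives on partition 1 of /dev/vda (device illustrative; growpart comes from the cloud-utils package):

```shell
# Extend partition 1 on /dev/vda into the newly added space
growpart /dev/vda 1

# Grow ext4 online to fill the enlarged partition
resize2fs /dev/vda1
```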

In case your VPS provider's setup does not allow you to do what I mentioned above, then going with LVM is a good choice, but I would suggest a VPS provider with a Xen or VMware hypervisor, where you can back up your VM, grow filesystems, and more...
OpenVZ VPS providers are for different needs IMHO and do not apply to your case.

I use Linode, they provide up to 160G VPS and their infrastructure is top notch. What VPS provider are you on?
Borrowed from Wikipedia: http://en.wikipedia.org/wiki/Logical_Volume_Manager_%28Linux%29

LVM is a logical volume manager for the Linux kernel; it manages disk drives and similar mass-storage devices, in particular large ones. The term "volume" refers to a disk drive or partition thereof. It was originally written in 1998 by Heinz Mauelshagen, who based its design on that of the LVM in HP-UX.

The abbreviation "LVM" can also refer to the Logical Volume Management available in HP-UX, IBM AIX and OS/2 operating systems.

LVM is suitable for:

Managing large hard disk farms by letting you add disks, replace disks, copy and share contents from one disk to another without disrupting service (hot swapping).
On small systems (like a desktop at home), instead of having to estimate at installation time how big a partition might need to be in the future, LVM allows you to resize your disk partitions easily as needed.
Making backups by taking "snapshots."
Creating single logical volumes of multiple physical volumes or entire hard disks (somewhat similar to RAID 0, but more similar to JBOD), allowing for dynamic volume resizing.

One can think of LVM as a thin software layer on top of the hard disks and partitions, which creates an illusion of continuity and ease-of-use for managing hard-drive replacement, repartitioning, and backup.
Features

The LVM can:

Resize volume groups online by absorbing new physical volumes (PV) or ejecting existing ones.
Resize logical volumes (LV) online by concatenating extents onto them or truncating extents from them.
Create read-only snapshots of logical volumes (LVM1).
Create read-write snapshots of logical volumes (LVM2).
Stripe whole or parts of logical volumes across multiple PVs, in a fashion similar to RAID 0.
Mirror whole or parts of logical volumes, in a fashion similar to RAID 1.
Move online logical volumes between PVs.
Split or merge volume groups in situ (as long as no logical volumes span the split). This can be useful when migrating whole logical volumes to or from offline storage.

The LVM will also work in a shared-storage cluster (where disks holding the PVs are shared between multiple host computers), but requires an additional daemon to propagate state changes between cluster nodes.

LVM does not:

Provide parity-based redundancy across LVs, as with RAID levels 3 through 6. This functionality is instead provided by the Linux multiple disk subsystem, which can be used as LVM physical volumes.
I did not dive into the subject; you just did with your last post :) My last reply was based on the facts you posted, mainly the VPS service. Your multi-TB setup with LVM is a no-brainer.

I was referring to the "case study". LVM has many benefits for sure, and should always be a standard in many cases.
Side note

For the future, I think it'd be best to avoid long posts like that. It's not about size, I don't mind reading all that. It's about the fact that multiple debatable points are raised here, and giving the proper reply I have in mind would take two months plus a 300-page book to write. I would love to develop a 'sysadmin' section in the forum, as recent threads are showing burgeoning signs of potential. However, some effort must be made to improve presentation. That means:

1- Favoring short posts with strong statements rather than pages of spread-out info.
2- Improving presentation (@BashLogic, I'm talking to you. Please make the effort of capitalizing the beginnings of sentences and the word 'I'. It really makes the text easier to read. Also, if you write long paragraphs, dividing your posts into sections with titles improves readability enormously.)
3- Favoring links to valuable references over the "trust me, this is how it works" attitude. External links add value to a post and point the reader to useful documentation that can be reused later on.

I am aware that these points (which should absolutely not be considered rules, but rather guidelines) add overhead to writing a post and take time, but they are, in my experience, a solid means of building a strong community around the topic, and they would allow us to create a real "System administration" section in the forum.

(NB: I mean no disrespect to any post, and this should not be taken as a negative criticism).

About this thread, the only thing I'm going to say is that Qemu is a nice "bells and whistles" addition to KVM and in my experience gives it an edge over VMware. But this is one example of the "debatable points" mentioned in the thread that offer no added value concerning the topic at hand: LVM or no LVM?
Point taken rahmu.

My first reply was geared towards LVM or no LVM but as soon as VPS was mentioned, I thought I'd throw in my 2c on the whole situation.