[PVE-User] BTRFS...

Gilberto Nunes gilberto.nunes32 at gmail.com
Tue Feb 2 20:12:40 CET 2016


Wow man! This explanation helps a lot....
Thanks so much....

2016-02-02 17:07 GMT-02:00 Adam Thompson <athompso at athompso.net>:

>
>
> On 16-02-02 11:24 AM, Gilberto Nunes wrote:
>
>> Hi
>>
>> And what if I work with BTRFS inside the VM???
>> The FS where the VM image lies could be any other FS... Currently, I am
>> using GlusterFS + XFS.
>> I need LVM or BTRFS inside the VM, in order to resize disk partitions...
>> And I am deciding between LVM and BTRFS....
>>
>
> Only if you need to do *online* resizes (without unmounting the
> filesystem).  If you can live with unmounting the filesystem, plain old
> ext3 (and ext4) can do what you need.  Of course, if it's the root
> filesystem you need to resize, the only way to unmount it is to shut down
> the VM and reboot it in single-user mode.  I think you might need to boot
> off a CD to resize the root fs, can't remember if there's a way around it.
>
> But let's say you create your VMs with two vdisks: one for boot+root
> (these don't even have to be separate partitions), one for /home.
> To make this a concrete example, let's say Disk 0 (sda) is 10GB and Disk 1
> (sdb) is 10GB.
> Ensure the VM has hot-plugging enabled for disk devices.
> Let's say /dev/sda1 (/) is ext4 and /dev/sdb1 (/home) is ext4.  No LVM and
> no BTRFS.
> /dev/sdb is MBR-partitioned for simplicity.
> Make sure no processes at all are using /home.
> Unmount /home.
> In PVE management UI, resize Disk 1 to (for example) 20GB.
> If the Linux kernel doesn't pick up the change immediately, make sure you
> are running acpid, and you may want to run "rescan-scsi-bus.sh --remove
> --forcerescan" (in RPM: sg3_utils, and in DEB: scsitools) and/or "sfdisk
> --re-read /dev/sdb".
> Use fdisk/gdisk/parted/whatever to grow the partition.
> Now run "resize2fs /dev/sdb1" and it'll grow the filesystem to the new
> disk size.
> Finally remount /home and restart any services depending on it.
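>
> Pulled together, the grow sequence looks roughly like this (same device
> names as above; parted's "resizepart" is just one way to do the partition
> step, fdisk or gdisk work too):
>
>   umount /home
>   # (now grow Disk 1 to 20GB in the PVE management UI)
>   rescan-scsi-bus.sh --forcerescan      # or: sfdisk --re-read /dev/sdb
>   parted /dev/sdb resizepart 1 100%     # grow sdb1 to the end of the disk
>   e2fsck -f /dev/sdb1                   # offline resize2fs wants a fresh check
>   resize2fs /dev/sdb1                   # grow the ext4 fs to fill sdb1
>   mount /home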
>
>
> Instead, if you used LVM, you could just add a new SCSI disk entirely (not
> resize the existing one), then "pvcreate /dev/sdc" + "vgextend
> my-vg-name /dev/sdc" + "lvextend --extents +100%FREE --resizefs
> my-vg-name/my-lv-name /dev/sdc".  It's definitely
> simpler/easier/less-disruptive.
>
> (You can also resize the PV in LVM, but if zero-downtime is the goal, it's
> much easier to keep adding new SCSI devices and extending the VG+LV.)
>
> Or do you need to *shrink* the disks?  That's a whole different story... I
> don't think PVE can even shrink virtual disks in the first place.  You can
> follow the first scenario I described above, except use "resize2fs -M
> /dev/sdb1" to shrink the filesystem instead of growing it.
>
> If you consistently need to shrink filesystems, I would use ext3 on top of
> LVM; use "resize2fs -M /dev/vg/lv" first, then "lvreduce --size -10G
> /dev/vg/lv" but be REALLY careful with the size you provide to lvreduce!
> (You can also use lvresize, there isn't much difference.)
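>
> Something like this, with a placeholder mountpoint and VG/LV, and the same
> caveat about the size you hand to lvreduce (the final resize2fs just grows
> the fs back out to fill whatever the LV ends up being):
>
>   umount /srv/data                  # whatever the LV is mounted on
>   e2fsck -f /dev/vg/lv
>   resize2fs -M /dev/vg/lv           # shrink the fs as small as it will go
>   lvreduce --size -10G /dev/vg/lv   # must NOT go below the shrunken fs size!
>   resize2fs /dev/vg/lv              # grow the fs back to fill the smaller LV
>   mount /srv/data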
>
> Lastly, if you're using QCOW2 for VM disks, that's a whole 'nother level
> you have to worry about if you're trying to shrink disks; if you're using
> virtio-scsi, I think you can do an "fstrim /home" first to signal that some
> blocks can be reclaimed, then just compact the QCOW2 file.  (See
> https://pve.proxmox.com/wiki/Shrink_Qcow2_Disk_Files for the older
> approach.)  Then you have to take the VM offline, copy the QCOW2 out to a
> smaller, compacted file, and reboot the VM with that file attached instead
> of the original QCOW2 file.
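>
> The compacting step itself is basically a "qemu-img convert" on the PVE
> host, with the VM powered off and placeholder file names:
>
>   # inside the guest first (needs virtio-scsi and discard on the vdisk):
>   fstrim /home
>
>   # then on the host, once the VM is shut down:
>   qemu-img convert -O qcow2 vm-100-disk-1.qcow2 vm-100-disk-1-small.qcow2
>   # ...and point the VM's config at the new, smaller file.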
>
>
> You'll need to provide *exact* details of what your actual goal is here,
> if nothing I've said so far covers it.
>
>
> -Adam
>
> _______________________________________________
> pve-user mailing list
> pve-user at pve.proxmox.com
> http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user
>



-- 

Gilberto Ferreira
+55 (47) 9676-7530
Skype: gilberto.nunes36