[PVE-User] create zfs pool issue

Pongrácz István pongracz.istvan at gmail.com
Tue Jul 28 10:34:34 CEST 2015


 Did you upgrade your system, especially ZFS?

You should reboot your server after such an upgrade, because otherwise the running kernel module and the upgraded userland tools no longer match and cannot work together.
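A quick way to confirm whether the loaded kernel module and the installed userland still match (a minimal sketch, assuming a Debian-based PVE node with ZFS on Linux installed; package names may differ on your system):

# version of the ZFS module currently loaded into the kernel
cat /sys/module/zfs/version
# version of the module available on disk
modinfo zfs | grep -iw version
# version of the installed userland packages
dpkg -l | grep -i zfs

If the loaded module version differs from the installed package version, a reboot (or reloading the module) brings them back in sync.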

 

----------------original message----------------- 
From: "lyt_yudi" lyt_yudi at icloud.com 
To: "Pongrácz István" 
 
CC: "proxmoxve" 
 
Date: Tue, 28 Jul 2015 14:33:33 +0800 
----------------------------------------------------------

>
>
>so, thanks for your reply.
>
> 
>
>
>
>>
>>
>>On Jul 28, 2015, at 2:10 PM, Pongrácz István pongracz.istvan at gmail.com wrote:
>>
>>
>>Anyway, are you sure these drives are not part of an existing pool?
>>Please check this before you possibly destroy an existing pool.
>>
>>no, there was no pool on them before.
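For completeness, one way to double-check that no leftover ZFS labels exist on a disk before reusing it (a sketch; /dev/sdb here is just an example device):

# list any exported/importable pools the system can see
zpool import
# dump any ZFS labels still present on the device (reports failed labels if none are found)
zdb -l /dev/sdb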
>>
>> 
>>
>>
>>>
>>>
>>>On Jul 28, 2015, at 2:02 PM, Pongrácz István pongracz.istvan at gmail.com wrote:
>>>
>>>
>>>When you create a new pool, /dev/sdb is not a reliable way to refer to the device.
>>>
>>>Try to create the pool using the disk IDs, like this:
>>>
>>>
>>>zpool create -f -o ashift=12 tank mirror ata-Hitachi_HTS545050A7E380_TEJ52139CA9VNS ata-Hitachi_HTS545050A7E380_TEJ52139CAVXNS mirror ata-Hitachi_HTS545050A7E380_TEJ52139CBYP0S ata-WDC_WD5000BEVT-00A0RT0_WD-WXN1AB0X5490
>>>
>>>You can use ids from here: ls /dev/disk/by-id/
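For example, to see only the whole-disk IDs (filtering out the partition entries), something like this should work:

ls /dev/disk/by-id/ | grep -v part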
>>> 
>>>
>>>these are new devices, and this is a freshly installed PVE.
>>>
>>> 
>>>
>>>now I get the same error:
>>>
>>> 
>>>
>>>
>>>root@test01:~# ls -l /dev/disk/by-id/
>>>
>>>total 0
>>>
>>>lrwxrwxrwx 1 root root 10 Jul 28 14:19 dm-name-pve-data -> ../../dm-2
>>>lrwxrwxrwx 1 root root 10 Jul 28 14:19 dm-name-pve-root -> ../../dm-0
>>>lrwxrwxrwx 1 root root 10 Jul 28 14:19 dm-name-pve-swap -> ../../dm-1
>>>lrwxrwxrwx 1 root root 10 Jul 28 14:19 dm-uuid-LVM-aPi3sFbYQvfr6pfqSnlFw09WyB59R5Fs9N1R7uLnuXXbra9t5Z8C2O2KRwfYjv9g -> ../../dm-0
>>>lrwxrwxrwx 1 root root 10 Jul 28 14:19 dm-uuid-LVM-aPi3sFbYQvfr6pfqSnlFw09WyB59R5FslCcuYnHf81IC4bVJSn6JS5ncZrJe01Ed -> ../../dm-2
>>>lrwxrwxrwx 1 root root 10 Jul 28 14:19 dm-uuid-LVM-aPi3sFbYQvfr6pfqSnlFw09WyB59R5FswBYsrv7Dv92ras6Qejla7nqN2QNVwWSU -> ../../dm-1
>>>lrwxrwxrwx 1 root root  9 Jul 28 14:19 scsi-36d4ae5209bc954001d49980e0a03a9af -> ../../sda
>>>lrwxrwxrwx 1 root root 10 Jul 28 14:19 scsi-36d4ae5209bc954001d49980e0a03a9af-part1 -> ../../sda1
>>>lrwxrwxrwx 1 root root 10 Jul 28 14:19 scsi-36d4ae5209bc954001d49980e0a03a9af-part2 -> ../../sda2
>>>lrwxrwxrwx 1 root root 10 Jul 28 14:19 scsi-36d4ae5209bc954001d49980e0a03a9af-part3 -> ../../sda3
>>>lrwxrwxrwx 1 root root  9 Jul 28 14:19 scsi-36d4ae5209bc954001d49982a0bb99e94 -> ../../sdb
>>>lrwxrwxrwx 1 root root  9 Jul 28 14:19 scsi-36d4ae5209bc954001d4998410d19abe2 -> ../../sdc
>>>lrwxrwxrwx 1 root root  9 Jul 28 14:19 scsi-36d4ae5209bc954001d4998670f5dc028 -> ../../sdd
>>>lrwxrwxrwx 1 root root  9 Jul 28 14:19 scsi-36d4ae5209bc954001d49987f10c71ca8 -> ../../sde
>>>lrwxrwxrwx 1 root root  9 Jul 28 14:19 scsi-36d4ae5209bc954001d49989311f53449 -> ../../sdf
>>>lrwxrwxrwx 1 root root  9 Jul 28 14:19 scsi-36d4ae5209bc954001d4998af13a574b6 -> ../../sdg
>>>lrwxrwxrwx 1 root root  9 Jul 28 14:19 scsi-36d4ae5209bc954001d4998c414e00719 -> ../../sdh
>>>lrwxrwxrwx 1 root root  9 Jul 28 14:19 scsi-36d4ae5209bc954001d4998d816111615 -> ../../sdi
>>>lrwxrwxrwx 1 root root  9 Jul 28 14:19 scsi-36d4ae5209bc954001d4998ea172230cb -> ../../sdj
>>>lrwxrwxrwx 1 root root  9 Jul 28 14:19 scsi-36d4ae5209bc954001d4998fd18429cc6 -> ../../sdk
>>>lrwxrwxrwx 1 root root  9 Jul 28 14:19 scsi-36d4ae5209bc954001d499910196ef02f -> ../../sdl
>>>lrwxrwxrwx 1 root root  9 Jul 28 14:19 wwn-0x6d4ae5209bc954001d49980e0a03a9af -> ../../sda
>>>lrwxrwxrwx 1 root root 10 Jul 28 14:19 wwn-0x6d4ae5209bc954001d49980e0a03a9af-part1 -> ../../sda1
>>>lrwxrwxrwx 1 root root 10 Jul 28 14:19 wwn-0x6d4ae5209bc954001d49980e0a03a9af-part2 -> ../../sda2
>>>lrwxrwxrwx 1 root root 10 Jul 28 14:19 wwn-0x6d4ae5209bc954001d49980e0a03a9af-part3 -> ../../sda3
>>>lrwxrwxrwx 1 root root  9 Jul 28 14:19 wwn-0x6d4ae5209bc954001d49982a0bb99e94 -> ../../sdb
>>>lrwxrwxrwx 1 root root  9 Jul 28 14:19 wwn-0x6d4ae5209bc954001d4998410d19abe2 -> ../../sdc
>>>lrwxrwxrwx 1 root root  9 Jul 28 14:19 wwn-0x6d4ae5209bc954001d4998670f5dc028 -> ../../sdd
>>>lrwxrwxrwx 1 root root  9 Jul 28 14:19 wwn-0x6d4ae5209bc954001d49987f10c71ca8 -> ../../sde
>>>lrwxrwxrwx 1 root root  9 Jul 28 14:19 wwn-0x6d4ae5209bc954001d49989311f53449 -> ../../sdf
>>>lrwxrwxrwx 1 root root  9 Jul 28 14:19 wwn-0x6d4ae5209bc954001d4998af13a574b6 -> ../../sdg
>>>lrwxrwxrwx 1 root root  9 Jul 28 14:19 wwn-0x6d4ae5209bc954001d4998c414e00719 -> ../../sdh
>>>lrwxrwxrwx 1 root root  9 Jul 28 14:19 wwn-0x6d4ae5209bc954001d4998d816111615 -> ../../sdi
>>>lrwxrwxrwx 1 root root  9 Jul 28 14:19 wwn-0x6d4ae5209bc954001d4998ea172230cb -> ../../sdj
>>>lrwxrwxrwx 1 root root  9 Jul 28 14:19 wwn-0x6d4ae5209bc954001d4998fd18429cc6 -> ../../sdk
>>>lrwxrwxrwx 1 root root  9 Jul 28 14:19 wwn-0x6d4ae5209bc954001d499910196ef02f -> ../../sdl
>>>
>>>root@test01:~# zpool create -f -o ashift=12 tank mirror scsi-36d4ae5209bc954001d49982a0bb99e94 scsi-36d4ae5209bc954001d4998410d19abe2 mirror scsi-36d4ae5209bc954001d4998670f5dc028 scsi-36d4ae5209bc954001d49987f10c71ca8
>>>
>>>cannot create 'tank01': invalid argument for this pool operation
>>>
>>> 
>>>
>>>root@test01:~# zpool create -f -o ashift=12 tank mirror wwn-0x6d4ae5209bc954001d49982a0bb99e94 wwn-0x6d4ae5209bc954001d4998410d19abe2 mirror wwn-0x6d4ae5209bc954001d4998670f5dc028 wwn-0x6d4ae5209bc954001d49987f10c71ca8
>>>
>>>cannot create 'tank01': invalid argument for this pool operation
>>>
>>> 
>>>
>>>root@test01:~# zpool status
>>>
>>>no pools available
>>>
>>>
>>> 
>>>
>>>
>>>>If you want to use the plain device names, just use this:
>>>>
>>>>zpool create -f -o ashift=12 tank mirror sdb sdc mirror sdd sde
>>>>
>>>>The second variant is not recommended: if the system boots with a new disk attached, the sdX names can shift and it is easy to mess up the pool. Avoid using sdX device names; use the disk IDs instead.
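If you are unsure which stable ID corresponds to a given sdX name, the by-id symlinks show the mapping (sdb is just an example device here):

ls -l /dev/disk/by-id/ | grep -w sdb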
>>>>
>>>>root@test01:~# zpool create -f -o ashift=12 tank mirror sdb sdc mirror sdd sde
>>>>
>>>>cannot create 'tank': invalid argument for this pool operation
>>>>
>>>> 
>>>>
>>>>and the PVE system uses LVM on sda.
>>>>
>>>>
>>>># df -h
>>>>
>>>>Filesystem Size Used Avail Use% Mounted on
>>>>
>>>>udev 10M 0 10M 0% /dev
>>>>
>>>>tmpfs 13G 476K 13G 1% /run
>>>>
>>>>/dev/mapper/pve-root 50G 1.2G 46G 3% /
>>>>
>>>>tmpfs 5.0M 0 5.0M 0% /run/lock
>>>>
>>>>tmpfs 26G 47M 26G 1% /run/shm
>>>>
>>>>/dev/mapper/pve-data 1.8T 33M 1.8T 1% /var/lib/vz
>>>>
>>>>/dev/fuse 30M 112K 30M 1% /etc/pve
>>>>
>>>>
>>>> 
>>>>
>>>>thanks.
>>>>
>>>> 
>>>>
>>>>
>>>> 

