run qemu error #1

Closed
by Ghost opened 1 year ago · 17 comments
Ghost commented 1 year ago

Hi, when I install Vitastor on CentOS 7 (kernel 5.9), I get the following output:

linux# LD_PRELOAD=/usr/lib64/qemu-kvm/block-vitastor.so qemu-img convert -f qcow2 /root/vm-disk-1.qcow2 -p -O raw 'vitastor:etcd_host=192.168.30.102\:2379/v3:pool=1:inode=1:size=2148073472'
qemu-img: symbol lookup error: /usr/lib64/qemu-kvm/block-vitastor.so: undefined symbol: bdrv_has_zero_init_1

I tried building from source and ran the same command, and got:

linux# LD_PRELOAD=/usr/lib64/qemu-kvm/block-vitastor.so qemu-img convert -f qcow2 /root/vm-disk-1.qcow2 -p -O raw 'vitastor:etcd_host=192.168.30.102\:2379/v3:pool=1:inode=1:size=2148073472'
qemu-img: symbol lookup error: /usr/lib64/qemu-kvm/block-vitastor.so: undefined symbol: PreallocMode_lookup

Then I rebuilt QEMU (3.1.1) with the patch (qemu-3.1-vitastor.patch), but got:

linux@qemu3.1.1# ./qemu-img convert -f qcow2 /root/vm-disk-1.qcow2 -p -O raw 'vitastor:etcd_host=192.168.30.102\:2379/v3:pool=1:inode=1:size=2148073472'
qemu-img: Unknown protocol 'vitastor'

Who can help me?
Thanks!

Owner

Hi, I'll try to help you :) there's nobody else here anyway :))

I'll recheck it and reply here.

Owner

OK, it seems like a build issue.

> qemu-img: symbol lookup error

This error probably means you're trying to use the block driver from a mismatching QEMU build.

You must build everything from source following the instructions if you want a custom QEMU. Packaged versions can only be used with the QEMU from the same repository.
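If in doubt, the mismatch can be confirmed by comparing the dynamic symbols the plugin leaves undefined against what the qemu-img binary actually exports. A diagnostic sketch (the paths are the ones from the error above; `plugin-needs.txt` and `binary-has.txt` are just scratch files):

```shell
# Diagnostic sketch: list the dynamic symbols the plugin needs and check
# whether the host binary exports them. Paths are from the error message
# above; adjust for your install.
PLUGIN=/usr/lib64/qemu-kvm/block-vitastor.so
BINARY=$(command -v qemu-img)

# Undefined symbols the plugin expects the host to provide:
nm -D "$PLUGIN" | awk '$1 == "U" {print $2}' | sort > plugin-needs.txt
# Symbols the binary actually exports:
nm -D "$BINARY" | awk '$2 == "T" {print $3}' | sort > binary-has.txt

# Anything printed here is a symbol the plugin needs but the binary
# lacks, i.e. a mismatched build:
comm -23 plugin-needs.txt binary-has.txt
```

With the original error above, `bdrv_has_zero_init_1` would show up in that last output.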

By the way, where did you get qemu 3.1 for centos 7?


I got the qemu-3.1 source from https://download.qemu.org/qemu-3.1.1.tar.xz.

Owner

Oh, so you tried to build it from source. Yeah, it's definitely possible, just copy headers to Vitastor build directory. Like stated here: https://yourcmc.ru/git/vitalif/vitastor/#user-content-building-from-source


When I install Vitastor following the instructions, it can't find fio 3.7-1:

linux# yum install vitastor
...
Error: Package: vitastor-0.5-1.el7.x86_64 (vitastor)
           Requires: fio = 3.7-1.el7
           Available: fio-3.1-1.el7.x86_64 (epel)
               fio = 3.1-1.el7
           Available: fio-3.7-2.el7.x86_64 (base)
               fio = 3.7-2.el7
	file / from install of etcd-3.4.13_12402-2.x86_64 conflicts with file from package filesystem-3.2-25.el7.x86_64
	file /usr/local from install of etcd-3.4.13_12402-2.x86_64 conflicts with file from package filesystem-3.2-25.el7.x86_64
	file /usr/local/bin from install of etcd-3.4.13_12402-2.x86_64 conflicts with file from package filesystem-3.2-25.el7.x86_64

I got fio 3.7-1 from http://vault.centos.org/7.8.2003/os/Source/SPackages/fio-3.7-1.el7.src.rpm, but got:

linux# fio -thread -ioengine=/usr/lib64/vitastor/libfio_cluster.so -name=test -bs=4M -direct=1 -iodepth=1 -rw=write -etcd=192.168.30.102:2379/v3 -pool=1 -inode=1 -size=32G
test: (g=0): rw=write, bs=(R) 4096KiB-4096KiB, (W) 4096KiB-4096KiB, (T) 4096KiB-4096KiB, ioengine=vitastor_cluster, iodepth=1
fio-3.7
Starting 1 thread
src/tcmalloc.cc:284] Attempt to free invalid pointer 0x7f00a0010920 
Aborted

If I use fio-3.7 built from https://github.com/axboe/fio/releases/tag/fio-3.7, the fio process can't be stopped and iostat shows no I/O on the disks:

linux# ./fio -thread=1 -ioengine=/usr/lib64/vitastor/libfio_cluster.so -name=test -bs=4M -direct=1 -iodepth=1 -rw=randwrite -etcd=192.168.30.102:2379/v3 -pool=1 -inode=1 -size=10G -runtime=60
test: (g=0): rw=randwrite, bs=(R) 4096KiB-4096KiB, (W) 4096KiB-4096KiB, (T) 4096KiB-4096KiB, ioengine=vitastor_cluster, iodepth=1
fio-3.7
Starting 1 thread
Jobs: 1 (f=1): [w(1)][72.5%][r=0KiB/s,w=0KiB/s][r=0,w=0 IOPS][eta 01m:00s]

linux# iostat -dmx 2
Device:         rrqm/s   wrqm/s     r/s     w/s    rMB/s    wMB/s avgrq-sz avgqu-sz   await r_await w_await  svctm  %util
nvme0n1           0.00     0.00    0.00    0.00     0.00     0.00     0.00     0.00    0.00    0.00    0.00   0.00   0.00
sdb               0.00     0.00    0.00    0.00     0.00     0.00     0.00     0.00    0.00    0.00    0.00   0.00   0.00
sdc               0.00     0.00    0.00    0.00     0.00     0.00     0.00     0.00    0.00    0.00    0.00   0.00   0.00

I also rebuilt qemu-2.0.0 (obtained via `yumdownloader --source qemu`) and it still can't work.
So where can I get the correct package?

Owner

> So where can I get the correct package?

Build Vitastor from source, it seems like the most reliable way %)

fio was updated to 3.7-2 in centos 7, that's why vitastor doesn't install. I'm rebuilding it now. Some centos 7 repos (centos-sclo-rh-source, centos-sclo-sclo-testing) are broken currently so my build Dockerfile requires some manual intervention...

With 3.7-1 it should work, no idea why you get this error:

> [src/tcmalloc.cc:284] Attempt to free invalid pointer 0x7f00a0010920

This one:

> If I use fio-3.7 built from https://github.com/axboe/fio/releases/tag/fio-3.7, the fio process can't be stopped and iostat shows no I/O on the disks.

Means that the Vitastor client is initialised correctly, but your cluster is down (or there's no cluster at all %)). The client waits forever for the cluster to come up if it's down (maybe I should make it just die if the cluster isn't initialised at all).
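Before blaming fio, it's worth confirming that etcd answers and that any PG has reported state. A sketch using the endpoint and prefix from this thread (`timeout` is there so the check fails fast instead of hanging like the client does):

```shell
# Sketch: verify etcd answers and PGs are up before running fio.
# Endpoint and /vitastor prefix are the ones used elsewhere in this thread.
ETCD=http://192.168.30.102:2379

# Fail fast instead of waiting forever like the client does:
timeout 5 etcdctl --endpoints "$ETCD" endpoint health \
    || echo "etcd unreachable"

# If this prints nothing, no PG is up yet and all I/O will hang:
timeout 5 etcdctl --endpoints "$ETCD" get --prefix /vitastor/pg/state || true
```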

And yes, fio doesn't stop on Ctrl-C (that's fio's feature). You can kill it with Ctrl-Z, then `killall -9 fio` in this case.

Why do you look at iostat at all? Do you have Vitastor OSDs deployed on these devices?

Owner

I checked qemu-img on centos 7, related to your original error (in fact I haven't checked it originally):

linux# LD_PRELOAD=/usr/lib64/qemu-kvm/block-vitastor.so qemu-img convert -f qcow2 /root/vm-disk-1.qcow2 -p -O raw 'vitastor:etcd_host=192.168.30.102\:2379/v3:pool=1:inode=1:size=2148073472'
qemu-img: symbol lookup error: /usr/lib64/qemu-kvm/block-vitastor.so: undefined symbol: bdrv_has_zero_init_1

Oops. QEMU 2.0 doesn't support dynamic loading of block drivers at all so Vitastor driver can't be loaded. I was hoping it would load anyway, but no.

So for QEMU 2.0 the only way to build Vitastor driver is to include it into QEMU source tree, which I haven't tried at all, and I probably don't want to try it. I'll rather try to package newer QEMU for centos 7...

For now try to build everything from source or use Debian.

// By the way, I discovered CentOS 8 build is also broken, I'll fix it soon. :-)

Owner

By the way, if you know where to get QEMU >= 3.x for CentOS 7 - tell me :)


> Why do you look at iostat at all? Do you have Vitastor OSDs deployed on these devices?

Yes, there are 3 nodes and every node has 2 OSDs:

root      3456  0.8  0.3 11561100 401420 pts/2 Sl+  Nov30  16:15 /usr/local/bin/etcd -name etcd1 --data-dir /var/lib/etcd1.etcd --advertise-client-urls http://192.168.30.102:2379 --listen-client-urls http://192.168.30.102:2379 --initial-advertise-peer-urls http://192.168.30.102:2380 --listen-peer-urls http://192.168.30.102:2380 --initial-cluster-token vitastor-etcd-1 --initial-cluster etcd1=http://192.168.30.102:2380,etcd2=http://192.168.30.118:2380,etcd3=http://192.168.15.10:2380 --initial-cluster-state new --max-txn-ops=100000 --auto-compaction-retention=10 --auto-compaction-mode=revision
root      3487  0.0  0.1 150484 131588 pts/7   SL+  Nov30   0:05 /usr/bin/vitastor-osd --etcd_address 192.168.30.102:2379/v3 --bind_address 192.168.30.102 --osd_num 2 --disable_data_fsync 1 --disable_device_lock 1 --immediate_commit all --flusher_count 8 --disk_alignment 4096 --journal_block_size 4096 --meta_block_size 4096 --journal_no_same_sector_overwrites true --journal_sector_buffer_count 1024 --journal_offset 0 --meta_offset 16777216 --data_offset 123817984 --data_size 499984044032 --data_device /dev/disk/by-id/ata-Samsung_SSD_860_EVO_500GB_S3Z3NB1KB15149N
root      3493  0.0  0.0 148436 125156 pts/3   SL+  Nov30   0:05 /usr/bin/vitastor-osd --etcd_address 192.168.30.102:2379/v3 --bind_address 192.168.30.102 --osd_num 1 --disable_data_fsync 1 --disable_device_lock 1 --immediate_commit all --flusher_count 8 --disk_alignment 4096 --journal_block_size 4096 --meta_block_size 4096 --journal_no_same_sector_overwrites true --journal_sector_buffer_count 1024 --journal_offset 0 --meta_offset 16777216 --data_offset 123817984 --data_size 499984044032 --data_device /dev/disk/by-id/ata-Samsung_SSD_860_EVO_500GB_S3Z3NB1KB14653T

Is there any error in the OSD start commands (are data_offset/journal_offset OK?) or the etcd command? The OSD state is:

[root@vnode1 ~]# etcdctl --endpoints http://192.168.30.102:2379 get --prefix /vitastor/osd/state
/vitastor/osd/state/1
{"addresses": ["192.168.30.102"], "blockstore_enabled": true, "host": "vnode1", "port": 46325, "primary_enabled": true, "state": "up"}
/vitastor/osd/state/2
{"addresses": ["192.168.30.102"], "blockstore_enabled": true, "host": "vnode1", "port": 39703, "primary_enabled": true, "state": "up"}
/vitastor/osd/state/3
{"addresses": ["192.168.30.118"], "blockstore_enabled": true, "host": "vnode2", "port": 38495, "primary_enabled": true, "state": "up"}
/vitastor/osd/state/4
{"addresses": ["192.168.30.118"], "blockstore_enabled": true, "host": "vnode2", "port": 32785, "primary_enabled": true, "state": "up"}
/vitastor/osd/state/5
{"addresses": ["192.168.15.10"], "blockstore_enabled": true, "host": "vnode3", "port": 43969, "primary_enabled": true, "state": "up"}
/vitastor/osd/state/6
{"addresses": ["192.168.15.10"], "blockstore_enabled": true, "host": "vnode3", "port": 45647, "primary_enabled": true, "state": "up"}

But querying the PG state gives no output:

[root@vnode1 ~]# etcdctl --endpoints http://192.168.30.102:2379 get --prefix /vitastor/pg/state

Um, I can't find qemu-3.1 for CentOS 7, so I downloaded the source from https://download.qemu.org/qemu-3.1.1.tar.xz.

Owner

OK, I think your OSDs are fine, but did you create any pools? And did you start the monitor service so it could create PGs?

Check /vitastor/config/pgs - are there any PG definitions?

Owner

> Um, I can't find qemu-3.1 for CentOS 7, so I downloaded the source from https://download.qemu.org/qemu-3.1.1.tar.xz.

Did it build successfully? Does CentOS 7 include all the required dependencies?


Yes, I created testpool and ran the monitor service:

[root@vnode1 ~]# etcdctl --endpoints http://192.168.30.102:2379 get --prefix  /vitastor/config/pools
/vitastor/config/pools
{"1":{"name":"testpool","scheme":"replicated","pg_size":2,"pg_minsize":1,"pg_count":256,"failure_domain":"host"}}


[root@vnode1 ~]# etcdctl --endpoints http://192.168.30.102:2379 get --prefix  /vitastor/config/pgs
/vitastor/config/pgs
{"items":{"1":{"1":{"osd_set":["2","4"],"primary":"2"},"2":{"osd_set":["2","4"],"primary":"2"},"3":{"osd_set":["2","4"],"primary":"4"},"4":{"osd_set":["2","4"],"primary":"4"},"5":{"osd_set":["2","4"],"primary":"2"},"6":{"osd_set":["2","4"],"primary":"4"},"7":{"osd_set":["2","4"],"primary":"4"},"8":{"osd_set":["2","4"],"primary":"2"},"9":{"osd_set":["2","4"],"primary":"2"},...,"256":{"osd_set":["3","5"],"primary":"3"}}},"hash":"a533843eb35c2efbe44dff2aa345ca5e3f12d7f6"}

[root@vnode1 ~]# ps aux|grep mon-main
root     11121  1.8  0.0 567200 35776 pts/5    Sl+  09:03   0:00 node /usr/lib/vitastor/mon/mon-main.js --etcd_url http://192.168.30.102:2379,http://192.168.30.118:2379,http://192.168.15.10:2379 --etcd_prefix /vitastor --etcd_start_timeout 5

> Did it build successfully? Does CentOS 7 include all the required dependencies?

Yes, all dependencies are OK.

Owner

If /vitastor/config/pgs is there, /vitastor/pg/state/xxx/yyy should also be there.
Please check the OSD logs. What do the OSDs say?

Owner

OK. I just packaged qemu 4.2 for centos 7 (took the spec from centos 8, patched it and disabled some things) and updated the repository. Now packages install well.


Thanks, qemu-img is OK now, but there is still an issue with qemu-kvm:

[root@vnode1 libexec]# /usr/libexec/qemu-kvm -enable-kvm -m 1024 -drive 'file=vitastor:etcd_host=192.168.30.102\:2379/v3:pool=1:inode=1:size=2148073472',format=raw,if=none,id=drive-virtio-disk0,cache=none -device virtio-blk-pci,scsi=off,bus=pci.0,addr=0x5,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=1,write-cache=off,physical_block_size=4096,logical_block_size=512 -vnc 0.0.0.0:0 
qemu-kvm: -drive file=vitastor:etcd_host=192.168.30.102\:2379/v3:pool=1:inode=1:size=2148073472,format=raw,if=none,id=drive-virtio-disk0,cache=none: Driver 'vitastor' is not whitelisted
[root@vnode1 libexec]# /usr/libexec/qemu-kvm -drive format=?
Supported formats: blkdebug copy-on-read file gluster host_device iscsi luks nbd null-co nvme qcow2 raw throttle
Supported formats (read-only): blkdebug copy-on-read file gluster host_device https iscsi luks nbd null-co nvme qcow2 raw ssh throttle vhdx vmdk vpc

As this thread says (https://lists.centos.org/pipermail/centos-virt/2017-April/005504.html), adding the driver to the whitelist will solve the issue.
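For reference, QEMU's driver whitelist is baked in at configure time via `--block-drv-rw-whitelist` / `--block-drv-ro-whitelist`, so the packaging-side fix is to add vitastor to that list in the spec file. A build-config sketch (the surrounding driver names are illustrative, not the distribution's real list):

```shell
# Build-config sketch: the whitelist comes from qemu's configure flags.
# Take the real driver lists from the distribution's qemu-kvm.spec and
# just append vitastor to the read-write list.
./configure \
    --block-drv-rw-whitelist=qcow2,raw,file,host_device,blkdebug,vitastor \
    --block-drv-ro-whitelist=vmdk,vhdx,vpc,https,ssh
```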

Owner

OK, I removed the whitelist.

Then this one started to reproduce on CentOS 7:

linux# fio -thread -ioengine=/usr/lib64/vitastor/libfio_cluster.so -name=test -bs=4M -direct=1 -iodepth=1 -rw=write -etcd=192.168.30.102:2379/v3 -pool=1 -inode=1 -size=32G
test: (g=0): rw=write, bs=(R) 4096KiB-4096KiB, (W) 4096KiB-4096KiB, (T) 4096KiB-4096KiB, ioengine=vitastor_cluster, iodepth=1
fio-3.7
Starting 1 thread
src/tcmalloc.cc:284] Attempt to free invalid pointer 0x7f00a0010920 
Aborted

This is caused by some kind of allocator (glibc vs tcmalloc) conflict. It seems tcmalloc tries to free() a pointer allocated by glibc. It goes away if you run it with `LD_PRELOAD=/lib64/libtcmalloc.so`. Interestingly, it doesn't reproduce on CentOS 8 or Debian, so I fixed it for CentOS 7 by rebuilding QEMU with tcmalloc... Another point is that I should probably investigate the actual benefit of tcmalloc; maybe it doesn't make much sense for the single-threaded Vitastor OSD.

Then I also discovered that QEMU 4.2 was missing some EFI roms in ipxe-roms-qemu (/usr/share/ipxe.efi), because CentOS 7's version doesn't include them. So I also rebuilt ipxe-roms-qemu from CentOS 8.

Also I packaged jerasure library and rebuilt Vitastor with jerasure support added recently.

CentOS 7 packages seem to finally work...

vitalif closed this issue 12 months ago