Patch cinder driver: create snapshot from volume & create image #8

Closed
huynp911 wants to merge 4 commits from huynp911/vitastor:master into master
1 changed file with 9 additions and 2 deletions


@@ -1,3 +1,8 @@
# Workaround:
# 461: allow to create volume from snapshot
# 527: adjust iodepth for faster deletion
# 591: add failure case when creating a volume
# Vitastor Driver for OpenStack Cinder
#
# --------------------------------------------
@@ -455,7 +460,7 @@ class VitastorDriver(driver.CloneableImageVD,
vol_name = utils.convert_str(volume.name)
snap_name = utils.convert_str(snapshot.name)
- snap = self._get_image(vol_name+'@'+snap_name)
+ snap = self._get_image("volume-"+snapshot.volume_id+'@'+snap_name)

Doesn't it give the same result?

No, it's not the same. For example:
volume layer 1 (ID1) --> snapshot layer 1 (ID2) --> volume layer 2 (ID3)

I want to create a volume from the snapshot whose name is `volume-ID1@snapshot-ID2`.
But the problem is that `vol_name` has the value `volume-ID3`, not `volume-ID1`, so cinder will try to find `volume-ID3@snapshot-ID2`, which does not exist.
So I use `snapshot.volume_id`, which has the value ID1, and add the prefix `volume-`, and then cinder is able to find the snapshot.

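To make the example above concrete, here is a tiny standalone sketch with made-up IDs; only the two name-building expressions mirror the diff, the stub classes are purely illustrative:

```python
# Stand-in objects, for illustration only: the new volume being created
# (layer 2) and the snapshot it is cloned from (taken on the layer-1 volume).
class Snapshot:
    name = 'snapshot-ID2'   # snapshot taken on volume-ID1
    volume_id = 'ID1'       # ID of the volume the snapshot belongs to

class Volume:
    name = 'volume-ID3'     # the new volume being created from the snapshot

volume, snapshot = Volume(), Snapshot()

# Old code: builds the name from the new volume -> 'volume-ID3@snapshot-ID2',
# which does not exist in Vitastor.
old_name = volume.name + '@' + snapshot.name

# Patched code: builds it from the snapshot's parent -> 'volume-ID1@snapshot-ID2'.
new_name = 'volume-' + snapshot.volume_id + '@' + snapshot.name

print(old_name, '->', new_name)
```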

Ok, I see
if not snap:
raise exception.SnapshotNotFound(snapshot_id = snap_name)
snap_inode_id = int(resp['responses'][0]['kvs'][0]['value']['id'])
@@ -524,7 +529,7 @@ class VitastorDriver(driver.CloneableImageVD,
for kv in layers:
args = [
'vitastor-cli', 'rm-data', '--pool', str(kv['value']['pool_id']),
- '--inode', str(kv['value']['id']), '--progress', '0',
+ '--inode', str(kv['value']['id']), '--iodepth', '4', '--progress', '0',

4 is the default :-), why did you want to change it?

My CentOS cluster can't delete a large volume with a higher iodepth (it causes the OSDs to restart constantly, as I mentioned before).

After switching to Debian, this issue is gone, so you can ignore this :v


There was a bug in inode removal which I finally fixed in 0.7.1 - it was listing too many PGs at once regardless of the iodepth and parallel_osds settings :-) so maybe now your problem won't reproduce anyway...
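For reference, a self-contained way to run the same removal command outside the driver (for example to check how a given iodepth behaves on a cluster); the pool and inode numbers are placeholders:

```python
import subprocess

# Same flags as in the patch: remove the data of one inode with iodepth
# pinned to 4 (the default) and progress output disabled.
subprocess.check_call([
    'vitastor-cli', 'rm-data',
    '--pool', '1',      # placeholder pool id
    '--inode', '42',    # placeholder inode id
    '--iodepth', '4',
    '--progress', '0',
])
```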
*(self._vitastor_args())
]
try:
@@ -584,6 +589,8 @@ class VitastorDriver(driver.CloneableImageVD,
{ 'request_put': { 'key': 'config/inode/'+pool_s+'/'+str(image_id), 'value': json.dumps({
**cfg, 'name': vol_name,
}) } },
], 'failure': [
{ 'request_put': { 'key': 'index/maxid/'+pool_s, 'value': image_id } },

It's definitely not good, the idea is:

  1. check current maxid counter
  2. create an image with the new ID only if maxid didn't change between 1 and 2
  3. restart from 1 if maxid changed
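A rough sketch of that check-and-retry loop, assuming a helper (called `etcd_txn` here, a stand-in name) that runs an etcd v3 JSON transaction with plain-string keys and returns the decoded response; the key layout follows the diff:

```python
import json

def create_image(etcd_txn, pool_id, vol_name, cfg, retries=10):
    pool_s = str(pool_id)
    for _ in range(retries):
        # 1. Read the current maxid counter for the pool.
        resp = etcd_txn({ 'success': [
            { 'request_range': { 'key': 'index/maxid/'+pool_s } },
        ] })
        kvs = resp['responses'][0].get('kvs', [])
        if kvs:
            image_id = int(kvs[0]['value']) + 1
            # Commit only if maxid is still at the revision we just read.
            cmp = [ { 'key': 'index/maxid/'+pool_s, 'target': 'MOD',
                      'mod_revision': kvs[0]['mod_revision'] } ]
        else:
            image_id = 1
            # Commit only if maxid still does not exist.
            cmp = [ { 'key': 'index/maxid/'+pool_s, 'target': 'VERSION', 'version': 0 } ]
        # 2. Bump maxid and write the inode config in a single transaction.
        resp = etcd_txn({ 'compare': cmp, 'success': [
            { 'request_put': { 'key': 'index/maxid/'+pool_s, 'value': str(image_id) } },
            { 'request_put': { 'key': 'config/inode/'+pool_s+'/'+str(image_id),
                               'value': json.dumps({ **cfg, 'name': vol_name }) } },
        ] })
        if resp.get('succeeded'):
            return image_id
        # 3. maxid changed (or appeared) between steps 1 and 2 - start over.
    raise RuntimeError('could not allocate an inode id for ' + vol_name)
```

Note that this still trusts `index/maxid` to be correct, which is exactly the gap discussed below.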

What if I put an inode using etcdctl: id = 1, while maxid at that time was still 0?

You can see the conflict here: maxid = 0, but an inode with ID 1 already exists.
-> openstack cinder can't create a new volume


Ok, so you want to fix the case where some inodes are created by hand and index/maxid is absent or incorrect?

I think in this case you should do a `request_range` on failure, check if the inode that you tried to create actually exists, and then scan the available inode numbers if it does...

Or maybe I'll implement it in `vitastor-cli create` and then I'll just rework the cinder driver to use vitastor-cli instead of direct etcd communication...

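A rough sketch of that fallback, with the same stand-in `etcd_txn` helper and the key layout from the diff: on a failed create, list `config/inode/<pool>/` to see which ids are really taken and pick the first free one, so that `index/maxid` can then be repaired:

```python
def find_free_inode_id(etcd_txn, pool_id):
    pool_s = str(pool_id)
    # List every inode config in the pool ('0' is the byte right after '/',
    # so this range covers the whole 'config/inode/<pool>/' prefix).
    resp = etcd_txn({ 'success': [
        { 'request_range': { 'key': 'config/inode/'+pool_s+'/',
                             'range_end': 'config/inode/'+pool_s+'0' } },
    ] })
    used = set()
    for kv in resp['responses'][0].get('kvs', []):
        # Keys look like config/inode/<pool>/<id>.
        used.add(int(kv['key'].rsplit('/', 1)[1]))
    free_id = 1
    while free_id in used:
        free_id += 1
    return free_id, (max(used) if used else 0)   # also return the highest used id
```

The caller would then retry the create with the free id and write the highest used id back into `index/maxid`, so the counter is consistent again for the next allocation.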
] })
if not resp.get('succeeded'):
# repeat