Patch cinder driver: create snapshot from volume & create image #8

Closed
huynp911 wants to merge 4 commits from huynp911/vitastor:master into master
  1. Fix create snapshot from volume
    `vol_name` is the ID of the new volume, not the ID of the snapshot's parent volume. That's why the `_get_name` function can't find any valid volume

  2. Add a failure case for image creation to avoid an infinite loop

huynp911 added 1 commit 2021-12-22 07:07:57 +03:00
huynp911 added 1 commit 2022-02-08 12:37:11 +03:00
huynp911 added 1 commit 2022-02-08 12:44:42 +03:00
huynp911 added 1 commit 2022-02-08 12:46:08 +03:00
huynp911 closed this pull request 2022-05-23 12:21:52 +03:00
huynp911 reopened this pull request 2022-05-23 12:30:18 +03:00
vitalif requested changes 2022-06-05 00:39:55 +03:00
@ -456,3 +461,3 @@
snap_name = utils.convert_str(snapshot.name)
snap = self._get_image(vol_name+'@'+snap_name)
snap = self._get_image("volume-"+snapshot.volume_id+'@'+snap_name)

Doesn't it give the same result?

No, it's not the same. For example:
volume layer 1 (ID1) --> snapshot layer 1 (ID2) --> volume layer 2 (ID3)

I want to create a volume from the snapshot whose name is `volume-ID1@snapshot-ID2`.
But the problem is that `vol_name` has the value `volume-ID3`, not `volume-ID1`, so cinder tries to find `volume-ID3@snapshot-ID2`, which does not exist.
So I use `snapshot.volume_id`, which has the value ID1, add the `volume-` prefix, and then cinder is able to find the snapshot.
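
A tiny, self-contained sketch of the naming rule described above (the function name and IDs are illustrative, not taken from the driver code):

```python
# Hypothetical illustration (not the driver's actual code) of the image name
# that has to be looked up when creating a volume from a snapshot.
def snapshot_image_name(snapshot_volume_id, snapshot_name):
    # The snapshot is stored under its *parent* volume, so use the parent's ID.
    return "volume-" + snapshot_volume_id + "@" + snapshot_name

# volume-ID1 --> snapshot-ID2 --> volume-ID3 (the new volume)
assert snapshot_image_name("ID1", "snapshot-ID2") == "volume-ID1@snapshot-ID2"
# Building the name from the new volume instead would give
# "volume-ID3@snapshot-ID2", which does not exist.
```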

Ok, I see

@ -525,3 +530,3 @@
args = [
'vitastor-cli', 'rm-data', '--pool', str(kv['value']['pool_id']),
'--inode', str(kv['value']['id']), '--progress', '0',
'--inode', str(kv['value']['id']), '--iodepth', '4', '--progress', '0',

4 is the default :-), why did you want to change it?

My CentOS cluster can't delete a large volume with a higher iodepth (it causes the OSDs to restart constantly, as I mentioned before).

After switching to Debian, this issue is gone, so you can ignore this :v

There was a bug in inode removal which I finally fixed in 0.7.1 - it was listing too many PGs at once regardless of the iodepth and parallel_osds settings :-) so maybe now your problem won't reproduce anyway...

@ -585,2 +590,4 @@
**cfg, 'name': vol_name,
}) } },
], 'failure': [
{ 'request_put': { 'key': 'index/maxid/'+pool_s, 'value': image_id } },

It's definitely not good. The idea is (see the sketch after this list):

  1. check current maxid counter
  2. create an image with the new ID only if maxid didn't change between 1 and 2
  3. restart from 1 if maxid changed
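
A rough sketch of that retry loop (not the driver's actual code): `etcd_txn()` here is an assumed wrapper that sends an etcd v3 transaction dict and returns the decoded response with plain string keys, and `index/maxid/<pool>` follows the key used in the diff above.

```python
# Sketch only: compare-and-swap allocation of a new image ID against etcd.
# etcd_txn() is an assumed helper (takes {'compare', 'success', 'failure'},
# returns the decoded TxnResponse); the real driver would also put the image
# config in the same 'success' branch.
def allocate_image_id(etcd_txn, pool_s):
    while True:
        # 1. check the current maxid counter
        resp = etcd_txn({'success': [
            {'request_range': {'key': 'index/maxid/' + pool_s}},
        ]})
        kvs = resp['responses'][0]['response_range'].get('kvs', [])
        old_id = int(kvs[0]['value']) if kvs else 0
        new_id = old_id + 1
        # 2. bump maxid only if it did not change between steps 1 and 2
        compare = ([{'key': 'index/maxid/' + pool_s, 'target': 'VALUE',
                     'value': str(old_id)}] if kvs else
                   [{'key': 'index/maxid/' + pool_s, 'target': 'VERSION',
                     'version': 0}])
        resp = etcd_txn({
            'compare': compare,
            'success': [{'request_put': {'key': 'index/maxid/' + pool_s,
                                         'value': str(new_id)}}],
        })
        if resp.get('succeeded'):
            return new_id
        # 3. maxid changed under us -- restart from step 1
```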

What if I put an inode manually using etcdctl: id = 1, while maxid at that time was still 0?

You can see the conflict here: maxid = 0, but an inode with ID 1 already exists.
-> OpenStack Cinder can't create a new volume

Ok, so you want to fix the case where some inodes are created by hand and index/maxid is absent or incorrect?

I think in this case you should do a request_range on failure, check if the inode that you tried to create actually exists, and then scan available inode numbers if it does...

Or maybe I'll implement it in `vitastor-cli create` and then just rework the cinder driver to use vitastor-cli instead of direct etcd communication...
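
A hedged sketch of that fallback, assuming the same hypothetical `etcd_txn()` wrapper as above; the `config/inode/<pool>/<id>` key layout is an assumption about vitastor's etcd scheme, not something shown in this diff:

```python
# Sketch only: after a failed creation transaction, check whether the inode ID
# we tried is already taken, and if so scan forward for a free inode number.
def find_free_inode_id(etcd_txn, pool_s, tried_id):
    candidate = tried_id
    while True:
        resp = etcd_txn({'success': [
            {'request_range': {'key': 'config/inode/' + pool_s + '/' + str(candidate)}},
        ]})
        if not resp['responses'][0]['response_range'].get('kvs'):
            return candidate  # unused inode number -- safe to create here
        candidate += 1        # already exists (e.g. created by hand) -- try the next one
```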

Hi, sorry, I missed your PR. :-)

Ok, I merged only the first change with volume from snapshot creation fix.

vitalif closed this pull request 2022-06-06 15:47:06 +03:00

The case with missing /index/maxid was already handled in `vitastor-cli create`, but it had a bug which I just fixed in master
