Patch cinder driver: create snapshot from volume & create image #8

Closed
huynp911 wants to merge 4 commits from huynp911/vitastor:master into master
  1. Fix create snapshot from volume
    vol_name is the ID of the new volume created from the snapshot, not the ID of the snapshot's parent volume. That's why the _get_name function can't find any valid volume

  2. Add a failure case for image creation to avoid an infinite loop

huynp911 added 1 commit 8 months ago
huynp911 added 1 commit 6 months ago
huynp911 added 1 commit 6 months ago
huynp911 added 1 commit 6 months ago
huynp911 closed this pull request 3 months ago
huynp911 reopened this pull request 3 months ago
vitalif requested changes 2 months ago
snap_name = utils.convert_str(snapshot.name)
- snap = self._get_image(vol_name+'@'+snap_name)
+ snap = self._get_image("volume-"+snapshot.volume_id+'@'+snap_name)
vitalif (Owner)

Doesn't it give the same result?

huynp911 (Poster)

No, it's not the same. For example:
volume layer 1 (ID1) --> snapshot layer 1 (ID2) --> volume layer 2 (ID3)

I want to create a volume from the snapshot, whose name is volume-ID1@snapshot-ID2.
But the problem is that vol_name has the value volume-ID3, not volume-ID1, and cinder will try to find volume-ID3@snapshot-ID2, which does not exist.
So I use snapshot.volume_id, which has the value ID1, add the prefix volume-, and then cinder is able to find the snapshot.
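To make this concrete, here is a minimal sketch of the corrected lookup in the driver. Only the changed line is taken from the patch above; the surrounding names (volume.name, the example IDs in the comments) are assumptions for illustration, not the actual driver code:

# Sketch of the corrected lookup in the create-volume-from-snapshot path
vol_name = utils.convert_str(volume.name)        # "volume-ID3", the new volume being created
snap_name = utils.convert_str(snapshot.name)     # "snapshot-ID2"
# The snapshot image is stored under its parent volume, so the name must be
# built from snapshot.volume_id ("ID1"), not from vol_name:
snap = self._get_image("volume-" + snapshot.volume_id + '@' + snap_name)
# The old code used vol_name here and looked for "volume-ID3@snapshot-ID2",
# which does not exist; the snapshot actually lives at "volume-ID1@snapshot-ID2".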

vitalif (Owner)

Ok, I see

args = [
'vitastor-cli', 'rm-data', '--pool', str(kv['value']['pool_id']),
- '--inode', str(kv['value']['id']), '--progress', '0',
+ '--inode', str(kv['value']['id']), '--iodepth', '4', '--progress', '0',
vitalif (Owner)

4 is the default :-), why did you want to change it?

huynp911 (Poster)

My CentOS cluster can't delete a large volume with a higher iodepth (it causes the OSDs to restart constantly, as I mentioned before).

After switching to Debian, this issue is gone, so you can ignore this :v

vitalif (Owner)

There was a bug in inode removal which I finally fixed in 0.7.1 - it was listing too many PGs at once regardless of the iodepth and parallel_osds settings :-) so maybe now your problem won't reproduce anyway...

**cfg, 'name': vol_name,
}) } },
], 'failure': [
{ 'request_put': { 'key': 'index/maxid/'+pool_s, 'value': image_id } },
vitalif (Owner)

It's definitely not good, the idea is:

  1. check current maxid counter
  2. create an image with the new ID only if maxid didn't change between 1 and 2
  3. restart from 1 if maxid changed
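A rough sketch of the loop described above, following the etcd v3 txn shape and the index/maxid key already used in the diff; the _etcd_txn helper, the value decoding, and the image-creation puts are assumptions, not the actual driver code:

while True:
    # 1. Read the current maxid counter and remember its mod_revision.
    resp = self._etcd_txn({'success': [
        {'request_range': {'key': 'index/maxid/'+pool_s}},
    ]})
    kvs = resp['responses'][0]['response_range'].get('kvs', [])
    maxid = int(kvs[0]['value']) if kvs else 0
    mod_revision = kvs[0]['mod_revision'] if kvs else 0
    image_id = maxid + 1
    # 2. Create the image only if maxid was not changed in the meantime.
    resp = self._etcd_txn({
        'compare': [{'target': 'MOD', 'key': 'index/maxid/'+pool_s,
                     'mod_revision': mod_revision}],
        'success': [
            {'request_put': {'key': 'index/maxid/'+pool_s, 'value': image_id}},
            # ...plus the request_put entries that actually create the image...
        ],
    })
    if resp.get('succeeded'):
        break
    # 3. maxid changed between steps 1 and 2 - retry from step 1.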
huynp911 (Poster)

What if I put an inode using etcdctl with id = 1, while maxid at that time was still 0?

You can see the conflict here: maxid = 0, but an inode with ID 1 already exists.
-> OpenStack Cinder can't create a new volume

vitalif (Owner)

Ok, so you want to fix the case where some inodes are created by hand and index/maxid is absent or incorrect?

I think in this case you should do a request_range on failure, check if the inode that you tried to create actually exists, and then scan available inode numbers if it does...

Or maybe I'll implement it in vitastor-cli create and then I'll just rework cinder driver to use vitastor-cli instead of direct etcd communication...
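As a sketch only of what that recovery path could look like; the config/inode/... key layout, the _etcd_txn helper, and the _find_free_inode_id name are assumptions here, not an existing driver API:

def _find_free_inode_id(self, pool_s, start_id):
    # When the create transaction fails, check whether the ID we tried is
    # really taken (e.g. an inode created by hand via etcdctl) and scan
    # forward until a free inode number is found.
    image_id = start_id
    while True:
        resp = self._etcd_txn({'success': [
            {'request_range': {'key': 'config/inode/'+pool_s+'/'+str(image_id)}},
        ]})
        if not resp['responses'][0]['response_range'].get('kvs'):
            return image_id    # this inode number is free
        image_id += 1          # taken, try the next one

index/maxid would then be bumped to the chosen ID in the same compare-and-swap transaction as in the sketch above.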

vitalif (Owner)

Hi, sorry, I missed your PR. :-)

vitalif (Owner)

Ok, I merged only the first change, the volume-from-snapshot creation fix.

vitalif closed this pull request 2 months ago
vitalif (Owner)

The case with missing /index/maxid was already handled in vitastor-cli create, but it had a bug which I just fixed in master

