Kubernetes volumes are created & bound but won't mount #36

Closed
opened 2022-06-11 13:57:58 +03:00 by tonybogdanov · 2 comments

Following this guide (https://yourcmc.ru/git/vitalif/vitastor/src/branch/master/docs/installation/kubernetes.en.md) I am able to get Vitastor running in my GKE cluster.

I am also able to create PVCs, but any time I try to mount one in a pod, the pod gets stuck in the ContainerCreating state.
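
For reference, the PVC and pod I'm testing with look roughly like this (a minimal sketch: the claim name, namespace, storage class, size and volume name match the logs below; the rest of the pod spec is just an illustrative busybox):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-vitastor
  namespace: vitastor-system
spec:
  storageClassName: vitastor
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 512Mi   # = the 536870912 bytes seen in the CreateVolume request below
---
apiVersion: v1
kind: Pod
metadata:
  name: test-vitastor-pod
  namespace: vitastor-system
spec:
  containers:
    - name: test
      image: busybox
      command: [ "sleep", "3600" ]
      volumeMounts:
        - name: vitastor
          mountPath: /mnt/vitastor
  volumes:
    - name: vitastor
      persistentVolumeClaim:
        claimName: test-vitastor
```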

The pod's events show this:

```
Unable to attach or mount volumes: unmounted volumes=[vitastor], unattached volumes=[vitastor kube-api-access-qtqgk]: timed out waiting for the condition
MountVolume.SetUp failed for volume "pvc-f7a5a63e-0fda-43eb-991e-2e2f93f97d2c" : rpc error: code = DeadlineExceeded desc = context deadline exceeded
```
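
(For completeness, this is what I'm using to look at things; the pod name is a placeholder:)

```
kubectl -n vitastor-system describe pod <test-pod>    # shows the events above
kubectl -n vitastor-system get pvc test-vitastor      # the claim itself is Bound
kubectl get volumeattachments                         # the attachment shows ATTACHED=true
```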

The `csi-vitastor-provisioner` logs show this:

provision "vitastor-system/test-vitastor" class "vitastor": started
Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"vitastor-system", Name:"test-vitastor", UID:"f7a5a63e-0fda-43eb-991e-2e2f93f97d2c", APIVersion:"v1", ResourceVersion:"393248739", FieldPath:""}): type: 'Normal' reason: 'Provisioning' External provisioner is provisioning volume for claim "vitastor-system/test-vitastor"
GRPC call: /csi.v1.Controller/CreateVolume
GRPC request: {"capacity_range":{"required_bytes":536870912},"name":"pvc-f7a5a63e-0fda-43eb-991e-2e2f93f97d2c","parameters":{"csi.storage.k8s.io/pv/name":"pvc-f7a5a63e-0fda-43eb-991e-2e2f93f97d2c","csi.storage.k8s.io/pvc/name":"test-vitastor","csi.storage.k8s.io/pvc/namespace":"vitastor-system","etcdVolumePrefix":"","poolId":"1"},"volume_capabilities":[{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}}]}
received controller create volume request {"capacity_range":{"required_bytes":536870912},"name":"pvc-f7a5a63e-0fda-43eb-991e-2e2f93f97d2c","parameters":{"csi.storage.k8s.io/pv/name":"pvc-f7a5a63e-0fda-43eb-991e-2e2f93f97d2c","csi.storage.k8s.io/pvc/name":"test-vitastor","csi.storage.k8s.io/pvc/namespace":"vitastor-system","etcdVolumePrefix":"","poolId":"1"},"volume_capabilities":[{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}}]}
GRPC response: {"volume":{"capacity_bytes":536870912,"volume_id":"{\"name\":\"pvc-f7a5a63e-0fda-43eb-991e-2e2f93f97d2c\"}"}}
GRPC error: <nil>
create volume rep: {CapacityBytes:536870912 VolumeId:{"name":"pvc-f7a5a63e-0fda-43eb-991e-2e2f93f97d2c"} VolumeContext:map[] ContentSource:<nil> AccessibleTopology:[] XXX_NoUnkeyedLiteral:{} XXX_unrecognized:[] XXX_sizecache:0}
successfully created PV pvc-f7a5a63e-0fda-43eb-991e-2e2f93f97d2c for PVC test-vitastor and csi volume name {"name":"pvc-f7a5a63e-0fda-43eb-991e-2e2f93f97d2c"}
successfully created PV {GCEPersistentDisk:nil AWSElasticBlockStore:nil HostPath:nil Glusterfs:nil NFS:nil RBD:nil ISCSI:nil Cinder:nil CephFS:nil FC:nil Flocker:nil FlexVolume:nil AzureFile:nil VsphereVolume:nil Quobyte:nil AzureDisk:nil PhotonPersistentDisk:nil PortworxVolume:nil ScaleIO:nil Local:nil StorageOS:nil CSI:&CSIPersistentVolumeSource{Driver:csi.vitastor.io,VolumeHandle:{"name":"pvc-f7a5a63e-0fda-43eb-991e-2e2f93f97d2c"},ReadOnly:false,FSType:ext4,VolumeAttributes:map[string]string{storage.kubernetes.io/csiProvisionerIdentity: 1654943718880-8081-csi.vitastor.io,},ControllerPublishSecretRef:nil,NodeStageSecretRef:nil,NodePublishSecretRef:nil,ControllerExpandSecretRef:nil,}}
provision "vitastor-system/test-vitastor" class "vitastor": volume "pvc-f7a5a63e-0fda-43eb-991e-2e2f93f97d2c" provisioned
provision "vitastor-system/test-vitastor" class "vitastor": succeeded
Saving volume pvc-f7a5a63e-0fda-43eb-991e-2e2f93f97d2c
Volume pvc-f7a5a63e-0fda-43eb-991e-2e2f93f97d2c saved
Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"vitastor-system", Name:"test-vitastor", UID:"f7a5a63e-0fda-43eb-991e-2e2f93f97d2c", APIVersion:"v1", ResourceVersion:"393248739", FieldPath:""}): type: 'Normal' reason: 'ProvisioningSucceeded' Successfully provisioned volume pvc-f7a5a63e-0fda-43eb-991e-2e2f93f97d2c
Claim processing succeeded, removing PVC f7a5a63e-0fda-43eb-991e-2e2f93f97d2c from claims in progress
Started VA processing "csi-1089c56560da488b5a30a6e4cde9afcb973a1869ea66ed8d75a097783bb4a970"
Trivial sync[csi-1089c56560da488b5a30a6e4cde9afcb973a1869ea66ed8d75a097783bb4a970] started
Marking as attached "csi-1089c56560da488b5a30a6e4cde9afcb973a1869ea66ed8d75a097783bb4a970"
Marked as attached "csi-1089c56560da488b5a30a6e4cde9afcb973a1869ea66ed8d75a097783bb4a970"
Marked VolumeAttachment csi-1089c56560da488b5a30a6e4cde9afcb973a1869ea66ed8d75a097783bb4a970 as attached
Started VA processing "csi-1089c56560da488b5a30a6e4cde9afcb973a1869ea66ed8d75a097783bb4a970"
Trivial sync[csi-1089c56560da488b5a30a6e4cde9afcb973a1869ea66ed8d75a097783bb4a970] started
```

After that point I only get `successfully renewed lease` logs.
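
Since provisioning and the VolumeAttachment succeed on the controller side, I assume the timeout happens in the node plugin (the CSI NodeStageVolume/NodePublishVolume calls) on the worker itself. This is roughly how I'm inspecting that side (the pod and container names are guesses based on the manifests from the guide, so they may differ):

```
kubectl -n vitastor-system get pods -o wide | grep csi-vitastor
kubectl -n vitastor-system logs <csi-vitastor-node-pod> -c csi-vitastor
```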

What am I missing?

P.S. I'm using Vitastor 0.7.0 because hub.docker.com doesn't have 0.7.1.

Hi, where did you install Vitastor OSDs and Monitor?
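
The CSI driver only covers the Kubernetes side; etcd, the monitor and the OSDs have to be installed separately on the storage nodes, as described in the installation docs. A quick sanity check is to look at the keys the monitor and OSDs keep in etcd (assuming the default /vitastor etcd prefix; the endpoint is a placeholder):

```
# empty output here means no monitor has registered
etcdctl --endpoints=http://<etcd-host>:2379 get /vitastor/mon/master
# empty output here means no OSDs are up
etcdctl --endpoints=http://<etcd-host>:2379 get --prefix /vitastor/osd/state/
```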

It seems you didn't :-) I'll close the bug. You'll get a k8s operator soon ;-)
