x509: certificate has expired or is not yet valid #2951
Comments
Found this here:
But I don't know what I should actually do, or what I can do.
Fixed via OpenEBS Slack online help. The fix was:
Then
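The exact commands from the Slack fix were lost in this thread, but the underlying problem is an expired TLS certificate. As a local sketch of how to confirm whether a certificate has expired with openssl (the file paths and throwaway cert here are illustrative; on a live cluster you would first extract the PEM from the relevant openebs secret with kubectl):

```shell
# Generate a throwaway self-signed certificate valid for 1 day,
# standing in for a cert pulled out of a cluster secret.
openssl req -x509 -newkey rsa:2048 -nodes -subj "/CN=demo" \
  -days 1 -keyout /tmp/demo-key.pem -out /tmp/demo-cert.pem 2>/dev/null

# Print its validity window (notBefore / notAfter).
openssl x509 -in /tmp/demo-cert.pem -noout -dates

# -checkend 0 exits 0 if the cert is still valid right now;
# an expired cert makes this command fail, matching the x509 error above.
openssl x509 -in /tmp/demo-cert.pem -noout -checkend 0 && echo "still valid"
```

The same `-dates` / `-checkend` check works on any PEM certificate, regardless of where it was extracted from.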
Thanks again. I'll mention OpenEBS usage in CNCF here: #2719 (as requested on Slack). I'll do it today, ASAP; I just need to fix other things that broke due to this issue, and some of them need to be redeployed or restarted. Once everything is green, I'll add my use case there.
Issues go stale after 90d of inactivity.
I ran into this issue on my installation and wanted to add a few notes. In my case, I needed to delete the secret. I don't understand why the comments above are suggesting to delete the
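Whichever resource is deleted, the pattern behind both suggestions is the same: remove the object that embeds the expired certificate so the admission server regenerates it on restart. A hedged sketch of that pattern (all names in angle brackets are placeholders, not confirmed resource names; list the actual ones first and this requires a live cluster):

```
# Find the candidate resources first.
kubectl -n openebs get secrets
kubectl get validatingwebhookconfigurations

# Delete whichever object holds the expired cert, then restart the
# admission server so it is re-created with a fresh certificate.
kubectl -n openebs delete secret <secret-with-expired-cert>
kubectl -n openebs rollout restart deployment <admission-server-deployment>
```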
Thanks @dpedu!
Issues go stale after 90d of inactivity. Please comment or re-open the issue if you are still interested in getting this issue fixed.
@niladrih please check if this is still an issue. |
This is still an issue, @avishnu, as far as I know; #2951 (comment) is the resolution at the moment.
Hit this too with chart version cstor-3.4.0, app version 3.4.0.
Description
Trying to create the following PVC:
One cluster is OK:
k apply -f tpv.yaml
-->persistentvolumeclaim/pvtest created
k delete -f tpv.yaml
-->persistentvolumeclaim "pvtest" deleted
Another cluster:
k apply -f tpv.yaml
-->
Both clusters were deployed in the same way, have the same configuration, and both worked just fine for about six months until today (I didn't change anything in their config).
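The tpv.yaml under test isn't shown above; a minimal PVC of the kind described might look like this (the storage class and size are assumptions for illustration, not the reporter's actual values; only the claim name `pvtest` comes from the output above):

```
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvtest
spec:
  storageClassName: openebs-hostpath   # assumed; any class from the list below would do
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
```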
Expected Behavior
The namespace has been stuck terminating forever. After some time, I found that the root cause is the inability to delete that namespace's PVCs; they are now reported as "lost".
Get namespaces:
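When a namespace hangs in terminating because of undeletable PVCs, a common first step is to inspect what is still holding it open and whether a finalizer is stuck on the claims. A sketch of that diagnosis (the namespace name is a placeholder and this needs a live cluster; `kubernetes.io/pvc-protection` is the standard PVC finalizer):

```
# What the namespace controller reports as still remaining
# (prints nothing on older clusters that lack these conditions).
kubectl get namespace <stuck-namespace> -o jsonpath='{.status.conditions}'

# List the namespace's PVCs with their phase and finalizers;
# a lingering kubernetes.io/pvc-protection finalizer is a common culprit.
kubectl get pvc -n <stuck-namespace> \
  -o custom-columns='NAME:.metadata.name,STATUS:.status.phase,FINALIZERS:.metadata.finalizers'
```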
PVCs in that namespace:
kubectl get nodes
NAMESPACE NAME READY STATUS RESTARTS AGE
default local-storage-nfs-nfs-server-provisioner-0 1/1 Running 1 36d
dev-analytics-api-prod dev-analytics-api-7c7f56b8fc-448mh 1/1 Running 0 7h11m
dev-analytics-api-prod dev-analytics-api-7c7f56b8fc-jsksj 1/1 Running 0 7h11m
dev-analytics-api-prod dev-analytics-api-migration-1567762361901059583 0/1 Completed 0 180d
dev-analytics-ui dev-analytics-ui-6c984d9958-8z26v 1/1 Running 0 11d
json2hat json2hat-1581814860-dckpn 0/1 Completed 0 17d
json2hat json2hat-1582419660-jj47j 0/1 Completed 0 10d
json2hat json2hat-1583024460-rb7t8 0/1 Completed 0 3d15h
kibana kibana-7567969554-mvvp6 0/1 Running 0 31m
kibana kibana-7567969554-vnfcq 0/1 Running 0 31m
kube-system aws-node-lrh7d 1/1 Running 0 180d
kube-system aws-node-qnnjk 1/1 Running 0 180d
kube-system aws-node-rkn9z 1/1 Running 0 180d
kube-system aws-node-xp6mf 1/1 Running 1 180d
kube-system coredns-79d667b89f-22p4h 1/1 Running 0 42d
kube-system coredns-79d667b89f-fb8x5 1/1 Running 0 42d
kube-system kube-proxy-dv7bn 1/1 Running 0 180d
kube-system kube-proxy-mfpwx 1/1 Running 1 180d
kube-system kube-proxy-rs7d7 1/1 Running 0 180d
kube-system kube-proxy-sp9jm 1/1 Running 0 180d
mariadb backups-page-dcc76bd4d-kbzh9 1/1 Running 0 36d
mariadb mariadb-backups-1583203500-p4zmb 0/1 Completed 0 37h
mariadb mariadb-backups-1583289900-cvpqr 0/1 Completed 0 13h
mariadb mariadb-master-0 1/1 Running 3 153d
mariadb mariadb-slave-0 1/1 Running 0 153d
openebs maya-apiserver-c54455947-44k2r 1/1 Running 0 42d
openebs openebs-admission-server-b945f5f94-bwlgk 1/1 Running 0 30m
openebs openebs-localpv-provisioner-6c778bbc9c-7ckjp 1/1 Running 0 27m
openebs openebs-ndm-259t6 1/1 Running 1 180d
openebs openebs-ndm-dxhml 1/1 Running 0 180d
openebs openebs-ndm-ht9dw 1/1 Running 0 180d
openebs openebs-ndm-m725n 1/1 Running 0 180d
openebs openebs-ndm-operator-796785b89f-dfgq7 1/1 Running 0 36d
openebs openebs-provisioner-7db48b57f4-wq5nd 1/1 Running 1 42d
openebs openebs-snapshot-operator-5664bf5777-cssqm 2/2 Running 7 180d
sortinghat-api sortinghat-api-5f9d67cd8d-hx5kq 1/1 Running 0 180d
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.100.0.1 <none> 443/TCP 182d
local-storage-nfs-nfs-server-provisioner ClusterIP 10.100.255.228 <none> 2049/TCP,20048/TCP,51413/TCP,51413/UDP 36d
NAME PROVISIONER AGE
gp2 (default) kubernetes.io/aws-ebs 182d
local-storage kubernetes.io/no-provisioner 182d
nfs-openebs-localstorage cluster.local/local-storage-nfs-nfs-server-provisioner 36d
openebs-device openebs.io/local 182d
openebs-hostpath openebs.io/local 182d
openebs-jiva-default openebs.io/provisioner-iscsi 182d
openebs-snapshot-promoter volumesnapshot.external-storage.k8s.io/snapshot-promoter 182d
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
pvc-050a7761-d075-11e9-95db-024cac4c3a40 4Gi RWO Delete Bound mariadb/data-mariadb-master-0 openebs-hostpath 180d
pvc-050e5e66-d075-11e9-95db-024cac4c3a40 4Gi RWO Delete Bound mariadb/data-mariadb-slave-0 openebs-hostpath 180d
pvc-57f3604e-41c8-11ea-a753-022eab783cb2 4Gi RWO Delete Bound default/data-local-storage-nfs-nfs-server-provisioner-0 openebs-hostpath 36d
pvc-63fe9708-41c8-11ea-a753-022eab783cb2 4Gi RWX Delete Bound mariadb/mariadb-backups nfs-openebs-localstorage 36d
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
data-local-storage-nfs-nfs-server-provisioner-0 Bound pvc-57f3604e-41c8-11ea-a753-022eab783cb2 4Gi RWO openebs-hostpath 36d
Environment (/etc/os-release):
NAME="Ubuntu"
VERSION="18.04.2 LTS (Bionic Beaver)"
ID=ubuntu
ID_LIKE=debian
PRETTY_NAME="Ubuntu 18.04.2 LTS"
VERSION_ID="18.04"
HOME_URL="https://www.ubuntu.com/"
SUPPORT_URL="https://help.ubuntu.com/"
BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
VERSION_CODENAME=bionic
UBUNTU_CODENAME=bionic
Linux ip-172-31-34-53 4.15.0-1043-aws #45-Ubuntu SMP Mon Jun 24 14:07:03 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux