This isn't specific to Digital Ocean, so it would be good to verify whether this is the expected behavior.
I'm trying to set up an ElasticSearch cluster on a DigitalOcean-managed Kubernetes cluster, using the Helm chart from Elastic itself.
They say I need to specify a volumeClaimTemplate with a storageClassName in order to use the volumes offered by the managed Kubernetes service. For DO, according to their docs, that is do-block-storage. It also seems I don't have to define a PVC; the Helm chart should do that by itself.
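(For context: the chart copies the volumeClaimTemplate value into the StatefulSet's volumeClaimTemplates, so the StatefulSet controller should generate one claim per replica, roughly like the sketch below. The claim name just follows the chart's elasticsearch-master naming convention and is only illustrative.)

```yaml
# Roughly the claim the StatefulSet controller generates per replica (illustrative only)
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: elasticsearch-master-elasticsearch-master-0
spec:
  accessModes: [ "ReadWriteOnce" ]
  storageClassName: do-block-storage
  resources:
    requests:
      storage: 10Gi
```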
Here is the configuration I'm using:
# Specify node pool
nodeSelector:
  doks.digitalocean.com/node-pool: elasticsearch

# Shrink default JVM heap.
esJavaOpts: "-Xmx128m -Xms128m"

# Allocate smaller chunks of memory per pod.
resources:
  requests:
    cpu: "100m"
    memory: "512M"
  limits:
    cpu: "1000m"
    memory: "512M"

# Specify Digital Ocean storage
# Request smaller persistent volumes.
volumeClaimTemplate:
  accessModes: [ "ReadWriteOnce" ]
  storageClassName: do-block-storage
  resources:
    requests:
      storage: 10Gi

extraInitContainers: |
  - name: create
    image: busybox:1.28
    command: ['mkdir', '/usr/share/elasticsearch/data/nodes/']
    volumeMounts:
    - mountPath: /usr/share/elasticsearch/data
      name: elasticsearch-master
  - name: file-permissions
    image: busybox:1.28
    command: ['chown', '-R', '1000:1000', '/usr/share/elasticsearch/']
    volumeMounts:
    - mountPath: /usr/share/elasticsearch/data
      name: elasticsearch-master
I'm setting up the Helm chart with Terraform, but it shouldn't matter which way you deploy it:
resource "helm_release" "elasticsearch" {
  name      = "elasticsearch"
  chart     = "elastic/elasticsearch"
  namespace = "elasticsearch"
  values = [
    file("charts/elasticsearch.yaml")
  ]
}
Here is what I get when checking the events:
51s Normal Provisioning persistentvolumeclaim/elasticsearch-master-elasticsearch-master-2 External provisioner is provisioning volume for claim "elasticsearch/elasticsearch-master-elasticsearch-master-2"
2m28s Normal ExternalProvisioning persistentvolumeclaim/elasticsearch-master-elasticsearch-master-2 waiting for a volume to be created, either by external provisioner "dobs.csi.digitalocean.com" or manually created by system administrator
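One way to narrow this down is to create a bare claim against the same storage class, outside the chart; if it also stays Pending, the chart is not at fault. A minimal sketch (the name test-claim is made up):

```yaml
# Minimal standalone claim to test DO dynamic provisioning (hypothetical name)
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-claim
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: do-block-storage
  resources:
    requests:
      storage: 1Gi
```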
I'm pretty sure the problem is with the volume. It should be provisioned automatically by Kubernetes. Describing the persistent volume claim gives the following:
holms@debian ~/D/c/s/b/t/s/post-infra> kubectl describe pvc elasticsearch-master-elasticsearch-master-0 --namespace elasticsearch
Name: elasticsearch-master-elasticsearch-master-0
Namespace: elasticsearch
StorageClass: do-block-storage
Status: Pending
Volume:
Labels: app=elasticsearch-master
Annotations: volume.beta.kubernetes.io/storage-provisioner: dobs.csi.digitalocean.com
Finalizers: [kubernetes.io/pvc-protection]
Capacity:
Access Modes:
VolumeMode: Filesystem
Mounted By: elasticsearch-master-0
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Provisioning 4m57s (x176 over 14h) dobs.csi.digitalocean.com_master-setupad-eu_04e43747-fafb-11e9-b7dd-e6fd8fbff586 External provisioner is provisioning volume for claim "elasticsearch/elasticsearch-master-elasticsearch-master-0"
Normal ExternalProvisioning 93s (x441 over 111m) persistentvolume-controller waiting for a volume to be created, either by external provisioner "dobs.csi.digitalocean.com" or manually created by system administrator
I've googled everything I could; everything seems correct, and there should be no issue with the volume on the DO side, but it hangs in Pending state. Is this the expected behavior, or should I ask DO support to check what's happening on their end?
Best Answer
Yes, this is the expected behavior. This chart may be incompatible with the DigitalOcean managed Kubernetes service.
The DigitalOcean documentation has the following information in its Known Issues section:
Support for resizing DigitalOcean Block Storage Volumes in Kubernetes has not yet been implemented.
In the DigitalOcean Control Panel, cluster resources (worker nodes, load balancers, and block storage volumes) are listed outside of the Kubernetes page. If you rename or otherwise modify these resources in the control panel, you may render them unusable to the cluster or cause the reconciler to provision replacement resources. To avoid this, manage your cluster resources exclusively with kubectl or from the control panel’s Kubernetes page.
There are also some specific requirements mentioned in charts/stable/elasticsearch:
Prerequisites Details
- Kubernetes 1.10+
- PV dynamic provisioning support on the underlying infrastructure
You can ask DigitalOcean support for help, or try deploying ElasticSearch without the Helm chart.
It is even mentioned on GitHub:
Automated testing of this chart is currently only run against GKE (Google Kubernetes Engine).
Update:
I had the same issue on my kubeadm HA cluster, but I managed to make it work by manually creating a StorageClass and PersistentVolumes for it. My StorageClass definition:
storageclass.yaml:
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: ssd
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer
parameters:
  type: pd-ssd
$ kubectl apply -f storageclass.yaml
$ kubectl get sc
NAME PROVISIONER AGE
ssd local 50m
My PersistentVolume definition:
pv.yaml:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: task-pv-volume
  labels:
    type: local
spec:
  storageClassName: ssd
  capacity:
    storage: 30Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/mnt/data"
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - <name of the node>
$ kubectl apply -f pv.yaml
After that, I ran the Helm chart:
helm install stable/elasticsearch --name my-release --set data.persistence.storageClass=ssd,data.storage=30Gi --set master.persistence.storageClass=ssd,master.storage=30Gi
The PVC is finally bound:
$ kubectl get pvc -A
NAMESPACE NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
default data-my-release-elasticsearch-data-0 Bound task-pv-volume2 30Gi RWO ssd 17m
default data-my-release-elasticsearch-master-0 Pending 17m
Note that I only manually satisfied a single PVC, and manual volume provisioning for ElasticSearch can be very inefficient.
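With WaitForFirstConsumer and no dynamic provisioner, every claim needs its own matching PV, so the still-Pending master claim would need a second volume along these lines (a sketch; the name task-pv-volume-master and the path are assumptions):

```yaml
# Hypothetical second hostPath PV for the master claim; one PV per claim
# is required when provisioning manually (name and path are assumptions)
apiVersion: v1
kind: PersistentVolume
metadata:
  name: task-pv-volume-master
spec:
  storageClassName: ssd
  capacity:
    storage: 30Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/mnt/data-master"
```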
I would suggest contacting DO support for an automated volume provisioning solution.
Regarding "elasticsearch - Digital Ocean managed Kubernetes volume is stuck in pending state", we found a similar question on Stack Overflow: https://stackoverflow.com/questions/58712145/