Redis HA Helm chart error - NOREPLICAS Not enough good replicas to write

Tags: redis, kubernetes, kubernetes-helm

I am trying to set up the redis-ha Helm chart on a local Kubernetes cluster (Docker for Windows).

The Helm values file I am using is:

## Configure resource requests and limits
## ref: http://kubernetes.io/docs/user-guide/compute-resources/
##
image:
  repository: redis
  tag: 5.0.3-alpine
  pullPolicy: IfNotPresent
## replicas number for each component
replicas: 3

## Custom labels for the redis pod
labels: {}

## Pods Service Account
## ref: https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/
serviceAccount:
  ## Specifies whether a ServiceAccount should be created
  ##
  create: false
  ## The name of the ServiceAccount to use.
  ## If not set and create is true, a name is generated using the redis-ha.fullname template
  # name:

## Role Based Access
## Ref: https://kubernetes.io/docs/admin/authorization/rbac/
##

rbac:
  create: false

## Redis specific configuration options
redis:
  port: 6379
  masterGroupName: mymaster
  config:
    ## Additional redis conf options can be added below
    ## For all available options see http://download.redis.io/redis-stable/redis.conf
    min-slaves-to-write: 1
    min-slaves-max-lag: 5   # Value in seconds
    maxmemory: "0"       # Max memory to use for each redis instance. Default is unlimited.
    maxmemory-policy: "volatile-lru"  # Max memory policy to use for each redis instance. Default is volatile-lru.
    # Determines if scheduled RDB backups are created. Default is false.
    # Please note that local (on-disk) RDBs will still be created when re-syncing with a new slave. The only way to prevent this is to enable diskless replication.
    save: "900 1"
    # When enabled, directly sends the RDB over the wire to slaves, without using the disk as intermediate storage. Default is false.
    repl-diskless-sync: "yes"
    rdbcompression: "yes"
    rdbchecksum: "yes"

  ## Custom redis.conf files used to override default settings. If this file is
  ## specified then the redis.config above will be ignored.
  # customConfig: |-
      # Define configuration here

  resources: 
    requests:
      memory: 200Mi
      cpu: 100m
    limits:
      memory: 700Mi
      cpu: 250m

## Sentinel specific configuration options
sentinel:
  port: 26379
  quorum: 2
  config:
    ## Additional sentinel conf options can be added below. Only options that
    ## are expressed in a format similar to 'sentinel xxx mymaster xxx' will
    ## be properly templated.
    ## For available options see http://download.redis.io/redis-stable/sentinel.conf
    down-after-milliseconds: 10000
    ## Failover timeout value in milliseconds
    failover-timeout: 180000
    parallel-syncs: 5

  ## Custom sentinel.conf files used to override default settings. If this file is
  ## specified then the sentinel.config above will be ignored.
  # customConfig: |-
      # Define configuration here

  resources: 
    requests:
      memory: 200Mi
      cpu: 100m
    limits:
      memory: 200Mi
      cpu: 250m

securityContext:
  runAsUser: 1000
  fsGroup: 1000
  runAsNonRoot: true

## Node labels, affinity, and tolerations for pod assignment
## ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#nodeselector
## ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#taints-and-tolerations-beta-feature
## ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#affinity-and-anti-affinity
affinity: {}

# Prometheus exporter specific configuration options
exporter:
  enabled: false
  image: oliver006/redis_exporter
  tag: v0.31.0
  pullPolicy: IfNotPresent

  # prometheus port & scrape path
  port: 9121
  scrapePath: /metrics

  # cpu/memory resource limits/requests
  resources: {}

  # Additional args for redis exporter
  extraArgs: {}

podDisruptionBudget: {}
  # maxUnavailable: 1
  # minAvailable: 1

## Configures redis with AUTH (requirepass & masterauth conf params)
auth: false
# redisPassword:

## Use existing secret containing "auth" key (ignores redisPassword)
# existingSecret:

persistentVolume:
  enabled: true
  ## redis-ha data Persistent Volume Storage Class
  ## If defined, storageClassName: <storageClass>
  ## If set to "-", storageClassName: "", which disables dynamic provisioning
  ## If undefined (the default) or set to null, no storageClassName spec is
  ##   set, choosing the default provisioner.  (gp2 on AWS, standard on
  ##   GKE, AWS & OpenStack)
  ##
  # storageClass: "-"
  accessModes:
    - ReadWriteOnce
  size: 1Gi
  annotations: {}
init:
  resources: {}

# To use a hostPath for data, set persistentVolume.enabled to false
# and define hostPath.path.
# Warning: this might overwrite existing folders on the host system!
hostPath:
  ## path is evaluated as template so placeholders are replaced
  # path: "/data/{{ .Release.Name }}"

  # if chown is true, an init-container with root permissions is launched to
  # change the owner of the hostPath folder to the user defined in the
  # security context
  chown: true

The redis-ha chart deploys correctly; when I run kubectl get all, I see:

NAME                       READY     STATUS    RESTARTS   AGE
pod/rc-redis-ha-server-0   2/2       Running   0          1h
pod/rc-redis-ha-server-1   2/2       Running   0          1h
pod/rc-redis-ha-server-2   2/2       Running   0          1h

NAME                             TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)              AGE
service/kubernetes               ClusterIP   10.96.0.1        <none>        443/TCP              23d
service/rc-redis-ha              ClusterIP   None             <none>        6379/TCP,26379/TCP   1h
service/rc-redis-ha-announce-0   ClusterIP   10.105.187.154   <none>        6379/TCP,26379/TCP   1h
service/rc-redis-ha-announce-1   ClusterIP   10.107.36.58     <none>        6379/TCP,26379/TCP   1h
service/rc-redis-ha-announce-2   ClusterIP   10.98.38.214     <none>        6379/TCP,26379/TCP   1h

NAME                                  DESIRED   CURRENT   AGE
statefulset.apps/rc-redis-ha-server   3         3         1h

I tried to access redis-ha from a Java application that connects to Redis using the Lettuce driver. Sample Java code for accessing Redis:

package io.c12.bala.lettuce;

import io.lettuce.core.RedisClient;
import io.lettuce.core.api.StatefulRedisConnection;
import io.lettuce.core.api.sync.RedisCommands;

import java.util.logging.Logger;


public class RedisClusterConnect {

    private static final Logger logger = Logger.getLogger(RedisClusterConnect.class.getName());
    public static void main(String[] args) {
        logger.info("Starting test");

        // Syntax: redis-sentinel://[password@]host[:port][,host2[:port2]][/databaseNumber]#sentinelMasterId
        RedisClient redisClient = RedisClient.create("redis-sentinel://rc-redis-ha:26379/0#mymaster");
        StatefulRedisConnection<String, String> connection = redisClient.connect();


        RedisCommands<String, String> command = connection.sync();
        command.set("Hello", "World");
        logger.info("Ran set command successfully");
        logger.info("Value from Redis - " + command.get("Hello"));

        connection.close();
        redisClient.shutdown();
    }
}
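Before debugging from inside the cluster, it can help to confirm that Sentinel is resolving the master at all. A minimal sketch, assuming the release name rc and master group mymaster from the output above, and that kubectl and redis-cli are installed on the workstation:

```shell
# Forward the Sentinel port of the headless service to the workstation
kubectl port-forward svc/rc-redis-ha 26379:26379 &

# Ask Sentinel which node it currently considers the master of "mymaster"
redis-cli -p 26379 sentinel get-master-addr-by-name mymaster

# Dump Sentinel's full view of the master group (replica count, flags, ...)
redis-cli -p 26379 sentinel master mymaster
```

If the first command returns an address, the Lettuce URI redis-sentinel://rc-redis-ha:26379/0#mymaster should resolve the same master from inside the cluster, so a NOREPLICAS error points at the Redis configuration rather than at Sentinel discovery.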

I packaged the application as a runnable jar, built a container image, and pushed it to the same Kubernetes cluster where Redis is running. The application now throws this error:

Exception in thread "main" io.lettuce.core.RedisCommandExecutionException: NOREPLICAS Not enough good replicas to write.
        at io.lettuce.core.ExceptionFactory.createExecutionException(ExceptionFactory.java:135)
        at io.lettuce.core.LettuceFutures.awaitOrCancel(LettuceFutures.java:122)
        at io.lettuce.core.FutureSyncInvocationHandler.handleInvocation(FutureSyncInvocationHandler.java:69)
        at io.lettuce.core.internal.AbstractInvocationHandler.invoke(AbstractInvocationHandler.java:80)
        at com.sun.proxy.$Proxy0.set(Unknown Source)
        at io.c12.bala.lettuce.RedisClusterConnect.main(RedisClusterConnect.java:22)
Caused by: io.lettuce.core.RedisCommandExecutionException: NOREPLICAS Not enough good replicas to write.
        at io.lettuce.core.ExceptionFactory.createExecutionException(ExceptionFactory.java:135)
        at io.lettuce.core.ExceptionFactory.createExecutionException(ExceptionFactory.java:108)
        at io.lettuce.core.protocol.AsyncCommand.completeResult(AsyncCommand.java:120)
        at io.lettuce.core.protocol.AsyncCommand.complete(AsyncCommand.java:111)
        at io.lettuce.core.protocol.CommandHandler.complete(CommandHandler.java:646)
        at io.lettuce.core.protocol.CommandHandler.decode(CommandHandler.java:604)
        at io.lettuce.core.protocol.CommandHandler.channelRead(CommandHandler.java:556)

I also tried the Jedis driver with a Spring Boot application and got the same error from the redis-ha cluster.

**Update** When I run the info command in redis-cli, I get:

connected_slaves:2
min_slaves_good_slaves:0

It looks like the slaves are not behaving correctly. When I switch to min-slaves-to-write: 0, I am able to read from and write to the Redis cluster.
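The two info fields above are the key: connected_slaves:2 means the master sees both replicas, but min_slaves_good_slaves:0 means neither of them currently qualifies as "good". A sketch of what the chart's redis.config keys mean in redis.conf terms, plus how to inspect the live values (the release name rc and container name redis are assumptions based on the pod listing above):

```shell
# What the values above translate to in redis.conf:
#
#   min-slaves-to-write 1   -> reject writes unless at least 1 replica is "good"
#   min-slaves-max-lag 5    -> a replica counts as "good" only if its last
#                              replication ACK arrived within the last 5 seconds
#
# Inspect the live values on a server pod:
kubectl exec -it rc-redis-ha-server-0 -c redis -- \
  redis-cli config get min-slaves-to-write
kubectl exec -it rc-redis-ha-server-0 -c redis -- \
  redis-cli config get min-slaves-max-lag
```

So with min-slaves-to-write set to 1 while min_slaves_good_slaves reports 0, the master rejects every write with NOREPLICAS, which matches the exception exactly.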

Any help is appreciated.

Best Answer

It seems you have to edit the redis-ha-configmap ConfigMap and set min-slaves-to-write to 0.

After deleting all the redis pods (so the change is applied), it works like a charm.

So:

helm install stable/redis-ha
kubectl edit cm redis-ha-configmap # change min-slaves-to-write from 1 to 0
kubectl delete pod redis-ha-0
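Editing the ConfigMap by hand works, but the change is lost on the next helm upgrade. A sketch of an alternative that keeps the override in Helm itself (release name rc assumed; with three replicas, each server pod is deleted in turn so every instance reloads the new config):

```shell
# Persist the override through Helm instead of editing the ConfigMap
helm upgrade rc stable/redis-ha --reuse-values \
  --set redis.config.min-slaves-to-write=0

# Restart the server pods one at a time so each picks up the new config
kubectl delete pod rc-redis-ha-server-0
kubectl delete pod rc-redis-ha-server-1
kubectl delete pod rc-redis-ha-server-2
```

Note that min-slaves-to-write: 0 trades away the write-safety guarantee. On a single-node Docker for Windows cluster that is usually acceptable; on a real multi-node cluster it would be worth investigating why the replicas report as lagging instead.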

Regarding this Redis HA Helm chart error - NOREPLICAS Not enough good replicas to write - a similar question can be found on Stack Overflow: https://stackoverflow.com/questions/55365775/
