We are running a cluster of 6 Tomcat servers with Hazelcast 1.9.4.4. We restarted the cluster, and here is a snippet from the logs:
14-Jul-2012 03:25:41 com.hazelcast.nio.InSelector
INFO: /10.152.41.105:5701 [cem-prod] 5701 accepted socket connection from /10.153.26.16:54604
14-Jul-2012 03:25:47 com.hazelcast.cluster.ClusterManager
INFO: /10.152.41.105:5701 [cem-prod]
Members [6] {
Member [10.152.41.101:5701]
Member [10.164.101.143:5701]
Member [10.152.41.103:5701]
Member [10.152.41.105:5701] this
Member [10.153.26.15:5701]
Member [10.153.26.16:5701]
}
We can see that 10.153.26.16 has joined the cluster, but later in the log there is:
14-Jul-2012 03:28:50 com.hazelcast.impl.ConcurrentMapManager
INFO: /10.152.41.105:5701 [cem-prod] ======= 47: CONCURRENT_MAP_LOCK ========
thisAddress= Address[10.152.41.105:5701], target= Address[10.153.26.16:5701]
targetMember= Member [10.153.26.16:5701], targetConn=Connection [/10.153.26.16:54604 -> Address[10.153.26.16:5701]] live=true, client=false, type=MEMBER, targetBlock=Block [2] owner=Address[10.153.26.16:5701] migrationAddress=null
cemClientNotificationsLock Re-doing [20] times! c:__hz_Locks : null
14-Jul-2012 03:28:55 com.hazelcast.impl.ConcurrentMapManager
INFO: /10.152.41.105:5701 [cem-prod] ======= 57: CONCURRENT_MAP_LOCK ========
thisAddress= Address[10.152.41.105:5701], target= Address[10.153.26.16:5701]
targetMember= Member [10.153.26.16:5701], targetConn=Connection [/10.153.26.16:54604 -> Address[10.153.26.16:5701]] live=true, client=false, type=MEMBER, targetBlock=Block [2] owner=Address[10.153.26.16:5701] migrationAddress=null
cemClientNotificationsLock Re-doing [30] times! c:__hz_Locks : null
After restarting the servers several times (all together, stopping all and starting them one by one, etc.), we were able to get the system running. Can you explain why Hazelcast fails to lock the map on a node that is in the cluster, or, if that node is not actually in the cluster, why it is shown as a member? Also, what is the recommended way to restart a Tomcat cluster that uses distributed Hazelcast structures (stop all nodes and start them together, stop and start them one by one, shut down Hazelcast somehow before the server restarts, etc.)? Thanks!
Best Answer
Could you explain why Hazelcast fails to lock the map on a node if it is in the cluster
The map entry may simply be locked by another node at that time.
There have also been many fixes and changes since 1.9.4.4; it is quite an old version. You should try 2.1+.
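One way to avoid the endless "Re-doing [N] times!" retries in the log above is to bound lock acquisition with a timeout instead of blocking indefinitely: Hazelcast's `IMap` exposes a `tryLock` variant that takes a timeout (check the Javadoc of the version you deploy). The sketch below shows the general pattern using `java.util.concurrent.locks.ReentrantLock` as a local stand-in for the distributed lock; the helper name `withLock` and the timeout values are illustrative, not part of any Hazelcast API.

```java
import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.ReentrantLock;

public class BoundedLockDemo {

    // Hypothetical helper: try to take the lock within timeoutMs; if the lock
    // is held elsewhere, give up and report failure instead of retrying forever.
    static boolean withLock(ReentrantLock lock, long timeoutMs, Runnable critical) {
        boolean acquired = false;
        try {
            acquired = lock.tryLock(timeoutMs, TimeUnit.MILLISECONDS);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
            return false;
        }
        if (!acquired) {
            return false; // caller can back off, alert, or skip the update
        }
        try {
            critical.run();
            return true;
        } finally {
            lock.unlock(); // always release, even if the critical section throws
        }
    }

    public static void main(String[] args) {
        ReentrantLock lock = new ReentrantLock();
        boolean ok = withLock(lock, 500, () -> System.out.println("acquired"));
        System.out.println("lock taken: " + ok);
    }
}
```

With a distributed map the same shape applies: a failed timed acquisition is a signal to surface the contention (log it, retry with backoff) rather than letting the cluster spin on lock redo messages during a rolling restart.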
Regarding "java - Problems with Hazelcast/CONCURRENT_MAP_LOCK after server restart", we found a similar question on Stack Overflow: https://stackoverflow.com/questions/11578755/