cassandra - Setting up a two-instance Cassandra cluster on a single physical node (a laptop running Windows 10)

Tags: cassandra cluster-computing jmx cassandra-3.0

My main goal is to experiment with a Cassandra cluster. I only have a laptop, so as far as I can tell my options are: (1) create several virtual machines with virtualization software (e.g. Hyper-V) and run a Cassandra instance in each VM, (2) create several Cassandra instances with Docker, or (3) simply run several instances directly on the host.

I figured (3) would be the quickest and would give me the most insight, so I tried it (following https://stackoverflow.com/a/25348301/1029599). But I ran into a strange problem when I found that I could not change the JMX port. Details below.

I created two copies of Cassandra 3.11.7 - one folder on the C drive and one on the D drive.

For the C-drive copy I edited cassandra.yaml, replacing 'listen_address: localhost' with 'listen_address: 127.0.0.1' and 'rpc_address: localhost' with 'rpc_address: 127.0.0.1'. I also pointed the seed at the D-drive instance, i.e. '- seeds: "127.0.0.2"'. I did not edit cassandra-env.sh, so JMX_PORT stays at the default 7199.
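For reference, the edited section of cassandra.yaml for the C-drive copy would look roughly like this (a sketch based on the description above; everything else is left at its defaults):

listen_address: 127.0.0.1
rpc_address: 127.0.0.1
seed_provider:
    - class_name: org.apache.cassandra.locator.SimpleSeedProvider
      parameters:
          - seeds: "127.0.0.2"

(In a typical multi-node setup both nodes would list the same seed address; the cross-pointing used here is simply what was tried.)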

For the D-drive copy I edited cassandra.yaml to replace localhost with '127.0.0.2' and set the seed to '- seeds: "127.0.0.1"'. I also edited cassandra-env.sh to set JMX_PORT=7200. Surprisingly, when I start the D-drive Cassandra instance it always picks 7199 as the JMX port instead of 7200.
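For reference, the setting being changed looks roughly like this in each launcher config (a sketch; file contents vary slightly between releases):

# conf/cassandra-env.sh  - read only by the Unix bin/cassandra script
JMX_PORT="7200"

# conf\cassandra-env.ps1 - read by the PowerShell launcher on Windows
$env:JMX_PORT="7200"

Note, as an inference from the log below rather than something stated in the original post: because PowerShell script execution is unavailable, the Windows launcher falls back to "legacy startup options", and in that path bin\cassandra.bat supplies -Dcassandra.jmx.local.port=7199 directly, which would explain why editing cassandra-env.sh has no effect here.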

Log (relevant portion from the start):
D:\apache-cassandra-3.11.7\bin>.\cassandra 
WARNING! Powershell script execution unavailable.    Please use 'powershell Set-ExecutionPolicy Unrestricted'    on this user-account to run cassandra with fully featured    functionality on this platform. Starting with legacy startup options Starting Cassandra Server INFO  [main] 2020-08-17 17:31:21,632 YamlConfigurationLoader.java:89 - Configuration location: file:/D:/apache-cassandra-3.11.7/conf/cassandra.yaml INFO  [main] 2020-08-17 17:31:22,359 Config.java:534 - Node configuration:[allocate_tokens_for_keyspace=null; authenticator=AllowAllAuthenticator; authorizer=AllowAllAuthorizer; auto_bootstrap=true; auto_snapshot=true; back_pressure_enabled=false; back_pressure_strategy=org.apache.cassandra.net.RateBasedBackPressure{high_ratio=0.9, factor=5, flow=FAST}; batch_size_fail_threshold_in_kb=50; batch_size_warn_threshold_in_kb=5; batchlog_replay_throttle_in_kb=1024; broadcast_address=null; broadcast_rpc_address=null; buffer_pool_use_heap_if_exhausted=true; cas_contention_timeout_in_ms=1000; cdc_enabled=false; cdc_free_space_check_interval_ms=250; cdc_raw_directory=null; cdc_total_space_in_mb=0; check_for_duplicate_rows_during_compaction=true; check_for_duplicate_rows_during_reads=true; client_encryption_options=<REDACTED>; cluster_name=Test Cluster; column_index_cache_size_in_kb=2; column_index_size_in_kb=64; commit_failure_policy=stop; commitlog_compression=null; commitlog_directory=null; commitlog_max_compression_buffers_in_pool=3; commitlog_periodic_queue_size=-1; commitlog_segment_size_in_mb=32; commitlog_sync=periodic; commitlog_sync_batch_window_in_ms=NaN; commitlog_sync_period_in_ms=10000; commitlog_total_space_in_mb=null; compaction_large_partition_warning_threshold_mb=100; compaction_throughput_mb_per_sec=16; concurrent_compactors=null; concurrent_counter_writes=32; concurrent_materialized_view_writes=32; concurrent_reads=32; concurrent_replicates=null; concurrent_writes=32; counter_cache_keys_to_save=2147483647; counter_cache_save_period=7200; counter_cache_size_in_mb=null; counter_write_request_timeout_in_ms=5000; credentials_cache_max_entries=1000; credentials_update_interval_in_ms=-1; credentials_validity_in_ms=2000; cross_node_timeout=false; data_file_directories=[Ljava.lang.String;@235834f2; disk_access_mode=auto; disk_failure_policy=stop; disk_optimization_estimate_percentile=0.95; disk_optimization_page_cross_chance=0.1; disk_optimization_strategy=ssd; dynamic_snitch=true; dynamic_snitch_badness_threshold=0.1; dynamic_snitch_reset_interval_in_ms=600000; dynamic_snitch_update_interval_in_ms=100; enable_materialized_views=true; enable_sasi_indexes=true; enable_scripted_user_defined_functions=false; enable_user_defined_functions=false; enable_user_defined_functions_threads=true; encryption_options=null; endpoint_snitch=SimpleSnitch; file_cache_round_up=null; file_cache_size_in_mb=null; gc_log_threshold_in_ms=200; gc_warn_threshold_in_ms=1000; hinted_handoff_disabled_datacenters=[]; hinted_handoff_enabled=true; hinted_handoff_throttle_in_kb=1024; hints_compression=null; hints_directory=null; hints_flush_period_in_ms=10000; incremental_backups=false; index_interval=null; index_summary_capacity_in_mb=null; index_summary_resize_interval_in_minutes=60; initial_token=null; inter_dc_stream_throughput_outbound_megabits_per_sec=200; inter_dc_tcp_nodelay=false; internode_authenticator=null; internode_compression=dc; internode_recv_buff_size_in_bytes=0; internode_send_buff_size_in_bytes=0; key_cache_keys_to_save=2147483647; key_cache_save_period=14400; key_cache_size_in_mb=null; 
listen_address=127.0.0.2; listen_interface=null; listen_interface_prefer_ipv6=false; listen_on_broadcast_address=false; max_hint_window_in_ms=10800000; max_hints_delivery_threads=2; max_hints_file_size_in_mb=128; max_mutation_size_in_kb=null; max_streaming_retries=3; max_value_size_in_mb=256; memtable_allocation_type=heap_buffers; memtable_cleanup_threshold=null; memtable_flush_writers=0; memtable_heap_space_in_mb=null; memtable_offheap_space_in_mb=null; min_free_space_per_drive_in_mb=50; native_transport_flush_in_batches_legacy=true; native_transport_max_concurrent_connections=-1; native_transport_max_concurrent_connections_per_ip=-1; native_transport_max_concurrent_requests_in_bytes=-1; native_transport_max_concurrent_requests_in_bytes_per_ip=-1; native_transport_max_frame_size_in_mb=256; native_transport_max_negotiable_protocol_version=-2147483648; native_transport_max_threads=128; native_transport_port=9042; native_transport_port_ssl=null; num_tokens=256; otc_backlog_expiration_interval_ms=200; otc_coalescing_enough_coalesced_messages=8; otc_coalescing_strategy=DISABLED; otc_coalescing_window_us=200; partitioner=org.apache.cassandra.dht.Murmur3Partitioner; permissions_cache_max_entries=1000; permissions_update_interval_in_ms=-1; permissions_validity_in_ms=2000; phi_convict_threshold=8.0; prepared_statements_cache_size_mb=null; range_request_timeout_in_ms=10000; read_request_timeout_in_ms=5000; repair_session_max_tree_depth=18; request_scheduler=org.apache.cassandra.scheduler.NoScheduler; request_scheduler_id=null; request_scheduler_options=null; request_timeout_in_ms=10000; role_manager=CassandraRoleManager; roles_cache_max_entries=1000; roles_update_interval_in_ms=-1; roles_validity_in_ms=2000; row_cache_class_name=org.apache.cassandra.cache.OHCProvider; row_cache_keys_to_save=2147483647; row_cache_save_period=0; row_cache_size_in_mb=0; rpc_address=127.0.0.2; rpc_interface=null; rpc_interface_prefer_ipv6=false; rpc_keepalive=true; rpc_listen_backlog=50; rpc_max_threads=2147483647; rpc_min_threads=16; rpc_port=9160; rpc_recv_buff_size_in_bytes=null; rpc_send_buff_size_in_bytes=null; rpc_server_type=sync; saved_caches_directory=null; seed_provider=org.apache.cassandra.locator.SimpleSeedProvider{seeds=127.0.0.1}; server_encryption_options=<REDACTED>; slow_query_log_timeout_in_ms=500; snapshot_before_compaction=false; snapshot_on_duplicate_row_detection=false; ssl_storage_port=7001; sstable_preemptive_open_interval_in_mb=50; start_native_transport=true; start_rpc=false; storage_port=7000; stream_throughput_outbound_megabits_per_sec=200; streaming_keep_alive_period_in_secs=300; streaming_socket_timeout_in_ms=86400000; thrift_framed_transport_size_in_mb=15; thrift_max_message_length_in_mb=16; thrift_prepared_statements_cache_size_mb=null; tombstone_failure_threshold=100000; tombstone_warn_threshold=1000; tracetype_query_ttl=86400; tracetype_repair_ttl=604800; transparent_data_encryption_options=org.apache.cassandra.config.TransparentDataEncryptionOptions@5656be13; trickle_fsync=false; trickle_fsync_interval_in_kb=10240; truncate_request_timeout_in_ms=60000; unlogged_batch_across_partitions_warn_threshold=10; user_defined_function_fail_timeout=1500; user_defined_function_warn_timeout=500; user_function_timeout_policy=die; windows_timer_interval=1; write_request_timeout_in_ms=2000] INFO  [main] 2020-08-17 17:31:22,361 DatabaseDescriptor.java:381 - DiskAccessMode 'auto' determined to be mmap, indexAccessMode is mmap INFO  [main] 2020-08-17 17:31:22,366 DatabaseDescriptor.java:439 - Global 
memtable on-heap threshold is enabled at 503MB INFO  [main] 2020-08-17 17:31:22,367 DatabaseDescriptor.java:443 - Global memtable off-heap threshold is enabled at 503MB INFO  [main] 2020-08-17 17:31:22,538 RateBasedBackPressure.java:123 - Initialized back-pressure with high ratio: 0.9, factor: 5, flow: FAST, window size: 2000. INFO  [main] 2020-08-17 17:31:22,538 DatabaseDescriptor.java:773 - Back-pressure is disabled with strategy org.apache.cassandra.net.RateBasedBackPressure{high_ratio=0.9, factor=5, flow=FAST}. INFO  [main] 2020-08-17 17:31:22,686 JMXServerUtils.java:252 - Configured JMX server at: service:jmx:rmi://127.0.0.1/jndi/rmi://127.0.0.1:7199/jmxrmi INFO  [main] 2020-08-17 17:31:22,700 CassandraDaemon.java:490 - Hostname: DESKTOP-NQ7673H INFO  [main] 2020-08-17 17:31:22,703 CassandraDaemon.java:497 - JVM vendor/version: Java HotSpot(TM) 64-Bit Server VM/1.8.0_261 INFO  [main] 2020-08-17 17:31:22,709 CassandraDaemon.java:498 - Heap size: 1.968GiB/1.968GiB INFO  [main] 2020-08-17 17:31:22,712 CassandraDaemon.java:503 - Code Cache Non-heap memory: init = 2555904(2496K) used = 5181696(5060K) committed = 5242880(5120K) max = 251658240(245760K) INFO  [main] 2020-08-17 17:31:22,714 CassandraDaemon.java:503 - Metaspace Non-heap memory: init = 0(0K) used = 19412472(18957K) committed = 20054016(19584K) max
    = -1(-1K) INFO  [main] 2020-08-17 17:31:22,738 CassandraDaemon.java:503 - Compressed Class Space Non-heap memory: init = 0(0K) used = 2373616(2317K) committed = 2621440(2560K) max = 1073741824(1048576K) INFO  [main] 2020-08-17 17:31:22,739 CassandraDaemon.java:503 - Par Eden Space Heap memory: init = 279183360(272640K) used = 111694624(109076K) committed = 279183360(272640K) max = 279183360(272640K) INFO  [main] 2020-08-17 17:31:22,740 CassandraDaemon.java:503 - Par Survivor Space Heap memory: init = 34865152(34048K) used = 0(0K) committed = 34865152(34048K) max = 34865152(34048K) INFO  [main] 2020-08-17 17:31:22,743 CassandraDaemon.java:503 - CMS Old Gen Heap memory: init
    = 1798569984(1756416K) used = 0(0K) committed = 1798569984(1756416K) max = 1798569984(1756416K) INFO  [main] 2020-08-17 17:31:22,744 CassandraDaemon.java:505 - Classpath: D:\apache-cassandra-3.11.7\conf;D:\apache-cassandra-3.11.7\lib\airline-0.6.jar;D:\apache-cassandra-3.11.7\lib\antlr-runtime-3.5.2.jar;D:\apache-cassandra-3.11.7\lib\apache-cassandra-3.11.7.jar;D:\apache-cassandra-3.11.7\lib\apache-cassandra-thrift-3.11.7.jar;D:\apache-cassandra-3.11.7\lib\asm-5.0.4.jar;D:\apache-cassandra-3.11.7\lib\caffeine-2.2.6.jar;D:\apache-cassandra-3.11.7\lib\cassandra-driver-core-3.0.1-shaded.jar;D:\apache-cassandra-3.11.7\lib\commons-cli-1.1.jar;D:\apache-cassandra-3.11.7\lib\commons-codec-1.9.jar;D:\apache-cassandra-3.11.7\lib\commons-lang3-3.1.jar;D:\apache-cassandra-3.11.7\lib\commons-math3-3.2.jar;D:\apache-cassandra-3.11.7\lib\compress-lzf-0.8.4.jar;D:\apache-cassandra-3.11.7\lib\concurrent-trees-2.4.0.jar;D:\apache-cassandra-3.11.7\lib\concurrentlinkedhashmap-lru-1.4.jar;D:\apache-cassandra-3.11.7\lib\disruptor-3.0.1.jar;D:\apache-cassandra-3.11.7\lib\ecj-4.4.2.jar;D:\apache-cassandra-3.11.7\lib\guava-18.0.jar;D:\apache-cassandra-3.11.7\lib\HdrHistogram-2.1.9.jar;D:\apache-cassandra-3.11.7\lib\high-scale-lib-1.0.6.jar;D:\apache-cassandra-3.11.7\lib\hppc-0.5.4.jar;D:\apache-cassandra-3.11.7\lib\jackson-annotations-2.9.10.jar;D:\apache-cassandra-3.11.7\lib\jackson-core-2.9.10.jar;D:\apache-cassandra-3.11.7\lib\jackson-databind-2.9.10.4.jar;D:\apache-cassandra-3.11.7\lib\jamm-0.3.0.jar;D:\apache-cassandra-3.11.7\lib\javax.inject.jar;D:\apache-cassandra-3.11.7\lib\jbcrypt-0.3m.jar;D:\apache-cassandra-3.11.7\lib\jcl-over-slf4j-1.7.7.jar;D:\apache-cassandra-3.11.7\lib\jctools-core-1.2.1.jar;D:\apache-cassandra-3.11.7\lib\jflex-1.6.0.jar;D:\apache-cassandra-3.11.7\lib\jna-4.2.2.jar;D:\apache-cassandra-3.11.7\lib\joda-time-2.4.jar;D:\apache-cassandra-3.11.7\lib\json-simple-1.1.jar;D:\apache-cassandra-3.11.7\lib\jstackjunit-0.0.1.jar;D:\apache-cassandra-3.11.7\lib\libthrift-0.9.2.jar;D:\apache-cassandra-3.11.7\lib\log4j-over-slf4j-1.7.7.jar;D:\apache-cassandra-3.11.7\lib\logback-classic-1.1.3.jar;D:\apache-cassandra-3.11.7\lib\logback-core-1.1.3.jar;D:\apache-cassandra-3.11.7\lib\lz4-1.3.0.jar;D:\apache-cassandra-3.11.7\lib\metrics-core-3.1.5.jar;D:\apache-cassandra-3.11.7\lib\metrics-jvm-3.1.5.jar;D:\apache-cassandra-3.11.7\lib\metrics-logback-3.1.5.jar;D:\apache-cassandra-3.11.7\lib\netty-all-4.0.44.Final.jar;D:\apache-cassandra-3.11.7\lib\ohc-core-0.4.4.jar;D:\apache-cassandra-3.11.7\lib\ohc-core-j8-0.4.4.jar;D:\apache-cassandra-3.11.7\lib\reporter-config-base-3.0.3.jar;D:\apache-cassandra-3.11.7\lib\reporter-config3-3.0.3.jar;D:\apache-cassandra-3.11.7\lib\sigar-1.6.4.jar;D:\apache-cassandra-3.11.7\lib\slf4j-api-1.7.7.jar;D:\apache-cassandra-3.11.7\lib\snakeyaml-1.11.jar;D:\apache-cassandra-3.11.7\lib\snappy-java-1.1.1.7.jar;D:\apache-cassandra-3.11.7\lib\snowball-stemmer-1.3.0.581.1.jar;D:\apache-cassandra-3.11.7\lib\ST4-4.0.8.jar;D:\apache-cassandra-3.11.7\lib\stream-2.5.2.jar;D:\apache-cassandra-3.11.7\lib\thrift-server-0.3.7.jar;D:\apache-cassandra-3.11.7\build\classes\main;D:\apache-cassandra-3.11.7\build\classes\thrift;D:\apache-cassandra-3.11.7\lib\jamm-0.3.0.jar INFO  [main] 2020-08-17 17:31:22,747 CassandraDaemon.java:507 - JVM Arguments: [-ea,
    -javaagent:D:\apache-cassandra-3.11.7\lib\jamm-0.3.0.jar, -Xms2G, -Xmx2G, -XX:+HeapDumpOnOutOfMemoryError, -XX:+UseParNewGC, -XX:+UseConcMarkSweepGC, -XX:+CMSParallelRemarkEnabled, -XX:SurvivorRatio=8, -XX:MaxTenuringThreshold=1, -XX:CMSInitiatingOccupancyFraction=75, -XX:+UseCMSInitiatingOccupancyOnly, -Dlogback.configurationFile=logback.xml, -Djava.library.path=D:\apache-cassandra-3.11.7\lib\sigar-bin, -Dcassandra.jmx.local.port=7199, -Dcassandra, -Dcassandra-foreground=yes, -Dcassandra.logdir=D:\apache-cassandra-3.11.7\logs, -Dcassandra.storagedir=D:\apache-cassandra-3.11.7\data] WARN  [main] 2020-08-17 17:31:22,763 StartupChecks.java:169 - JMX is not enabled to receive remote connections. Please see cassandra-env.sh for more info. WARN  [main] 2020-08-17 17:31:22,768 StartupChecks.java:220 - The JVM is not configured to stop on OutOfMemoryError which can cause data corruption. Use one of the following JVM options to configure the behavior on OutOfMemoryError:  -XX:+ExitOnOutOfMemoryError,
    -XX:+CrashOnOutOfMemoryError, or -XX:OnOutOfMemoryError="<cmd args>;<cmd args>" INFO  [main] 2020-08-17 17:31:22,774 SigarLibrary.java:44 - Initializing SIGAR library

Could you help me fix the port issue that is preventing me from running two instances? I would also appreciate suggestions for other ways to do this, e.g. options 1 or 2 mentioned at the start of the post, or a free-tier account on GCP or AWS.

Best Answer

Option 2 - using Docker - is very simple. It is described in detail here: cassandra - Docker Hub

First you have to increase the resources allocated to Docker, because the default settings may be too small to run more than 1-2 nodes - on my laptop I gave Docker 8 GB of RAM and 3 CPUs and was able to run 3 Cassandra nodes at the same time. (Screenshot of the Docker resource settings omitted.)
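One quick way to confirm how much CPU and memory the Docker engine actually has available (an optional check, not part of the original answer) is:

docker info --format "CPUs: {{.NCPU}}, Memory: {{.MemTotal}} bytes"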



The first step is to create a bridge network that the Cassandra cluster will use:

docker network create mynet
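Optionally, confirm that the network exists (and, later, see which containers are attached to it):

docker network inspect mynet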

Then start the first node:

docker run --name cas1 --network mynet -d cassandra
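Instead of waiting a fixed amount of time, you can tail the container logs and wait until the node reports that it is listening for CQL clients (the exact wording may differ slightly between versions):

docker logs -f cas1

Look for a line similar to "Starting listening for CQL clients on /0.0.0.0:9042".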

Wait about a minute for that node to come up (or watch the logs as shown above), then start the second node:

docker run --name cas2 --network mynet -d  -e CASSANDRA_SEEDS=cas1 cassandra

Do the same for a third node:

docker run --name cas3 --network mynet -d  -e CASSANDRA_SEEDS=cas1 cassandra
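Once all three containers are running, you can check the cluster state from any node; after a short while each node should be reported as UN (Up/Normal):

docker exec -it cas1 nodetool status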

Finally, start one more container in interactive mode to run cqlsh and connect it to your cluster (here connecting to the first node, cas1):

docker run -it --network mynet --rm cassandra cqlsh cas1
Connected to Test Cluster at cas1:9042.
[cqlsh 5.0.1 | Cassandra 3.11.7 | CQL spec 3.4.4 | Native protocol v4]
Use HELP for help.
cqlsh>
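From the cqlsh prompt, a minimal smoke test such as the following (the keyspace and table names are just illustrative) writes and reads data replicated across the three nodes:

CREATE KEYSPACE demo WITH replication = {'class': 'SimpleStrategy', 'replication_factor': 3};
USE demo;
CREATE TABLE kv (k text PRIMARY KEY, v text);
INSERT INTO kv (k, v) VALUES ('hello', 'world');
SELECT * FROM kv;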

The original question, "cassandra - Setting up a two-instance Cassandra cluster on a single physical node (a laptop running Windows 10)", can be found on Stack Overflow: https://stackoverflow.com/questions/63452402/
