java - How to change the log level of KafkaStream

Tags: java apache-kafka log4j apache-kafka-streams

I am new to Kafka Streams. I am developing a stream application, but when I start it, a huge number of log messages are written.

For example, how can I change the log level from Debug to Info?

Thank you.

16:54:12.720 [kafka-coordinator-heartbeat-thread | 97527H] DEBUG org.apache.kafka.clients.consumer.internals.AbstractCoordinator - [Consumer clientId=97527H-54b434f8-fd4c-439e-9814-0c8133556a1f-StreamThread-1-consumer, groupId=97527H] Sending Heartbeat request to coordinator kafka-1.broker-ttech-apd.ttech-broker.cloud.maif.local:14983 (id: 2147483646 rack: null)
16:54:12.721 [kafka-coordinator-heartbeat-thread | 97527H] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=97527H-54b434f8-fd4c-439e-9814-0c8133556a1f-StreamThread-1-consumer, groupId=97527H] Using older server API v2 to send HEARTBEAT {group_id=97527H,generation_id=2,member_id=97527H-54b434f8-fd4c-439e-9814-0c8133556a1f-StreamThread-1-consumer-9fefdaea-868a-4b22-a1fd-ad3564c3b019} with correlation id 36 to node 2147483646
16:54:12.725 [97527H-54b434f8-fd4c-439e-9814-0c8133556a1f-StreamThread-1] DEBUG org.apache.kafka.clients.consumer.internals.AbstractCoordinator - [Consumer clientId=97527H-54b434f8-fd4c-439e-9814-0c8133556a1f-StreamThread-1-consumer, groupId=97527H] Received successful Heartbeat response
16:54:13.020 [kafka-coordinator-heartbeat-thread | 97527H] DEBUG org.apache.kafka.clients.consumer.internals.AbstractCoordinator - [Consumer clientId=97527H-54b434f8-fd4c-439e-9814-0c8133556a1f-StreamThread-1-consumer, groupId=97527H] Sending Heartbeat request to coordinator kafka-1.broker-ttech-apd.ttech-broker.cloud.maif.local:14983 (id: 2147483646 rack: null)
16:54:13.021 [kafka-coordinator-heartbeat-thread | 97527H] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=97527H-54b434f8-fd4c-439e-9814-0c8133556a1f-StreamThread-1-consumer, groupId=97527H] Using older server API v2 to send HEARTBEAT {group_id=97527H,generation_id=2,member_id=97527H-54b434f8-fd4c-439e-9814-0c8133556a1f-StreamThread-1-consumer-9fefdaea-868a-4b22-a1fd-ad3564c3b019} with correlation id 37 to node 2147483646
16:54:13.023 [97527H-54b434f8-fd4c-439e-9814-0c8133556a1f-StreamThread-1] DEBUG org.apache.kafka.clients.consumer.internals.AbstractCoordinator - [Consumer clientId=97527H-54b434f8-fd4c-439e-9814-0c8133556a1f-StreamThread-1-consumer, groupId=97527H] Received successful Heartbeat response
16:54:13.049 [97527H-54b434f8-fd4c-439e-9814-0c8133556a1f-StreamThread-1] DEBUG org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=97527H-54b434f8-fd4c-439e-9814-0c8133556a1f-StreamThread-1-consumer, groupId=97527H] Node 2 sent a full fetch response that created a new incremental fetch session 1486821637 with 1 response partition(s)
16:54:13.049 [97527H-54b434f8-fd4c-439e-9814-0c8133556a1f-StreamThread-1] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=97527H-54b434f8-fd4c-439e-9814-0c8133556a1f-StreamThread-1-consumer, groupId=97527H] Fetch READ_UNCOMMITTED at offset 0 for partition TOPIC-DEV-ACH-0 returned fetch data (error=NONE, highWaterMark=0, lastStableOffset = -1, logStartOffset = 0, preferredReadReplica = absent, abortedTransactions = null, recordsSizeInBytes=0)
16:54:13.050 [97527H-54b434f8-fd4c-439e-9814-0c8133556a1f-StreamThread-1] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name TOPIC-DEV-ACH-0.records-lag
16:54:13.050 [97527H-54b434f8-fd4c-439e-9814-0c8133556a1f-StreamThread-1] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name TOPIC-DEV-ACH-0.records-lead
16:54:13.050 [97527H-54b434f8-fd4c-439e-9814-0c8133556a1f-StreamThread-1] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=97527H-54b434f8-fd4c-439e-9814-0c8133556a1f-StreamThread-1-consumer, groupId=97527H] Added READ_UNCOMMITTED fetch request for partition TOPIC-DEV-ACH-0 at position FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=kafka-2.broker-ttech-apd.ttech-broker.cloud.maif.local:24983 (id: 2 rack: bat_G), epoch=-1}} to node kafka-2.broker-ttech-apd.ttech-broker.cloud.maif.local:24983 (id: 2 rack: bat_G)
16:54:13.051 [97527H-54b434f8-fd4c-439e-9814-0c8133556a1f-StreamThread-1] DEBUG org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=97527H-54b434f8-fd4c-439e-9814-0c8133556a1f-StreamThread-1-consumer, groupId=97527H] Built incremental fetch (sessionId=1486821637, epoch=1) for node 2. Added 0 partition(s), altered 0 partition(s), removed 0 partition(s) out of 1 partition(s)
16:54:13.051 [97527H-54b434f8-fd4c-439e-9814-0c8133556a1f-StreamThread-1] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=97527H-54b434f8-fd4c-439e-9814-0c8133556a1f-StreamThread-1-consumer, groupId=97527H] Sending READ_UNCOMMITTED IncrementalFetchRequest(toSend=(), toForget=(), implied=(TOPIC-DEV-ACH-0)) to broker kafka-2.broker-ttech-apd.ttech-broker.cloud.maif.local:24983 (id: 2 rack: bat_G)
16:54:13.051 [97527H-54b434f8-fd4c-439e-9814-0c8133556a1f-StreamThread-1] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=97527H-54b434f8-fd4c-439e-9814-0c8133556a1f-StreamThread-1-consumer, groupId=97527H] Using older server API v8 to send FETCH {replica_id=-1,max_wait_time=500,min_bytes=1,max_bytes=52428800,isolation_level=0,session_id=1486821637,session_epoch=1,topics=[],forgotten_topics_data=[]} with correlation id 38 to node 2
16:54:13.160 [97527H-54b434f8-fd4c-439e-9814-0c8133556a1f-StreamThread-1] DEBUG org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=97527H-54b434f8-fd4c-439e-9814-0c8133556a1f-StreamThread-1-consumer, groupId=97527H] Node 1 sent an incremental fetch response for session 1744320104 with 0 response partition(s), 1 implied partition(s)
16:54:13.161 [97527H-54b434f8-fd4c-439e-9814-0c8133556a1f-StreamThread-1] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=97527H-54b434f8-fd4c-439e-9814-0c8133556a1f-StreamThread-1-consumer, groupId=97527H] Added READ_UNCOMMITTED fetch request for partition TOPIC-DEV-ACH-1 at position FetchPosition{offset=1, offsetEpoch=Optional[0], currentLeader=LeaderAndEpoch{leader=kafka-1.broker-ttech-apd.ttech-broker.cloud.maif.local:14983 (id: 1 rack: bat_I), epoch=-1}} to node kafka-1.broker-ttech-apd.ttech-broker.cloud.maif.local:14983 (id: 1 rack: bat_I)
16:54:13.161 [97527H-54b434f8-fd4c-439e-9814-0c8133556a1f-StreamThread-1] DEBUG org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=97527H-54b434f8-fd4c-439e-9814-0c8133556a1f-StreamThread-1-consumer, groupId=97527H] Built incremental fetch (sessionId=1744320104, epoch=2) for node 1. Added 0 partition(s), altered 0 partition(s), removed 0 partition(s) out of 1 partition(s)
16:54:13.161 [97527H-54b434f8-fd4c-439e-9814-0c8133556a1f-StreamThread-1] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=97527H-54b434f8-fd4c-439e-9814-0c8133556a1f-StreamThread-1-consumer, groupId=97527H] Sending READ_UNCOMMITTED IncrementalFetchRequest(toSend=(), toForget=(), implied=(TOPIC-DEV-ACH-1)) to broker kafka-1.broker-ttech-apd.ttech-broker.cloud.maif.local:14983 (id: 1 rack: bat_I)
16:54:13.161 [97527H-54b434f8-fd4c-439e-9814-0c8133556a1f-StreamThread-1] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=97527H-54b434f8-fd4c-439e-9814-0c8133556a1f-StreamThread-1-consumer, groupId=97527H] Using older server API v8 to send FETCH {replica_id=-1,max_wait_time=500,min_bytes=1,max_bytes=52428800,isolation_level=0,session_id=1744320104,session_epoch=2,topics=[],forgotten_topics_data=[]} with correlation id 39 to node 1
16:54:13.320 [kafka-coordinator-heartbeat-thread | 97527H] DEBUG org.apache.kafka.clients.consumer.internals.AbstractCoordinator - [Consumer clientId=97527H-54b434f8-fd4c-439e-9814-0c8133556a1f-StreamThread-1-consumer, groupId=97527H] Sending Heartbeat request to coordinator kafka-1.broker-ttech-apd.ttech-broker.cloud.maif.local:14983 (id: 2147483646 rack: null)
16:54:13.320 [97527H-54b434f8-fd4c-439e-9814-0c8133556a1f-StreamThread-1] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=97527H-54b434f8-fd4c-439e-9814-0c8133556a1f-StreamThread-1-consumer, groupId=97527H] Using older server API v2 to send HEARTBEAT {group_id=97527H,generation_id=2,member_id=97527H-54b434f8-fd4c-439e-9814-0c8133556a1f-StreamThread-1-consumer-9fefdaea-868a-4b22-a1fd-ad3564c3b019} with correlation id 40 to node 2147483646
16:54:13.322 [97527H-54b434f8-fd4c-439e-9814-0c8133556a1f-StreamThread-1] DEBUG org.apache.kafka.clients.consumer.internals.AbstractCoordinator - [Consumer clientId=97527H-54b434f8-fd4c-439e-9814-0c8133556a1f-StreamThread-1-consumer, groupId=97527H] Received successful Heartbeat response
16:54:13.552 [97527H-54b434f8-fd4c-439e-9814-0c8133556a1f-StreamThread-1] DEBUG org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=97527H-54b434f8-fd4c-439e-9814-0c8133556a1f-StreamThread-1-consumer, groupId=97527H] Node 2 sent an incremental fetch response for session 1486821637 with 0 response partition(s), 1 implied partition(s)
16:54:13.552 [97527H-54b434f8-fd4c-439e-9814-0c8133556a1f-StreamThread-1] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=97527H-54b434f8-fd4c-439e-9814-0c8133556a1f-StreamThread-1-consumer, groupId=97527H] Added READ_UNCOMMITTED fetch request for partition TOPIC-DEV-ACH-0 at position FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=kafka-2.broker-ttech-apd.ttech-broker.cloud.maif.local:24983 (id: 2 rack: bat_G), epoch=-1}} to node kafka-2.broker-ttech-apd.ttech-broker.cloud.maif.local:24983 (id: 2 rack: bat_G)
16:54:13.552 [97527H-54b434f8-fd4c-439e-9814-0c8133556a1f-StreamThread-1] DEBUG org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=97527H-54b434f8-fd4c-439e-9814-0c8133556a1f-StreamThread-1-consumer, groupId=97527H] Built incremental fetch (sessionId=1486821637, epoch=2) for node 2. Added 0 partition(s), altered 0 partition(s), removed 0 partition(s) out of 1 partition(s)
16:54:13.552 [97527H-54b434f8-fd4c-439e-9814-0c8133556a1f-StreamThread-1] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=97527H-54b434f8-fd4c-439e-9814-0c8133556a1f-StreamThread-1-consumer, groupId=97527H] Sending READ_UNCOMMITTED IncrementalFetchRequest(toSend=(), toForget=(), implied=(TOPIC-DEV-ACH-0)) to broker kafka-2.broker-ttech-apd.ttech-broker.cloud.maif.local:24983 (id: 2 rack: bat_G)
16:54:13.552 [97527H-54b434f8-fd4c-439e-9814-0c8133556a1f-StreamThread-1] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=97527H-54b434f8-fd4c-439e-9814-0c8133556a1f-StreamThread-1-consumer, groupId=97527H] Using older server API v8 to send FETCH {replica_id=-1,max_wait_time=500,min_bytes=1,max_bytes=52428800,isolation_level=0,session_id=1486821637,session_epoch=2,topics=[],forgotten_topics_data=[]} with correlation id 41 to node 2
16:54:13.620 [kafka-coordinator-heartbeat-thread | 97527H] DEBUG org.apache.kafka.clients.consumer.internals.AbstractCoordinator - [Consumer clientId=97527H-54b434f8-fd4c-439e-9814-0c8133556a1f-StreamThread-1-consumer, groupId=97527H] Sending Heartbeat request to coordinator kafka-1.broker-ttech-apd.ttech-broker.cloud.maif.local:14983 (id: 2147483646 rack: null)
16:54:13.620 [kafka-coordinator-heartbeat-thread | 97527H] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=97527H-54b434f8-fd4c-439e-9814-0c8133556a1f-StreamThread-1-consumer, groupId=97527H] Using older server API v2 to send HEARTBEAT {group_id=97527H,generation_id=2,member_id=97527H-54b434f8-fd4c-439e-9814-0c8133556a1f-StreamThread-1-consumer-9fefdaea-868a-4b22-a1fd-ad3564c3b019} with correlation id 42 to node 2147483646
16:54:13.626 [97527H-54b434f8-fd4c-439e-9814-0c8133556a1f-StreamThread-1] DEBUG org.apache.kafka.clients.consumer.internals.AbstractCoordinator - [Consumer clientId=97527H-54b434f8-fd4c-439e-9814-0c8133556a1f-StreamThread-1-consumer, groupId=97527H] Received successful Heartbeat response
16:54:13.664 [97527H-54b434f8-fd4c-439e-9814-0c8133556a1f-StreamThread-1] DEBUG org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=97527H-54b434f8-fd4c-439e-9814-0c8133556a1f-StreamThread-1-consumer, groupId=97527H] Node 1 sent an incremental fetch response for session 1744320104 with 0 response partition(s), 1 implied partition(s)
16:54:13.664 [97527H-54b434f8-fd4c-439e-9814-0c8133556a1f-StreamThread-1] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=97527H-54b434f8-fd4c-439e-9814-0c8133556a1f-StreamThread-1-consumer, groupId=97527H] Added READ_UNCOMMITTED fetch request for partition TOPIC-DEV-ACH-1 at position FetchPosition{offset=1, offsetEpoch=Optional[0], currentLeader=LeaderAndEpoch{leader=kafka-1.broker-ttech-apd.ttech-broker.cloud.maif.local:14983 (id: 1 rack: bat_I), epoch=-1}} to node kafka-1.broker-ttech-apd.ttech-broker.cloud.maif.local:14983 (id: 1 rack: bat_I)
16:54:13.665 [97527H-54b434f8-fd4c-439e-9814-0c8133556a1f-StreamThread-1] DEBUG org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=97527H-54b434f8-fd4c-439e-9814-0c8133556a1f-StreamThread-1-consumer, groupId=97527H] Built incremental fetch (sessionId=1744320104, epoch=3) for node 1. Added 0 partition(s), altered 0 partition(s), removed 0 partition(s) out of 1 partition(s)
16:54:13.665 [97527H-54b434f8-fd4c-439e-9814-0c8133556a1f-StreamThread-1] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=97527H-54b434f8-fd4c-439e-9814-0c8133556a1f-StreamThread-1-consumer, groupId=97527H] Sending READ_UNCOMMITTED IncrementalFetchRequest(toSend=(), toForget=(), implied=(TOPIC-DEV-ACH-1)) to broker kafka-1.broker-ttech-apd.ttech-broker.cloud.maif.local:14983 (id: 1 rack: bat_I)
16:54:13.665 [97527H-54b434f8-fd4c-439e-9814-0c8133556a1f-StreamThread-1] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=97527H-54b434f8-fd4c-439e-9814-0c8133556a1f-StreamThread-1-consumer, groupId=97527H] Using older server API v8 to send FETCH {replica_id=-1,max_wait_time=500,min_bytes=1,max_bytes=52428800,isolation_level=0,session_id=1744320104,session_epoch=3,topics=[],forgotten_topics_data=[]} with correlation id 43 to node 1
16:54:13.921 [kafka-coordinator-heartbeat-thread | 97527H] DEBUG org.apache.kafka.clients.consumer.internals.AbstractCoordinator - [Consumer clientId=97527H-54b434f8-fd4c-439e-9814-0c8133556a1f-StreamThread-1-consumer, groupId=97527H] Sending Heartbeat request to coordinator kafka-1.broker-ttech-apd.ttech-broker.cloud.maif.local:14983 (id: 2147483646 rack: null)
16:54:13.922 [97527H-54b434f8-fd4c-439e-9814-0c8133556a1f-StreamThread-1] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=97527H-54b434f8-fd4c-439e-9814-0c8133556a1f-StreamThread-1-consumer, groupId=97527H] Using older server API v2 to send HEARTBEAT {group_id=97527H,generation_id=2,member_id=97527H-54b434f8-fd4c-439e-9814-0c8133556a1f-StreamThread-1-consumer-9fefdaea-868a-4b22-a1fd-ad3564c3b019} with correlation id 44 to node 2147483646
16:54:13.925 [97527H-54b434f8-fd4c-439e-9814-0c8133556a1f-StreamThread-1] DEBUG org.apache.kafka.clients.consumer.internals.AbstractCoordinator - [Consumer clientId=97527H-54b434f8-fd4c-439e-9814-0c8133556a1f-StreamThread-1-consumer, groupId=97527H] Received successful Heartbeat response
16:54:14.054 [97527H-54b434f8-fd4c-439e-9814-0c8133556a1f-StreamThread-1] DEBUG org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=97527H-54b434f8-fd4c-439e-9814-0c8133556a1f-StreamThread-1-consumer, groupId=97527H] Node 2 sent an incremental fetch response for session 1486821637 with 0 response partition(s), 1 implied partition(s)
16:54:14.055 [97527H-54b434f8-fd4c-439e-9814-0c8133556a1f-StreamThread-1] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=97527H-54b434f8-fd4c-439e-9814-0c8133556a1f-StreamThread-1-consumer, groupId=97527H] Added READ_UNCOMMITTED fetch request for partition TOPIC-DEV-ACH-0 at position FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=kafka-2.broker-ttech-apd.ttech-broker.cloud.maif.local:24983 (id: 2 rack: bat_G), epoch=-1}} to node kafka-2.broker-ttech-apd.ttech-broker.cloud.maif.local:24983 (id: 2 rack: bat_G)
16:54:14.056 [97527H-54b434f8-fd4c-439e-9814-0c8133556a1f-StreamThread-1] DEBUG org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=97527H-54b434f8-fd4c-439e-9814-0c8133556a1f-StreamThread-1-consumer, groupId=97527H] Built incremental fetch (sessionId=1486821637, epoch=3) for node 2. Added 0 partition(s), altered 0 partition(s), removed 0 partition(s) out of 1 partition(s)
16:54:14.056 [97527H-54b434f8-fd4c-439e-9814-0c8133556a1f-StreamThread-1] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=97527H-54b434f8-fd4c-439e-9814-0c8133556a1f-StreamThread-1-consumer, groupId=97527H] Sending READ_UNCOMMITTED IncrementalFetchRequest(toSend=(), toForget=(), implied=(TOPIC-DEV-ACH-0)) to broker kafka-2.broker-ttech-apd.ttech-broker.cloud.maif.local:24983 (id: 2 rack: bat_G)
16:54:14.056 [97527H-54b434f8-fd4c-439e-9814-0c8133556a1f-StreamThread-1] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=97527H-54b434f8-fd4c-439e-9814-0c8133556a1f-StreamThread-1-consumer, groupId=97527H] Using older server API v8 to send FETCH {replica_id=-1,max_wait_time=500,min_bytes=1,max_bytes=52428800,isolation_level=0,session_id=1486821637,session_epoch=3,topics=[],forgotten_topics_data=[]} with correlation id 45 to node 2
16:54:14.167 [97527H-54b434f8-fd4c-439e-9814-0c8133556a1f-StreamThread-1] DEBUG org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=97527H-54b434f8-fd4c-439e-9814-0c8133556a1f-StreamThread-1-consumer, groupId=97527H] Node 1 sent an incremental fetch response for session 1744320104 with 0 response partition(s), 1 implied partition(s)
16:54:14.168 [97527H-54b434f8-fd4c-439e-9814-0c8133556a1f-StreamThread-1] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=97527H-54b434f8-fd4c-439e-9814-0c8133556a1f-StreamThread-1-consumer, groupId=97527H] Added READ_UNCOMMITTED fetch request for partition TOPIC-DEV-ACH-1 at position FetchPosition{offset=1, offsetEpoch=Optional[0], currentLeader=LeaderAndEpoch{leader=kafka-1.broker-ttech-apd.ttech-broker.cloud.maif.local:14983 (id: 1 rack: bat_I), epoch=-1}} to node kafka-1.broker-ttech-apd.ttech-broker.cloud.maif.local:14983 (id: 1 rack: bat_I)
16:54:14.168 [97527H-54b434f8-fd4c-439e-9814-0c8133556a1f-StreamThread-1] DEBUG org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=97527H-54b434f8-fd4c-439e-9814-0c8133556a1f-StreamThread-1-consumer, groupId=97527H] Built incremental fetch (sessionId=1744320104, epoch=4) for node 1. Added 0 partition(s), altered 0 partition(s), removed 0 partition(s) out of 1 partition(s)
16:54:14.168 [97527H-54b434f8-fd4c-439e-9814-0c8133556a1f-StreamThread-1] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=97527H-54b434f8-fd4c-439e-9814-0c8133556a1f-StreamThread-1-consumer, groupId=97527H] Sending READ_UNCOMMITTED IncrementalFetchRequest(toSend=(), toForget=(), implied=(TOPIC-DEV-ACH-1)) to broker kafka-1.broker-ttech-apd.ttech-broker.cloud.maif.local:14983 (id: 1 rack: bat_I)
16:54:14.168 [97527H-54b434f8-fd4c-439e-9814-0c8133556a1f-StreamThread-1] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=97527H-54b434f8-fd4c-439e-9814-0c8133556a1f-StreamThread-1-consumer, groupId=97527H] Using older server API v8 to send FETCH {replica_id=-1,max_wait_time=500,min_bytes=1,max_bytes=52428800,isolation_level=0,session_id=1744320104,session_epoch=4,topics=[],forgotten_topics_data=[]} with correlation id 46 to node 1

Best Answer

This is the log4j configuration of your Java application; it is not specific to Kafka.

Add or edit the log4j.properties file in your application's src/main/resources folder.

Set the root logger level to INFO instead of DEBUG:

log4j.rootLogger=INFO, stdout

log4j.appender.stdout=org.apache.log4j.ConsoleAppender
log4j.appender.stdout.layout=org.apache.log4j.PatternLayout
log4j.appender.stdout.layout.ConversionPattern=[%d] %p %m (%c)%n
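If you want to keep your own code at DEBUG and silence only the noisy Kafka client internals (the AbstractCoordinator, Fetcher, and NetworkClient loggers visible in the log above), you can set a level on the `org.apache.kafka` logger hierarchy instead of the root logger. A minimal sketch in log4j 1.x properties syntax:

```properties
# Root stays at DEBUG so your own application logging is unaffected
log4j.rootLogger=DEBUG, stdout

log4j.appender.stdout=org.apache.log4j.ConsoleAppender
log4j.appender.stdout.layout=org.apache.log4j.PatternLayout
log4j.appender.stdout.layout.ConversionPattern=[%d] %p %m (%c)%n

# Raise only the Kafka client/streams packages to INFO (or WARN)
log4j.logger.org.apache.kafka=INFO
```

Per-logger levels override the root level for everything under that package prefix, so this suppresses the heartbeat and fetch chatter without hiding your own debug output.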

(Or find the equivalent configuration for Logback, or whatever logging library you use.)
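For reference, a minimal Logback equivalent would be a `logback.xml` on the classpath (e.g. in src/main/resources); this is a sketch, assuming you use Logback's standard console appender, with the same `org.apache.kafka` logger name as in the log4j case:

```xml
<configuration>
  <appender name="STDOUT" class="ch.qos.logback.core.ConsoleAppender">
    <encoder>
      <pattern>[%d] %p %m (%c)%n</pattern>
    </encoder>
  </appender>

  <!-- Quiet the Kafka client internals -->
  <logger name="org.apache.kafka" level="INFO"/>

  <root level="INFO">
    <appender-ref ref="STDOUT"/>
  </root>
</configuration>
```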

Regarding "java - How to change the log level of KafkaStream", we found a similar question on Stack Overflow: https://stackoverflow.com/questions/58629398/
