Channel: Kafka Timeline

Recall: Socket is not connected error while consuming messages using kafka

Pradeep Simha (WT01 - Business Outcome Services) would like to recall the message, "Socket is not connected error while consuming messages using kafka".

Informal Kafka logstash user survey

Hi everyone,

The maintainers of logstash are considering adding my Kafka plugin (https://github.com/joekiller/logstash-kafka ) to the core of logstash. They are asking for some +1s from users at the following link, so please feel free to chime in: https://groups.google.com/forum/m/#!topic/logstash-users/n1NKYDOfnuU

-Joe Lawson

Ability to Inject a Queue Implementation in Async Mode

Kafka Version: 0.8.x

1) The ability to define which messages get dropped (least recent instead
of most recent in the queue).
2) An unbounded queue, to find the upper limit without dropping any
messages for the application (use case: stress testing).
3) A priority blocking queue (meaning a single producer can send messages
to multiple topics, and I would like to give delivery priority to messages
for a particular topic).

We have use cases for #3 and #1, since we would like to deliver the
application heartbeat first, before any other event in the queue for any
topic. To reduce the number of TCP connections, we use only one producer
for 4 topics, but one of the topics needs delivery priority. (A sketch of
the idea follows below.)
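
To make #3 concrete, here is a rough application-side sketch of the idea (a
hypothetical class, not an existing Kafka feature): a PriorityBlockingQueue
in front of the one shared producer, so heartbeat messages are always
drained first. The topic name and capacity are illustrative assumptions,
and ordering among equal-priority messages is not guaranteed to be FIFO:

import java.util.Comparator;
import java.util.concurrent.PriorityBlockingQueue;
import kafka.javaapi.producer.Producer;
import kafka.producer.KeyedMessage;

public class PriorityProducerQueue {
    // Hypothetical heartbeat topic; its messages jump the queue.
    private static final String HEARTBEAT_TOPIC = "app-heartbeat";

    private final PriorityBlockingQueue<KeyedMessage<String, String>> queue =
            new PriorityBlockingQueue<KeyedMessage<String, String>>(1024,
                    Comparator.comparingInt((KeyedMessage<String, String> m) ->
                            HEARTBEAT_TOPIC.equals(m.topic()) ? 0 : 1));

    public PriorityProducerQueue(final Producer<String, String> producer) {
        Thread drainer = new Thread(new Runnable() {
            public void run() {
                try {
                    while (true) {
                        producer.send(queue.take()); // heartbeats always drain first
                    }
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            }
        });
        drainer.setDaemon(true);
        drainer.start();
    }

    public void enqueue(KeyedMessage<String, String> message) {
        queue.offer(message); // unbounded, so nothing is dropped (relates to #2)
    }
}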

Please let me know whether this would be a useful feature to have.

Thanks in advance for the great support!

Thanks,

Bhavesh

P.S. Sorry for asking this question again, but last time there was no
conclusion.

Uniform Distribution of Messages for a Topic Across Partitions Without Affecting Performance

How to achieve uniform distribution of non-keyed messages per topic across
all partitions?

We have tried to achieve this uniform distribution across partitions using
a custom partitioner on each producer instance doing round robin
(count(messages) % number of partitions for the topic). This strategy
results in very poor performance, so we have switched back to the random
stickiness Kafka provides out of the box, which keeps a randomly chosen
partition per topic for some interval (10 minutes, not sure exactly).
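
For reference, the round-robin partitioner we tried looks roughly like the
sketch below (assuming 0.8.1's non-generic Partitioner trait). Note that in
0.8 the partitioner is only consulted for messages with a non-null key;
null-keyed messages take the sticky random-partition path, which is
refreshed every topic.metadata.refresh.interval.ms (10 minutes by default):

import java.util.concurrent.atomic.AtomicLong;
import kafka.producer.Partitioner;
import kafka.utils.VerifiableProperties;

// Enabled by setting partitioner.class to this class's fully qualified name.
public class RoundRobinPartitioner implements Partitioner {
    private final AtomicLong counter = new AtomicLong(0);

    // 0.8 instantiates partitioners reflectively with this constructor signature.
    public RoundRobinPartitioner(VerifiableProperties props) {
    }

    public int partition(Object key, int numPartitions) {
        // Ignore the key and rotate: the count(messages) % partitions scheme above.
        return (int) (counter.getAndIncrement() % numPartitions);
    }
}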

The sticky-random strategy sometimes results in consumer-side lag for some
partitions, because some applications/producers produce more messages for
the same topic than other servers do.

Could Kafka provide out-of-the-box uniform distribution by coordinating
among all producers, relying on a measured rate such as messages per minute
or bytes produced per minute, to achieve uniform distribution and
coordinate partition stickiness among hundreds of producers for the same
topic?

Thanks,

Bhavesh

Conflicting stored data in ZooKeeper

Hi, everyone.

I'm using 0.8.1.1, and I have 8 brokers and 3 topics, each topic with 16
partitions and 3 replicas.

I am seeing log entries I haven't seen before, like the ones below; they occur every 5 seconds.

[2014-08-05 11:11:32,478] INFO conflict in /brokers/ids/2 data:
{"jmx_port":9992,"timestamp":"1407204339990","host":"172.25.63.9","version":1,"port":9092}
stored data:
{"jmx_port":9992,"timestamp":"1407204133312","host":"172.25.63.9","version":1,"port":9092}
(kafka.utils.ZkUtils$)
[2014-08-05 11:11:32,479] INFO I wrote this conflicted ephemeral node
[{"jmx_port":9992,"timestamp":"1407204339990","host":"172.25.63.9","version":1,"port":9092}]
at /brokers/ids/2 a while back in a different session, hence I will backoff
for this node to be deleted by Zookeeper and retry (kafka.utils.ZkUtils$)

I would like to know what causes these messages.
Since they are INFO rather than ERROR, are they harmless? And how can I make them stop?

Thanks in advance.

Java Developer Engineer
Seoul, Korea
Mobile: +82-10-9369-1314
Email: bongyeonkim [ at ] gmail.com
Twitter: http://twitter.com/tigerby
Facebook: http://facebook.com/tigerby
Wiki: http://tigerby.com

high level consumer api blocked forever

Hi, everyone.

I have run into a strange case: my consumer using the high-level API worked
fine at first, but a couple of days later it blocked in
ConsumerIterator.hasNext(), even though there are pending messages on the
topic: with kafka-console-consumer.sh I can see a continuous stream of
messages.

Then I connected to the consumer process's JDWP port with Eclipse,
suspended the consumer thread and the ConsumerFetcherThread, and found the
ConsumerIterator blocked on channel.take and the ConsumerFetcherThread
blocked on PartitionTopicInfo.chunkQueue.put, but channel and chunkQueue
are different objects... So the ConsumerFetcherThread is trying to put into
a full LinkedBlockingQueue while the ConsumerIterator is trying to take
from an empty LinkedBlockingQueue. Even stranger, with one topic that has
three partitions, the 3 PartitionTopicInfo.chunkQueues and
ConsumerIterator.channel are 4 different objects.

And this happens pretty frequently. Has anyone encountered this, or is this
a known issue? Any information would be helpful.

Thanks a lot!

Consumer is never shut down

Hi,

I just started with Apache Kafka and wrote a high level consumer program
following the example given here
https://cwiki.apache.org/confluence/display/KAFKA/Consumer+Group+Example.

Though I was able to run the program and consume messages, I have one doubt
regarding *consumer.shutdown()*: it never gets called. I used the piece of
code below to verify:

if (consumer != null) {
    System.out.println("shutting down consumer");
    consumer.shutdown();
}

Has someone encountered this before? Also, even though the consumer didn't
shut down, I didn't notice any problem. Is shutting down really needed?
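
One thing I tried while investigating was hanging the shutdown off a JVM
shutdown hook, which at least guarantees it runs on exit. A minimal sketch,
assuming "consumer" is a final or field reference to the ConsumerConnector
from the example; as far as I understand, shutdown() deregisters the
consumer from ZooKeeper (and commits offsets when auto-commit is enabled)
instead of waiting for the ZooKeeper session to time out:

Runtime.getRuntime().addShutdownHook(new Thread(new Runnable() {
    public void run() {
        if (consumer != null) {
            System.out.println("shutting down consumer");
            consumer.shutdown(); // deregister from ZooKeeper; commit offsets if auto-commit is on
        }
    }
}));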

Regards
Anand

Is there a way to stop/pause producers from writing?

$
0
0
Is there a tool, or a way using the Kafka admin API, to tell a producer to
stop/pause writing to a specific topic?

The use case is basically that we need to stop writing to a topic, let the
consumers get caught up, and then deploy some new code for either the
producers or the consumers. Our producers will have a way of internally
queuing up messages until we finish the deploy and bring the topic back up.

Thanks!

kafka.SocketServerStats write rate metrics seem wrong

Hi.
I'm seeing some odd numbers from kafka.SocketServerStats.

Ideally, I'd like stats broken down per topic, e.g. what are our most
written/read topics? For write rates, I've got a separate process iterating
over topics every minute, computing:
(head_offset_now - head_offset_last) / (time_now - time_last)
and inserting that into graphite (it was previously inserting the raw
head_offset, but graphite's derivative() on 2000+ topics was a bit much).

A graphite sumSeries() across them shows 20-25 MB/s
on a single-instance Kafka 0.7.2, which sounds correct,
but when I compare that to the kafka.SocketServerStats
we're collecting in graphite, derivative(TotalBytesWritten)
shows 3.0 GB/s and BytesWrittenPerSecond shows
200 KB/s, neither of which jibes with my offset-based rate.

Is anyone aware of any weirdness with those stats?

thanks in advance for any insight,
-neil

Apache webserver access logs + Kafka producer

Hi,

I want to collect Apache web server logs in real time and send them to a
Kafka server. Is there an existing producer available for this? If not, can
you please suggest a way to implement it? (A rough sketch follows below.)
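
If nothing ready-made fits (the logstash-kafka plugin mentioned earlier in
this digest, or Flume, are options), a hand-rolled producer is not much
code. Below is a minimal sketch against the 0.8 producer API; the broker
address, topic name and log path are placeholder assumptions, and this
naive tail reads the whole existing file first and does not handle log
rotation:

import java.io.BufferedReader;
import java.io.FileReader;
import java.util.Properties;
import kafka.javaapi.producer.Producer;
import kafka.producer.KeyedMessage;
import kafka.producer.ProducerConfig;

public class AccessLogProducer {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put("metadata.broker.list", "localhost:9092"); // placeholder broker
        props.put("serializer.class", "kafka.serializer.StringEncoder");
        Producer<String, String> producer =
                new Producer<String, String>(new ProducerConfig(props));

        BufferedReader reader =
                new BufferedReader(new FileReader("/var/log/apache2/access.log"));
        while (true) {
            String line = reader.readLine();
            if (line == null) {
                Thread.sleep(500); // wait for the web server to append more lines
            } else {
                producer.send(new KeyedMessage<String, String>("apache-access-logs", line));
            }
        }
    }
}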

Regards,
Sree.

trying to tune kafka's internal logging - need help...

Hi,

I am trying to get rid of the log files written under "$base_dir/logs",
the folder created by line 26 of "bin/kafka-run-class.sh".

I use an EC2 machine with a small primary disk, and it fills up on occasion
when writing to these logs is excessive; I have bumped into a few such
incidents already (from JIRA it looks like you guys know about them).

I tried exporting "LOG_DIR" and "KAFKA_LOG4J_OPTS", with no luck so far...

What log4j properties file should be put where to squelch that logging? Is
there any such file?
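
The direction I am experimenting with, hedged since the script details vary
across 0.8.x releases: the broker reads config/log4j.properties, and in
some releases kafka-server-start.sh exports its own KAFKA_LOG4J_OPTS
pointing there, which would explain why exporting it myself had no effect.
Editing config/log4j.properties directly seems to be the reliable route.
The stock file uses DailyRollingFileAppender, which rolls daily but never
caps total size; a bounded RollingFileAppender on a bigger disk would look
like this (path and sizes are illustrative):

# config/log4j.properties (sketch)
log4j.rootLogger=INFO, kafkaAppender
log4j.appender.kafkaAppender=org.apache.log4j.RollingFileAppender
log4j.appender.kafkaAppender.File=/mnt/bigdisk/kafka-logs/server.log
log4j.appender.kafkaAppender.MaxFileSize=10MB
log4j.appender.kafkaAppender.MaxBackupIndex=5
log4j.appender.kafkaAppender.layout=org.apache.log4j.PatternLayout
log4j.appender.kafkaAppender.layout.ConversionPattern=[%d] %p %m (%c)%n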

P.S.

Also, I saw that SCALA_VERSION defaults to 2.8.0 even in the distributions
built for other Scala versions.

Should I set it to 2.9.2/2.10/etc.?

Are there any other vars to take into account?

10x,

Shlomi

Issue with kafka-server-start.sh

Hi,

I am working with kafka_2.8.0-0.8.1.1. It used to work fine, but since this
morning, when I try to start the Kafka server after starting ZooKeeper, I
get this error:

INFO I wrote this conflicted ephemeral node
[{"version":1,"brokerid":0,"timestamp":"1407357554030"}] at /controller a
while back in a different session, hence I will backoff for this node to be
deleted by Zookeeper and retry (kafka.utils.ZkUtils$)

[2014-08-06 13:49:22,103] INFO conflict in /controller data:
{"version":1,"brokerid":0,"timestamp":"1407357554030"} stored data:
{"version":1,"brokerid":0,"timestamp":"1407349766568"}
(kafka.utils.ZkUtils$)

I tried stopping both Kafka and ZooKeeper and clearing all the logs, then
starting again, but I am still getting this error.

Can you guide me to the root cause?

Thank you very much!

Shikha

consumer rebalance weirdness

We've noticed that some of our consumers are more likely to repeatedly
trigger rebalancing when the app is consuming messages more slowly (e.g.
persisting data to back-end systems, etc.).

If, on the other hand, we 'fast-forward' the consumer (which essentially
means we tell it to consume but do nothing with the messages until it is
all caught up), it never decides to rebalance during that time. So it can
go hours without rebalancing while fast-forwarding and consuming super
fast, while during normal processing it might decide to rebalance every
minute or so.

Is there any simple explanation for this?

The trigger usually logged for a rebalance is that "topic info for path X
has changed to Y, triggering rebalance".

Thanks for any ideas.

We'd like to reduce the rebalancing, as it essentially slows down
consumption each time it happens.

Thanks

Jason

error recovery in multiple thread reading from Kafka with HighLevel api

Folks,
I have a process that starts at a specific time and reads from a specific
topic. I am currently using the high-level API (consumer group) to read
from Kafka (and will stop once there is nothing left in the topic, by
specifying a timeout). I am most concerned about error recovery in a
multi-threaded context: if one thread dies, will the other running bolt
threads pick up the failed message, or do I have to start another thread in
order to pick it up? What would be good practice to ensure each message is
processed at least once?

Note that all threads are using the same group id.
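
For context, my setup looks roughly like the sketch below (topic, group id
and thread count are illustrative). As far as I understand, rebalancing in
the high-level consumer happens per connector (per process), not per
thread, so a thread that dies inside a live process does not hand its
stream to its siblings; the stream simply stops being drained:

import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.Properties;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import kafka.consumer.Consumer;
import kafka.consumer.ConsumerConfig;
import kafka.consumer.ConsumerIterator;
import kafka.consumer.ConsumerTimeoutException;
import kafka.consumer.KafkaStream;
import kafka.javaapi.consumer.ConsumerConnector;

public class TimedConsumerGroup {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("zookeeper.connect", "localhost:2181");
        props.put("group.id", "my-group");
        props.put("consumer.timeout.ms", "10000"); // hasNext() throws after 10s idle

        ConsumerConnector consumer =
                Consumer.createJavaConsumerConnector(new ConsumerConfig(props));
        Map<String, Integer> topicCountMap = new HashMap<String, Integer>();
        topicCountMap.put("my-topic", 3); // 3 streams -> 3 worker threads
        List<KafkaStream<byte[], byte[]>> streams =
                consumer.createMessageStreams(topicCountMap).get("my-topic");

        ExecutorService executor = Executors.newFixedThreadPool(streams.size());
        for (final KafkaStream<byte[], byte[]> stream : streams) {
            executor.submit(new Runnable() {
                public void run() {
                    ConsumerIterator<byte[], byte[]> it = stream.iterator();
                    try {
                        while (it.hasNext()) {
                            process(it.next().message());
                        }
                    } catch (ConsumerTimeoutException e) {
                        // topic has been idle long enough; let this worker exit
                    }
                }
            });
        }
        executor.shutdown();
    }

    static void process(byte[] message) { /* application logic */ }
}

With auto-commit on, an offset may be committed before a slow thread
finishes processing, so for at-least-once semantics I am considering
turning auto-commit off and calling consumer.commitOffsets() only after a
batch has been fully processed.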

Thanks,
Chen

Architecture: amount of partitions

Dear all,

I'm new to Kafka, and I'm considering using it for a maybe not very usual
purpose. I want it to be a backend for data synchronization between a
multitude of devices which are not always online (mobile and embedded
devices). All the synchronized information belongs to some user and can be
identified by the user id. There are several data types, and a user can
have many entries of each data type coming from many different devices.

This solution has to scale up to hundreds of thousands of users, and, as
far as I understand, Kafka stores every partition in a single file. I've
been thinking about creating a topic for every data type and a separate
partition for every user. The amount of data stored by each user is no more
than several megabytes over the whole lifetime, because the data stored
would be keyed messages, and I'm expecting them to be compacted.

So what I'm wondering is: would Kafka be the right approach for such a
task, and if yes, would this architecture (one topic per data type and one
partition per user) scale to the specified extent?

Thanks,

Roman.

A weird producer connection timeout issue

A Kafka producer frequently times out when connecting to a remote Kafka cluster, while producers on other machines (same data center) can connect to the Kafka cluster with no problem. From the monitoring, the ProducerQueueSize is always full and the message send rate is low. We use Kafka 0.8, with "batch.num.messages=10000" and "queue.buffering.max.ms=5000".
 
Here is the error message:
[2014-08-08 17:52:02,786] ProducerSendThread producer.SyncProducer ERROR Producer connection to kafka-XXX.com:9092 unsuccessful
java.net.ConnectException: Connection timed out
        at sun.nio.ch.Net.connect(Native Method)
        at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:525)
        at kafka.network.BlockingChannel.connect(BlockingChannel.scala:57)
        at kafka.producer.SyncProducer.connect(SyncProducer.scala:146)
        at kafka.producer.SyncProducer.getOrMakeConnection(SyncProducer.scala:161)
        at kafka.producer.SyncProducer.kafka$producer$SyncProducer$$doSend(SyncProducer.scala:68)
        at kafka.producer.SyncProducer.send(SyncProducer.scala:112)
        at kafka.client.ClientUtils$.fetchTopicMetadata(ClientUtils.scala:53)
        at kafka.producer.BrokerPartitionInfo.updateInfo(BrokerPartitionInfo.scala:82)
        at kafka.producer.async.DefaultEventHandler$$anonfun$handle$1.apply$mcV$sp(DefaultEventHandler.scala:67)
        at kafka.utils.Utils$.swallow(Utils.scala:187)
        at kafka.utils.Logging$class.swallowError(Logging.scala:105)
        at kafka.utils.Utils$.swallowError(Utils.scala:46)
        at kafka.producer.async.DefaultEventHandler.handle(DefaultEventHandler.scala:67)
        at kafka.producer.async.ProducerSendThread.tryToHandle(ProducerSendThread.scala:104)
        at kafka.producer.async.ProducerSendThread$$anonfun$processEvents$3.apply(ProducerSendThread.scala:87)
        at kafka.producer.async.ProducerSendThread$$anonfun$processEvents$3.apply(ProducerSendThread.scala:67)
        at scala.collection.immutable.Stream.foreach(Stream.scala:548)
        at kafka.producer.async.ProducerSendThread.processEvents(ProducerSendThread.scala:66)
        at kafka.producer.async.ProducerSendThread.run(ProducerSendThread.scala:44)

Error while producing messages

Hi,

I am trying to do a capacity-sizing estimate for our Kafka cluster. I started with a 5-broker cluster and a 3-node ZooKeeper ensemble, and used a simple Java-based producer to send messages to 5 topics created in the cluster. I used 2 client machines with 100 worker threads each, sending messages continuously. I didn't see any exceptions or issues when all 5 brokers were up. When I take down 3 of the 5 brokers and 4 of the 5 topics, I see the entries below in the broker logs.

[2014-08-08 16:47:34,977] DEBUG Closing connection from /10.254.243.142:33944 (kafka.network.Processor)

[2014-08-08 16:47:35,179] DEBUG Accepted connection from /10.254.243.142 on /10.66.107.231:9092. sendBufferSize [actual|requested]: [131071|1048576] recvBufferSize [actual|requested]: [131071|1048576] (kafka.network.Acceptor)

[2014-08-08 16:47:35,179] DEBUG Processor 860 listening to new connection from /10.254.243.142:33947 (kafka.network.Processor)

[2014-08-08 16:47:35,179] DEBUG [KafkaApi-1] Error while fetching metadata for [item_topic_0,0]. Possible cause: null (kafka.server.KafkaApis)

[2014-08-08 16:47:35,207] INFO Closing socket connection to /10.254.243.142. (kafka.network.Processor)

[2014-08-08 16:47:35,207] DEBUG Closing connection from /10.254.243.142:33947 (kafka.network.Processor)

[2014-08-08 16:47:35,294] DEBUG Accepted connection from /10.254.243.142 on /10.66.107.231:9092. sendBufferSize [actual|requested]: [131071|1048576] recvBufferSize [actual|requested]: [131071|1048576] (kafka.network.Acceptor)

On the client machine (producer) I am seeing the error below. Again, only one topic and 2 broker nodes are running.

Exception in thread "pool-1-thread-107" kafka.common.FailedToSendMessageException: Failed to send messages after 3 tries.

Sending message 181171284512 for topic item_topic_0

at kafka.producer.async.DefaultEventHandler.handle(DefaultEventHandler.scala:90)

at kafka.producer.Producer.send(Producer.scala:76)

at kafka.javaapi.producer.Producer.send(Producer.scala:33)

at com.ebay.cassini.feeder.nrt.kafka.producer.WorkerThread.produceKafkaMessage(WorkerThread.java:33)

at com.ebay.cassini.feeder.nrt.kafka.producer.WorkerThread.run(WorkerThread.java:27)

at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)

at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)

at java.lang.Thread.run(Thread.java:662)

Exception in thread "pool-1-thread-108" kafka.common.FailedToSendMessageException: Failed to send messages after 3 tries.

Sending message 221252534837 for topic item_topic_0

at kafka.producer.async.DefaultEventHandler.handle(DefaultEventHandler.scala:90)

at kafka.producer.Producer.send(Producer.scala:76)

at kafka.javaapi.producer.Producer.send(Producer.scala:33)

at com.ebay.cassini.feeder.nrt.kafka.producer.WorkerThread.produceKafkaMessage(WorkerThread.java:33)

at com.ebay.cassini.feeder.nrt.kafka.producer.WorkerThread.run(WorkerThread.java:27)

at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)

at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)

at java.lang.Thread.run(Thread.java:662)

Not all messages are erroring out; only a few are failing. Any idea what could be going on?
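
One thing I am experimenting with, as a hedged guess at the cause: taking brokers down forces leader re-election, and the 0.8 producer gives up after message.send.max.retries attempts (default 3, matching the "after 3 tries" in the exception), so only the sends attempted during the election window fail while the rest go through. Raising the retry settings may smooth over the transition (values below are illustrative):

# 0.8 producer properties (sketch)
message.send.max.retries=10   # default 3, the count seen in the exception
retry.backoff.ms=500          # default 100; gives leader election time to finish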

Thanks,

Raj Tanneru

getting socket timeout

Hi Team,

I am a bit new to Kafka. I was trying to set up Kafka 0.8 and connect it to
the Druid firehose. My console producer works fine, but when I try to
connect to it using Java code I get a socket timeout exception, even after
increasing the timeout to 2 minutes.

We get the socket timeout when fetching metadata from the broker. Here is
the stack trace:

[ERROR] 2014-08-10 20:52:02,671 AsyncAppender-Worker-Thread-5
[airpricingservice k.p.async.DefaultEventHandler] - [] Failed to collate
messages by topic, partition due to: fetching topic metadata for topics
[Set(wikipedia)] from broker [ArrayBuffer(id:0,host:localhost,port:9092)]
failed

[ERROR] 2014-08-10 20:52:12,774 AsyncAppender-Worker-Thread-5
[airpricingservice kafka.utils.Utils$] - [] fetching topic metadata for
topics [Set(wikipedia)] from broker
[ArrayBuffer(id:0,host:localhost,port:9092)] failed

kafka.common.KafkaException: fetching topic metadata for topics
[Set(wikipedia)] from broker [ArrayBuffer(id:0,host:localhost,port:9092)]
failed

at kafka.client.ClientUtils$.fetchTopicMetadata(Unknown Source)
~[AirPricingService-1.0.4-SNAPSHOT.jar:na]

at kafka.producer.BrokerPartitionInfo.updateInfo(Unknown Source)
~[AirPricingService-1.0.4-SNAPSHOT.jar:na]

at
kafka.producer.async.DefaultEventHandler$$anonfun$handle$2.apply$mcV$sp(Unknown
Source) ~[AirPricingService-1.0.4-SNAPSHOT.jar:na]

at kafka.utils.Utils$.swallow(Unknown Source)
[AirPricingService-1.0.4-SNAPSHOT.jar:na]

at kafka.utils.Logging$class.swallowError(Unknown Source)
[AirPricingService-1.0.4-SNAPSHOT.jar:na]

at kafka.utils.Utils$.swallowError(Unknown Source)
[AirPricingService-1.0.4-SNAPSHOT.jar:na]

at kafka.producer.async.DefaultEventHandler.handle(Unknown Source)
[AirPricingService-1.0.4-SNAPSHOT.jar:na]

at kafka.producer.Producer.send(Unknown Source)
[AirPricingService-1.0.4-SNAPSHOT.jar:na]

at kafka.javaapi.producer.Producer.send(Unknown Source)
[AirPricingService-1.0.4-SNAPSHOT.jar:na]

at
com.expedia.service.air.pricing.manager.logger.appender.KafkaAppender.append(KafkaAppender.java:69)
[AirPricingService-1.0.4-SNAPSHOT.jar:na]

at
com.expedia.service.air.pricing.manager.logger.appender.KafkaAppender.append(KafkaAppender.java:12)
[AirPricingService-1.0.4-SNAPSHOT.jar:na]

at ch.qos.logback.core.AppenderBase.doAppend(AppenderBase.java:85)
[AirPricingService-1.0.4-SNAPSHOT.jar:na]

at
ch.qos.logback.core.spi.AppenderAttachableImpl.appendLoopOnAppenders(AppenderAttachableImpl.java:48)
[AirPricingService-1.0.4-SNAPSHOT.jar:na]

at
ch.qos.logback.core.AsyncAppenderBase$Worker.run(AsyncAppenderBase.java:226)
[AirPricingService-1.0.4-SNAPSHOT.jar:na]

Caused by: java.net.SocketTimeoutException: null

at sun.nio.ch.SocketAdaptor$SocketInputStream.read(SocketAdaptor.java:229)
~[na:1.7.0_40]

at sun.nio.ch.ChannelInputStream.read(ChannelInputStream.java:103)
~[na:1.7.0_40]

at
java.nio.channels.Channels$ReadableByteChannelImpl.read(Channels.java:385)
~[na:1.7.0_40]

at kafka.utils.Utils$.read(Unknown Source)
[AirPricingService-1.0.4-SNAPSHOT.jar:na]

at kafka.network.BoundedByteBufferReceive.readFrom(Unknown Source)
~[AirPricingService-1.0.4-SNAPSHOT.jar:na]

at kafka.network.Receive$class.readCompletely(Unknown Source)
~[AirPricingService-1.0.4-SNAPSHOT.jar:na]

at kafka.network.BoundedByteBufferReceive.readCompletely(Unknown Source)
~[AirPricingService-1.0.4-SNAPSHOT.jar:na]

at kafka.network.BlockingChannel.receive(Unknown Source)
~[AirPricingService-1.0.4-SNAPSHOT.jar:na]

at kafka.producer.SyncProducer.liftedTree1$1(Unknown Source)
~[AirPricingService-1.0.4-SNAPSHOT.jar:na]

at kafka.producer.SyncProducer.kafka$producer$SyncProducer$$doSend(Unknown
Source) ~[AirPricingService-1.0.4-SNAPSHOT.jar:na]

at kafka.producer.SyncProducer.send(Unknown Source)
~[AirPricingService-1.0.4-SNAPSHOT.jar:na]

... 14 common frames omitted

I suspect some config mismatch, but my console producer's settings are the
same as my Java producer's.
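
One hedged observation from the stack trace: the producer fetches metadata
from broker [id:0,host:localhost,port:9092], i.e. the broker has registered
itself in ZooKeeper as "localhost". If the Java producer runs on a
different machine than the console producer, that address is unreachable
there and the metadata fetch times out. Making the broker register an
externally resolvable address in server.properties might fix it; the
hostname below is hypothetical:

# config/server.properties (0.8, sketch)
host.name=broker1.example.com   # address the broker binds to and registers in ZooKeeper

The producer's metadata.broker.list would then point at that same
resolvable name rather than localhost.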

Thanks,

Aayush

Kafka Consumer not consuming in webMethods.

Hi Jun/Neha/Team,

We are trying to consume from Kafka using webMethods as our consumer. When we start the consumer, the fetcher and leader threads go into WAITING state and consume nothing; when I run the same consumer in my Eclipse it works fine.

We are running Kafka-0.8-beta now. I suspect it could be either bug KAFKA-618 or KAFKA-914.

Please find the thread dumps attached for your review, and let us know how we can fix it.

Thanks,

Balaji

LeaderNotAvailableException

I have a single-broker test Kafka instance that was running fine on Friday
(basically out-of-the-box configuration with 2 partitions); now I come back
on Monday and producers are unable to send messages.

What else can I look at to debug this, and to prevent it?

I know how to recover by removing the data directories for Kafka and
ZooKeeper to start fresh. But this isn't the first time this has happened,
so I would like to understand it better to feel more comfortable with Kafka.
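
For next time, I plan to capture the partition state while the problem is
live. The state-change log below says "Live brokers are: [Set()]", i.e. the
controller found no registered brokers at election time, which as far as I
understand often points to an expired ZooKeeper session rather than lost
data. A describe of the topic (sketch, assuming 0.8.1's admin tooling and
the topic name from the logs) shows which partitions currently have a
leader and their ISR:

bin/kafka-topics.sh --describe --zookeeper localhost:2181 --topic mytopic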

===================
Producer error (from console producer)
===================
[2014-08-11 19:32:49,781] WARN Error while fetching metadata
[{TopicMetadata for topic mytopic ->
No partition metadata for topic mytopic due to
kafka.common.LeaderNotAvailableException}] for topic [mytopic]: class
kafka.common.LeaderNotAvailableException
(kafka.producer.BrokerPartitionInfo)
[2014-08-11 19:32:49,782] ERROR Failed to collate messages by topic,
partition due to: Failed to fetch topic metadata for topic: mytopic
(kafka.producer.async.DefaultEventHandler)

===============
state-change.log
===============
[2014-08-11 19:12:45,312] TRACE Controller 0 epoch 3 started leader
election for partition [mytopic,0] (state.change.logger)
[2014-08-11 19:12:45,321] ERROR Controller 0 epoch 3 initiated state change
for partition [mytopic,0] from OfflinePartition to OnlinePartition failed
(state.change.logger)
kafka.common.NoReplicaOnlineException: No replica for partition [mytopic,0]
is alive. Live brokers are: [Set()], Assigned replicas are: [List(0)]
at
kafka.controller.OfflinePartitionLeaderSelector.selectLeader(PartitionLeaderSelector.scala:61)
[2014-08-11 19:12:45,312] TRACE Controller 0 epoch 3 started leader
election for partition [mytopic,1] (state.change.logger)
[2014-08-11 19:12:45,321] ERROR Controller 0 epoch 3 initiated state change
for partition [mytopic,1] from OfflinePartition to OnlinePartition failed
(state.change.logger)
kafka.common.NoReplicaOnlineException: No replica for partition [mytopic,1]
is alive. Live brokers are: [Set()], Assigned replicas are: [List(0)]
at
kafka.controller.OfflinePartitionLeaderSelector.selectLeader(PartitionLeaderSelector.scala:61)

===============
controller.log
===============
[2014-08-11 19:12:45,308] DEBUG [OfflinePartitionLeaderSelector]: No broker
in ISR is alive for [mytopic,1]. Pick the leader from the alive assigned
replicas: (kafka.controller.OfflinePartitionLeaderSelector)
[2014-08-11 19:12:45,321] DEBUG [OfflinePartitionLeaderSelector]: No broker
in ISR is alive for [mytopic,0]. Pick the leader from the alive assigned
replicas: (kafka.controller.OfflinePartitionLeaderSelector)