Hi all,
We are testing our production Kafka cluster and are getting this error:
[2015-01-15 19:03:45,057] ERROR Error in acceptor (kafka.network.Acceptor)
java.io.IOException: Too many open files
at sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method)
at sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:241)
at kafka.network.Acceptor.accept(SocketServer.scala:200)
at kafka.network.Acceptor.run(SocketServer.scala:154)
at java.lang.Thread.run(Thread.java:745)
I noticed some other developers have had similar issues; one suggestion was:
"Without knowing the intricacies of Kafka, I think the default open file
descriptor limit is 1024 on Unix. This can be changed by setting a higher ulimit
value (typically 8192, but sometimes even 100000).
Before modifying the ulimit I would recommend you check the number of
sockets stuck in the TIME_WAIT state. In this case, it looks like the broker has
too many open sockets. This could be because you have a rogue client
connecting and disconnecting repeatedly.
You might have to reduce the TIME_WAIT state to 30 seconds or lower."
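So far, this is roughly how we have been counting sockets in the TIME_WAIT state on the broker host (just a sketch; we used netstat, but ss should give the same information):

  # count sockets per TCP state on the broker host
  netstat -an | awk '/^tcp/ {print $6}' | sort | uniq -c

  # or just the TIME_WAIT count
  netstat -an | grep -c TIME_WAIT
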
We increased the open file handle limit by adding this line to /etc/security/limits.conf:
kafka - nofile 100000
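In case it matters, this is how we have been checking whether the new limit actually applies to the running broker (assuming the broker runs as the "kafka" user; <broker-pid> is just a placeholder, and the limit only takes effect after a new login session / broker restart):

  # shell limit seen by the kafka user
  su - kafka -c 'ulimit -n'

  # limit actually in effect for the running broker process
  grep 'open files' /proc/<broker-pid>/limits

  # how many file descriptors the broker currently has open
  ls /proc/<broker-pid>/fd | wc -l
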
Is that the right way to raise the open file descriptor limit? In addition, the
suggestion says to reduce TIME_WAIT; where do we change that setting? Or is
there another solution for this issue?
Thanks,
Alec Li