Version: 4.6.0
Platform: Python 3.12 on Red Hat Enterprise Linux Server release 7.9 (Maipo)
Description:
We recently started using the asyncio version of the redis client and noticed we were getting redis timeout errors (redis.exceptions.TimeoutError: Timeout reading from xx.xx.xx.xx:30433) a lot more frequently. The actual time it took redis to respond with data hadn't actually changed, but some of the requests would just time out. I was able to narrow the error down to the following:
In the sync version of the code, redis uses the low-level OS socket timeout (driven by the socket_timeout parameter), which controls the timeout for every read operation from the socket. For large responses, since the data is read in chunks, the timeout only applies to each individual read operation.
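To illustrate the sync behavior, here is a minimal sketch (not redis-py's actual code; the function, buffer sizes, and 5-second value are made up for illustration). settimeout() restarts the clock on every recv(), so each chunk only has to arrive within the deadline:

import socket

def read_large_response(sock: socket.socket, total: int, chunk: int = 65536) -> bytes:
    # socket_timeout-style behavior: the deadline applies per recv() call,
    # not to the response as a whole.
    sock.settimeout(5.0)
    buf = bytearray()
    while len(buf) < total:
        # Each recv() gets its own fresh 5-second budget; a slow *chunk*
        # raises TimeoutError, but a long stream of fast chunks never does.
        data = sock.recv(min(chunk, total - len(buf)))
        if not data:
            raise ConnectionError("server closed the connection")
        buf.extend(data)
    return bytes(buf)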
In the async version of the code, redis explicitly sets a timeout using asyncio.timeout, but wraps it around multiple socket read operations. So, for large responses, even if each individual chunk read completes quickly, the total time can exceed the timeout. To make matters worse, we yield control back to the event loop between each chunk read, so having lots of concurrent tasks in the loop amplifies the problem.

For example, in AbstractConnection.read_response, async with async_timeout(read_timeout): is wrapped around self._parser.read_response(...). If you're using HiredisParser, then HiredisParser.read_response calls HiredisParser.read_from_socket in a loop, and HiredisParser.read_from_socket calls self._stream.read. This means the async_timeout applies to the entire response, not to each individual socket read operation.
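The async equivalent, again as a rough sketch rather than redis-py's real code (asyncio.timeout() needs Python 3.11+; the async_timeout mentioned above behaves the same way for this purpose). The single deadline covers the whole loop, so enough fast chunk reads, plus the event-loop hops between them, can still blow the budget:

import asyncio

async def read_large_response(reader: asyncio.StreamReader, total: int,
                              chunk: int = 65536) -> bytes:
    buf = bytearray()
    # One deadline for the entire response: the clock does NOT reset
    # between chunks the way a per-recv() socket timeout does.
    async with asyncio.timeout(5.0):
        while len(buf) < total:
            # Every await here also yields to the event loop, so time spent
            # running other tasks counts against this same deadline.
            data = await reader.read(min(chunk, total - len(buf)))
            if not data:
                raise ConnectionError("server closed the connection")
            buf.extend(data)
    return bytes(buf)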
Unexpected exception: ConnectionError("Error while reading from redis-server : (110, 'Connection timed out')")
Traceback (most recent call last):
File "/app/.venv/lib/python3.10/site-packages/redis/asyncio/connection.py", line 543, in read_response
response = await self._parser.read_response(
File "/app/.venv/lib/python3.10/site-packages/redis/_parsers/hiredis.py", line 211, in read_response
await self.read_from_socket()
File "/app/.venv/lib/python3.10/site-packages/redis/_parsers/hiredis.py", line 189, in read_from_socket
buffer = await self._stream.read(self._read_size)
File "/usr/local/lib/python3.10/asyncio/streams.py", line 669, in read
await self._wait_for_data('read')
File "/usr/local/lib/python3.10/asyncio/streams.py", line 501, in _wait_for_data
await self._waiter
File "/usr/local/lib/python3.10/asyncio/selector_events.py", line 862, in _read_ready__data_received
data = self._sock.recv(self.max_size)
TimeoutError: [Errno 110] Connection timed out
Do you get the same trace?
I'd explain my case differently:
1. asyncio (via epoll/kqueue) realises there is some data to read from the socket;
2. socket recv() is invoked with max_size=256KB;
3. recv() tries to read up to the given number of bytes and fails with TimeoutError.
This TimeoutError (https://docs.python.org/3/library/socket.html#socket.timeout) is unexpected, because sockets under asyncio have to be non-blocking. Attempts to receive on them should fail with BlockingIOError, which means: try again later, the data may appear, and kqueue will let us know when it does. If that takes too long, the async_timeout should be triggered.
So I'm confused about why we get a TimeoutError while reading from the low-level socket.
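For what it's worth, a minimal sketch of the distinction (the localhost address is just a placeholder): on a non-blocking socket, recv() with nothing buffered raises BlockingIOError, whereas errno 110 (ETIMEDOUT) is produced by the kernel's TCP layer when retransmissions to the peer give up, and since Python 3.10 an OSError carrying that errno is raised as TimeoutError, which would match the trace above:

import errno
import socket

# Placeholder endpoint; substitute a reachable redis-server address.
sock = socket.create_connection(("127.0.0.1", 6379))
sock.setblocking(False)  # asyncio puts its sockets in non-blocking mode
try:
    sock.recv(262144)  # 256 KiB, like asyncio's max_size
except BlockingIOError:
    # The expected case: no data buffered yet; epoll/kqueue will report
    # readability later and the read is retried.
    print("no data yet, wait for the selector")
except TimeoutError as exc:
    if exc.errno == errno.ETIMEDOUT:
        # Kernel-level TCP timeout surfacing through the non-blocking
        # recv(), independent of any Python-level timeout.
        print("TCP connection timed out (errno 110):", exc)
    else:
        raise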