https://docs.confluent.io/current/kafka-rest/api.html#get--consumers-(string-group_name)-instances-(string-instance)-records appears to have the potential for data loss. If a GET response is lost (for example, the connection drops or the client times out after the proxy has already polled Kafka), the consumer cannot retry the request without the possibility of losing messages: the proxy-side consumer has already advanced past the records it returned, so a repeated fetch starts at the next batch and the lost records never reappear.
The only workaround I've found is to immediately create a new consumer instance any time a fetch times out. The new instance would resume reading from the last committed offset.
One possible way to handle this is discussed in https://www.databasesandlife.com/idempotency/ under "Alternative approaches". The client would include a random request ID with each call. The server would keep a cache keyed on consumer instance ID + request ID (entries could expire relatively quickly). When retrying, the client would resend the same request ID; the server would get a cache hit and respond with the cached contents instead of polling Kafka again.
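To make that idea concrete, here is a minimal sketch of such a server-side dedup cache. It is an illustration only, not proxy code: `handle_records_request` and `fetch_from_kafka` are hypothetical names, and the TTL is an assumed value.

```python
import time

CACHE_TTL_SECONDS = 60     # assumed expiry; entries only need to outlive client retries
_cache = {}                # (instance_id, request_id) -> (timestamp, response)

def handle_records_request(instance_id, request_id, fetch_from_kafka):
    """Serve a fetch idempotently: a retry carrying the same request_id gets the
    cached response instead of triggering a second, record-consuming poll."""
    now = time.time()
    # Evict expired entries.
    for key in [k for k, (ts, _) in _cache.items() if now - ts > CACHE_TTL_SECONDS]:
        del _cache[key]

    key = (instance_id, request_id)
    if key in _cache:
        return _cache[key][1]          # retry: replay the original response
    response = fetch_from_kafka()      # first attempt: actually poll Kafka
    _cache[key] = (now, response)
    return response
```

Because the cache is keyed per consumer instance, deleting or recreating the instance naturally invalidates its entries.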
Proof-of-concept:
(Run against https://github.com/confluentinc/cp-docker-images/tree/5.3.0-post/examples/cp-all-in-one with the appropriate topic created)
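The original snippet is not reproduced here; the following is a minimal reproduction sketch against the REST Proxy v2 API, assuming the proxy is reachable at http://localhost:8082 and that a topic named test_topic already exists (both placeholders).

```python
# Reproduction sketch: fetch records, pretend the response was lost, retry,
# and observe that the first batch does not come back.
import requests

BASE = "http://localhost:8082"   # assumed REST Proxy address (cp-all-in-one default)
TOPIC = "test_topic"             # assumed pre-created topic
V2 = "application/vnd.kafka.v2+json"
JSON_V2 = "application/vnd.kafka.json.v2+json"

# Produce a few messages so there is something to fetch.
requests.post(
    f"{BASE}/topics/{TOPIC}",
    headers={"Content-Type": JSON_V2},
    json={"records": [{"value": {"n": i}} for i in range(5)]},
).raise_for_status()

# Create a consumer instance and subscribe it to the topic.
resp = requests.post(
    f"{BASE}/consumers/poc_group",
    headers={"Content-Type": V2},
    json={"name": "poc_consumer", "format": "json", "auto.offset.reset": "earliest"},
)
resp.raise_for_status()
# Note: in Docker setups base_uri may use an internal hostname and need rewriting.
base_uri = resp.json()["base_uri"]

requests.post(
    f"{base_uri}/subscription",
    headers={"Content-Type": V2},
    json={"topics": [TOPIC]},
).raise_for_status()

def fetch():
    # The first polls can return [] while the group is still joining, so retry a few times.
    for _ in range(5):
        r = requests.get(f"{base_uri}/records",
                         headers={"Accept": JSON_V2},
                         params={"timeout": 3000})
        r.raise_for_status()
        records = r.json()
        if records:
            return records
    return []

# First fetch: the proxy returns the records and advances past them.
first = fetch()
print("first fetch:", [rec["value"] for rec in first])

# Pretend the response above was lost in transit (timeout, dropped connection):
# the client never saw it. Retrying the GET does NOT return the same records,
# so from the client's point of view they are gone.
retry = fetch()
print("retry fetch:", [rec["value"] for rec in retry])

# Clean up the consumer instance.
requests.delete(base_uri, headers={"Content-Type": V2})
```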