
gRPC python server handles one RPS while concurrency is set to 1000 #19941

Closed
alirezastack opened this issue Aug 14, 2019 · 1 comment
@alirezastack

What version of gRPC and what language are you using?

Python 3.6.7
grpcio 1.22.0

What operating system (Linux, Windows,...) and version?

Ubuntu 18.04.2 LTS

What runtime / compiler are you using (e.g. python version or version of gcc)

Python 3.6.7

What did you do?

https://gist.github.com/alirezastack/ec8e4d97dfdfbb8ae1c8724e742bc343

What did you expect to see?

If I set max_workers to 1 and maximum_concurrent_rpcs to 1000, I'd expect the server to handle concurrent requests until resources are exhausted.

What did you see instead?

The server works synchronously and receives one request at a time; the other requests wait until the first one is served. I used sleep(5) in the method call to make the issue easy to trace.

Make sure you include information that can help us debug (full error message, exception listing, stack trace, logs).

This is the stack trace when testing with the ghz gRPC load-testing tool:
https://gist.github.com/alirezastack/bbc480fa91f0202bdab96b591e6865cb

Anything else we should know about your project / environment?

The gRPC server is wrapped in the Cement Python framework, if that helps.

@gnossen
Contributor
gnossen commented Aug 14, 2019

@alirezastack gRPC Python servers are currently all synchronous. That is, each servicer thread you provide in your thread pool handles a single RPC to completion before handling the next. So max_workers effectively dictates your concurrency.
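The "1 thread = 1 RPC" behavior can be reproduced with the standard library alone, no gRPC required (a minimal sketch; the 0.2 s sleep stands in for a slow handler like the reporter's sleep(5)):

```python
import time
from concurrent import futures

def handler(_request):
    # Stand-in for a slow, blocking RPC handler.
    time.sleep(0.2)
    return "ok"

# One worker thread, just like ThreadPoolExecutor(max_workers=1) passed
# to grpc.server(): submitted tasks run strictly one after another.
pool = futures.ThreadPoolExecutor(max_workers=1)

start = time.monotonic()
results = [f.result() for f in [pool.submit(handler, i) for i in range(3)]]
elapsed = time.monotonic() - start

# With one worker the three 0.2 s "RPCs" run back to back (~0.6 s total);
# with max_workers=3 they would overlap and finish in roughly 0.2 s.
print(f"elapsed: {elapsed:.1f}s, results: {results}")
```

Raising max_workers is therefore the lever that actually increases parallelism on a threaded gRPC Python server.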

As per the documentation,

maximum_concurrent_rpcs – The maximum number of concurrent RPCs this server will service before returning RESOURCE_EXHAUSTED status, or None to indicate no limit.

So maximum_concurrent_rpcs gives you a way to set an upper bound on the number of RPCs waiting in the server's queue to be serviced by a thread. As such, concurrency is really min(max_workers, maximum_concurrent_rpcs). Although, in general, I would expect maximum_concurrent_rpcs to be higher than max_workers.
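For illustration, a server combining both knobs might look like the following sketch (the servicer registration line and port are placeholders; this assumes grpcio is installed). It is a configuration fragment, not a complete service:

```python
from concurrent import futures

import grpc

# Up to 4 RPCs execute in parallel, one per worker thread. Once 100 RPCs
# are in flight (executing or queued), further RPCs are rejected with
# RESOURCE_EXHAUSTED instead of queueing without bound.
server = grpc.server(
    futures.ThreadPoolExecutor(max_workers=4),
    maximum_concurrent_rpcs=100,
)
# add_YourServicer_to_server(YourServicer(), server)  # generated registration helper
server.add_insecure_port("[::]:50051")
server.start()
server.wait_for_termination()
```

Here the effective concurrency is min(4, 100) = 4, and maximum_concurrent_rpcs acts purely as a backpressure limit on the queue.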

If the "1 thread = 1 RPC" model doesn't work for you, we do offer experimental gevent support and we are actively working on asyncio as our standard solution for handling all RPCs on a single thread.

@gnossen gnossen closed this as completed Aug 14, 2019
@lock lock bot locked as resolved and limited conversation to collaborators Nov 12, 2019