Add automated benchmarks, stress testing, and other analyses #457

Open
jpsamaroo opened this issue Nov 20, 2023 · 0 comments

As Dagger is a complicated set of interacting components and APIs, it would be very useful to track its performance, scalability, and latency over time, both to ensure that we don't introduce unexpected regressions and to let us make claims about performance and suitability with some confidence.

To that end, I believe it would be valuable, on every merge to master, to:

  • Run the full benchmark suite on various configurations (see the sketch after this list)
  • Stress-test under various configurations to find broken or buggy behavior
  • Perform automated profiling to find the current set of performance hotspots
  • Track precompile and loading latency

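Such a driver doesn't need to be elaborate to start with. Below is a minimal sketch, assuming BenchmarkTools.jl for the measurements; the workload, group names, and output file names are placeholders for illustration, not a proposal for the actual suite:

```julia
# Sketch of a benchmark driver that CI could run on every merge to master.
using Dagger
using BenchmarkTools

suite = BenchmarkGroup()

# Scheduling overhead: latency of spawning and fetching a trivial task.
suite["scheduling"] = BenchmarkGroup()
suite["scheduling"]["spawn_fetch"] = @benchmarkable fetch(Dagger.@spawn 1 + 1)

# A small task-graph workload: per-chunk sums spawned as tasks, combined on the caller.
function tree_sum(chunks)
    parts = [Dagger.@spawn sum(c) for c in chunks]
    return sum(fetch.(parts))
end
suite["workloads"] = BenchmarkGroup()
suite["workloads"]["tree_sum"] =
    @benchmarkable tree_sum(chunks) setup=(chunks = [rand(10_000) for _ in 1:32])

tune!(suite)
results = run(suite; verbose=true)

# Loading latency, measured in a fresh process so that package loading and any
# precompilation are actually paid for, rather than reusing this session.
load_seconds = @elapsed run(`$(Base.julia_cmd()) --project=. -e "using Dagger"`)

# Persist raw results so CI can upload them as artifacts (file names are placeholders).
BenchmarkTools.save("benchmark_results.json", results)
write("load_latency.txt", string(load_seconds))
```

The same driver could be re-run under different configurations (thread counts, number of Distributed workers, etc.) to cover the "various configurations" above, and stress tests could be structured the same way with larger graphs and longer runtimes.
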
To make the collected information useful, we should automatically export the associated data to some persistent storage (say, S3) in raw form, together with any generated plots or aggregate metrics. We can use something like https://github.com/SciML/SciMLBenchmarks.jl/blob/84462b8f1e5c974df9f396ca4d9b4900e1108a21/.buildkite/run_benchmark.yml to upload to S3, and then provide a script or code to download and analyze this data.
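
As a sketch of what that upload step might look like (the bucket name and key layout here are hypothetical, and this assumes the CI environment has the AWS CLI configured with credentials):

```julia
# Upload the raw benchmark artifacts produced by the driver above, keyed by commit.
commit = strip(read(`git rev-parse --short HEAD`, String))
prefix = "s3://dagger-benchmarks/$(commit)"  # hypothetical bucket and key layout

for file in ("benchmark_results.json", "load_latency.txt")
    run(`aws s3 cp $file $prefix/$file`)
end
```

The download-and-analyze script would then pull the relevant prefixes back down (e.g. with `aws s3 sync`) and load the JSON via `BenchmarkTools.load` to compare results across commits.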

An extra bonus would be to publish this data to https://daggerjl.ai/ so that we can show off our performance gains over time.
