
Add per tenant bytes counter #331

Merged · 4 commits · Nov 10, 2020

Conversation

@dgzlopes (Member) commented on Nov 10, 2020:

What this PR does:
This PR adds a per-tenant bytes counter metric. I went with the distributor approach!

The metric is incremented when we cut the complete traces.
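
For context, a minimal sketch of the idea (not the PR's actual code; the package, metric name, namespace, and helper function below are assumptions for illustration) using Prometheus client_golang:

package ingester

import (
    "github.com/prometheus/client_golang/prometheus"
    "github.com/prometheus/client_golang/prometheus/promauto"
)

// Illustrative per-tenant counter. The PR keeps its counter on the ingester
// instance (see the diff below), so the name and namespace here are placeholders.
var metricBytesProcessed = promauto.NewCounterVec(prometheus.CounterOpts{
    Namespace: "tempo",
    Name:      "ingester_bytes_processed_total",
    Help:      "Total bytes processed per tenant.",
}, []string{"tenant"})

// recordCutBytes is a hypothetical helper: called once the complete traces
// for a tenant have been cut and marshalled, with out holding those bytes.
func recordCutBytes(tenantID string, out []byte) {
    metricBytesProcessed.WithLabelValues(tenantID).Add(float64(len(out)))
}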

Which issue(s) this PR fixes:
Fixes #223

Checklist

  • Tests updated
  • Documentation added
  • CHANGELOG.md updated - the order of entries should be [CHANGE], [FEATURE], [ENHANCEMENT], [BUGFIX]

Signed-off-by: Daniel González Lopes <danielgonzalezlopes@gmail.com>
@mdisibio (Contributor) commented:

Hi, thanks for this submission; it's great to add metrics like this that will be useful. Based on the location of this metric, maybe the name "ingester_bytes_written_total" is a better fit? Bytes "processed" might be better reserved for the input side of the ingester rather than the output side.

@@ -111,7 +118,7 @@ func (i *instance) CutCompleteTraces(cutoff time.Duration, immediate bool) error
    if err != nil {
        return err
    }

    i.bytesProcessedTotal.Add(float64(len(out)))
mdisibio (Contributor) commented on this diff:

Should this be after the err check? Depends on how we define the metric, as attempted vs succeeded.

dgzlopes (Member, author) replied:

I went with the attempted approach, but succeeded probably makes more sense here 🤔

btw, I agree with the metric name. Love your proposal! I'll change it.

@joe-elliott (Member) commented on Nov 10, 2020:

So I just want to make a note about this spot:

  • This is occurring after a trace is considered complete, so it will be delayed a bit while the trace is still aggregating spans.
  • On rollout we'd expect this metric to spike while ingesters flush their in-memory traces to disk.
  • This metric is post replication factor (RF), so it will be ~1x, ~2x, or ~3x the number of pushed bytes depending on that setting.

Also, I agree with @mdisibio. May as well put this after the Write to headblock has succeeded.
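
A hedged sketch of the reordering being suggested (the field and method names below are assumptions for illustration, not Tempo's actual API):

    // Sketch only: count bytes after the write to the head block has succeeded,
    // so the metric reflects bytes written rather than bytes attempted.
    if err := i.headBlock.Write(id, out); err != nil { // assumed write call
        return err
    }
    // out holds the marshalled trace bytes; being post-RF, this sums to roughly
    // RF times the pushed bytes across the ingesters.
    i.bytesWrittenTotal.Add(float64(len(out)))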

dgzlopes (Member, author) replied:

Moved :)

Signed-off-by: Daniel González Lopes <danielgonzalezlopes@gmail.com>
@joe-elliott (Member) left a comment:

Looks good. Thanks again!

@joe-elliott merged commit 493406d into grafana:master on Nov 10, 2020.