
Performance results from CLI, Node, Audit and Extension are very different #9957

Closed

rubyzhao opened this issue Nov 11, 2019 · 4 comments

@rubyzhao commented Nov 11, 2019

Provide the steps to reproduce

There are four ways to run Lighthouse:

  1. Audits in Chrome DevTools
  2. Chrome Lighthouse Extension
  3. Node CLI
  4. Node Module

Set up the server

  1. Fetch the HTML page (Canvas_Sin.html).
  2. Change only one line, var Loop=2e7, to see the difference (a minimal sketch of such a page follows this list).
  3. Open it with Live Server [^1].
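
For context, here is a minimal sketch of what such a demo page could look like. It is a hypothetical reconstruction, not the attached Canvas_Sin.html; only the var Loop=2e7 line is taken from the steps above.

```html
<!-- Hypothetical demo page: heavy main-thread work controlled by Loop. -->
<!DOCTYPE html>
<html>
<body>
  <canvas id="c" width="800" height="200"></canvas>
  <script>
    var Loop = 2e7; // the one line changed between runs to vary the CPU work
    var ctx = document.getElementById('c').getContext('2d');
    ctx.beginPath();
    for (var i = 0; i < Loop; i++) {
      // Busy work: evaluate a sine curve many times while drawing a few points.
      var y = 100 + 80 * Math.sin((i / Loop) * 2 * Math.PI);
      if (i % 1e5 === 0) ctx.lineTo((i / Loop) * 800, y);
    }
    ctx.stroke();
  </script>
</body>
</html>
```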

Get the 4 different results

  1. Press F12, open the Audits panel, and run an audit. Result:
    Audit.zip
  2. Run the Lighthouse Extension. Result:
    Extension.zip
  3. Run the Lighthouse CLI: lighthouse http://127.0.0.1:5501/Canvas_Sin.html --view --output-path=LightHouse_Output\Canvas --output=html --only-categories=performance
    Result: Canvas.report.html output.zip
  4. Follow the Node Module instructions (a sketch is shown after this list). Result:
    Node_LightHouseOutput.zip
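
For reference, here is a minimal sketch of the Node Module approach, roughly following Lighthouse's documented programmatic usage; the headless flag, the output file name, and the lack of error handling are illustrative assumptions.

```js
// Sketch: run Lighthouse programmatically against the Live Server URL.
// Assumes the lighthouse and chrome-launcher packages are installed from npm.
const fs = require('fs');
const lighthouse = require('lighthouse');
const chromeLauncher = require('chrome-launcher');

(async () => {
  const chrome = await chromeLauncher.launch({chromeFlags: ['--headless']});
  const options = {
    onlyCategories: ['performance'],
    output: 'html',
    port: chrome.port, // connect to the Chrome instance launched above
  };

  const runnerResult = await lighthouse('http://127.0.0.1:5501/Canvas_Sin.html', options);

  // runnerResult.lhr is the Lighthouse Result object; runnerResult.report is the HTML report.
  console.log('Performance score:', runnerResult.lhr.categories.performance.score * 100);
  fs.writeFileSync('Node_LightHouseOutput.html', runnerResult.report);

  await chrome.kill();
})();
```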

What is the current behavior?

Please see the relevant part of the performance results from the four different methods below. All four runs use the same settings (screenshot attached).

  1. Performance from Audits (DevTools):
    CPU/Memory Power: 680

  2. Performance from Extension:
    CPU/Memory Power: 613

  3. Performance from Node CLI:
    CPU/Memory Power: 540

  4. Performance from Node Module:
    CPU/Memory Power: 707

What is the expected behavior?

The four methods should give very similar results, but there are large differences.

  1. Why is CPU/Memory so different across the four methods? How is it calculated?
  2. Max Potential First Input Delay differs hugely, from 20 ms to 7,820 ms.
  3. For the other performance metrics (First Contentful Paint, First Meaningful Paint, Speed Index, First CPU Idle, and Time to Interactive), Audits is close to the Extension and the CLI is close to the Node Module, but there is roughly a 1.6x difference between the CLI and Audits.

Which method is the best one to use going forward?

Based on the results above, the Node Module gives the best performance numbers and the Extension gives the worst.

Environment Information

  • Lighthouse version: 5.6.0
  • Lighthouse Extension: 5.6.0
  • Chrome version: 78.0.3904.97
  • Node.js version: 12.11.1
  • Operating System: Windows 10 x64, version 1803
  • VS Code version: 1.40.0
  • [^1] Live Server plugin for VS Code: 5.6.1


@patrickhulce (Collaborator) commented

Thanks for filing @rubyzhao!

tl;dr: use the CLI, because it will be the latest LH version and it uses a clean Chrome profile for the most accurate new-user results.

Why is CPU/Memory so different across the four methods? How is it calculated?

It is an ultra-simple benchmark meant to give a very quick baseline for how powerful a machine is. It will vary based on many other factors: current CPU load, other active tabs, remaining battery, etc. The important thing here is the order of magnitude (10 vs. 100 vs. 1000); anything in the 500-700 range is roughly the same.
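
To illustrate the idea only, here is a hypothetical sketch of such a benchmark, not Lighthouse's actual implementation: do a fixed kind of CPU-bound work for a fixed interval and count how much of it completes, so a faster machine produces a larger number.

```js
// Hypothetical sketch only -- not Lighthouse's real benchmark code.
function roughCpuBenchmark(durationMs = 500) {
  const start = Date.now();
  let unitsOfWork = 0;
  while (Date.now() - start < durationMs) {
    // Simple string building as a stand-in for "ultra-simple" CPU work.
    let s = '';
    for (let i = 0; i < 10000; i++) s += String.fromCharCode(97 + (i % 26));
    unitsOfWork++;
  }
  return unitsOfWork; // varies with CPU load, power settings, other open tabs, etc.
}

console.log('rough benchmark score:', roughCpuBenchmark());
```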

Max Potential First Input Delay differs hugely, from 20 ms to 7,820 ms.
For the other performance metrics (First Contentful Paint, First Meaningful Paint, Speed Index, First CPU Idle, and Time to Interactive), Audits is close to the Extension and the CLI is close to the Node Module, but there is roughly a 1.6x difference between the CLI and Audits.

CLI/node module should be the same because they're the same Lighthouse version launching Chrome the same way in the same environment. You're seeing large differences between DevTools/Extension/CLI because they are using different Lighthouse versions with different Chrome profiles and different environments. We have several good documents on the topic of variability that explain some of these differences.

Also, FWIW, when I load this demo page myself, its performance is highly variable on my machine. When the page itself has varying performance characteristics, the measurements will inevitably be variable as well.

@rubyzhao (Author) commented

Thanks for the detailed explanation. LH still gives more consistent results across repeated runs of the same page than Puppeteer or the Chrome console do.

Would you be able to help with the performance numbers from Puppeteer and the Chrome console? I even used a one-line web page to dig into the issue. For each repeated test I clear the console and reload the page. Please see the details below:
puppeteer/puppeteer#5110
puppeteer/puppeteer#5114

If I want to get TaskDuration, domComplete, and similar performance metrics, what is the best way/tool you would suggest?

Thanks in advance.

@patrickhulce (Collaborator) commented

If you want raw timings like that, there's really no better way than running with puppeteer, averaging the results of multiple runs, and discarding outliers, sorry :/
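
A rough sketch of that approach (the URL, run count, and the drop-min-and-max outlier rule are illustrative choices, not specific recommendations):

```js
// Run the page several times with Puppeteer, collect TaskDuration and domComplete,
// drop the fastest and slowest run as a crude outlier filter, and average the rest.
const puppeteer = require('puppeteer');

const URL = 'http://127.0.0.1:5501/Canvas_Sin.html';
const RUNS = 9;

function trimmedMean(values) {
  const kept = [...values].sort((a, b) => a - b).slice(1, -1); // discard min and max
  return kept.reduce((sum, v) => sum + v, 0) / kept.length;
}

(async () => {
  const browser = await puppeteer.launch();
  const taskDurations = [];
  const domCompletes = [];

  for (let i = 0; i < RUNS; i++) {
    const page = await browser.newPage(); // fresh page per run
    await page.goto(URL, {waitUntil: 'networkidle0'});
    const metrics = await page.metrics(); // includes TaskDuration (in seconds)
    const domComplete = await page.evaluate(
      () => performance.timing.domComplete - performance.timing.navigationStart);
    taskDurations.push(metrics.TaskDuration * 1000); // convert to ms
    domCompletes.push(domComplete);
    await page.close();
  }

  console.log('TaskDuration (ms, trimmed mean):', trimmedMean(taskDurations).toFixed(1));
  console.log('domComplete (ms, trimmed mean):', trimmedMean(domCompletes).toFixed(1));
  await browser.close();
})();
```

Dropping only the fastest and slowest run is a crude trim; with more runs, a percentile-based cut is more robust.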

@rubyzhao (Author) commented

Thanks for your great help.
