

Add support for TensorRT 10 #68715

Draft
wants to merge 12 commits into base: master

Commits on May 28, 2024

  1. [TRT10] Add support for TensorRT 10.0

    Signed-off-by: Meenakshi Venkataraman <meenakshiv@nvidia.com>
    meena-at-work authored and benbarsdell committed May 28, 2024
    Commit: 94d1bb9
  2. Use same inc file for TRT10 as for TRT8 and below

    Signed-off-by: Meenakshi Venkataraman <meenakshiv@nvidia.com>
    meena-at-work authored and benbarsdell committed May 28, 2024
    Commit: 864671b
  3. Commit fc5523f
  4. Fix TF-TRT shape layer int64/int32 mismatches

    - TRT10 changed the output dtype of shape layers from int32 to int64,
      which causes mismatches with other layers. This commit adds cast
      layers to avoid the mismatches.
    - Note that this also adds support for the out_dtype attribute of TF's
      Shape operator.
    benbarsdell committed May 28, 2024
    Commit: 711dfa5
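The dtype fix in commit 4 can be sketched in plain Python (this is illustrative, not the actual TF-TRT converter code; the `Tensor`, `shape_layer`, and `maybe_cast` names are hypothetical):

```python
from dataclasses import dataclass

@dataclass
class Tensor:
    values: list
    dtype: str

def shape_layer(input_shape, trt_major: int) -> Tensor:
    # TRT >= 10 emits int64 shape tensors; earlier versions emit int32.
    return Tensor(list(input_shape), "int64" if trt_major >= 10 else "int32")

def maybe_cast(t: Tensor, wanted: str) -> Tensor:
    # Mimics inserting a cast layer only when the dtypes disagree,
    # so int64 shape outputs can feed int32 consumers.
    return Tensor(t.values, wanted) if t.dtype != wanted else t

shape = shape_layer((2, 3, 4), trt_major=10)
fixed = maybe_cast(shape, "int32")
```

In the real converter the cast would be a TensorRT layer in the network rather than a host-side conversion, but the decision logic is the same: cast only on mismatch.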
  5. Disable implicit batch in TF-TRT tests for TRT10

    - Also avoids runtime warnings about the hasImplicitBatchDimension
      API.
    benbarsdell committed May 28, 2024
    Commit: b81ae11
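Version-gating the test configurations, as commit 5 describes, might look like the following sketch (hypothetical helper, not the actual test code):

```python
def batch_modes(trt_version=(10, 0, 0)):
    """Return the batch modes a TF-TRT test should exercise."""
    modes = ["explicit"]
    if trt_version < (10, 0, 0):
        # Implicit batch only exists before TRT 10; exercising it on
        # TRT 10 would also trigger warnings about the (removed)
        # hasImplicitBatchDimension API.
        modes.append("implicit")
    return modes
```

Comparing version tuples lexicographically keeps the gate correct for patch releases like (8, 6, 1).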
  6. Commit be1316f
  7. Fix bug in TF-TRT with TRT10 when finding engines

    - It seems that the number of inputs no longer needs to be divided
      by the number of profiles. This manifested as a confusing bug
      because it caused values in the array of min/max/opt to be written
      over the top of existing values instead of at the end, and this
      subsequently prevented shapes from being matched when looking up
      engines.
    benbarsdell committed May 28, 2024
    Commit: e99530e
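The overwrite bug in commit 7 can be reproduced with a small sketch (illustrative only; `store_shapes` is a hypothetical stand-in for the engine-lookup code). Pre-TRT10 engines report each input once per optimization profile, so the true input count was `len(shapes) // n_profiles`; TRT 10 reports each input once, so dividing anyway shrinks the buffer and later writes clobber earlier min/max/opt entries:

```python
def store_shapes(shapes, n_profiles, divide_by_profiles):
    # divide_by_profiles models the pre-TRT10 assumption that the
    # engine enumerates inputs once per profile.
    n_inputs = len(shapes) // n_profiles if divide_by_profiles else len(shapes)
    buf = [None] * n_inputs
    for i, s in enumerate(shapes):
        buf[i % n_inputs] = s  # with a too-small buffer, this overwrites
    return buf

# TRT 10: 4 inputs reported once each, 2 profiles configured.
correct = store_shapes(["a", "b", "c", "d"], n_profiles=2, divide_by_profiles=False)
buggy = store_shapes(["a", "b", "c", "d"], n_profiles=2, divide_by_profiles=True)
```

With the stale division, the buffer holds only the last inputs written, so recorded shapes never match at engine-lookup time, which is the confusing symptom the commit message describes.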
  8. Commit 5131ab9
  9. Commit a0f1c62
  10. Commit ed93558
  11. Commit 210c7b1
  12. Change TRT to default to use_dynamic_shape=True

    - use_dynamic_shape=False is not supported since TensorRT 10.0.
    - Also expands the related error message.
    benbarsdell committed May 28, 2024
    Commit: 699a77b
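The default flip and expanded error message in commit 12 can be sketched like this (hypothetical function and message, not the actual TF-TRT code):

```python
def make_trt_conversion_params(use_dynamic_shape=True, trt_major=10):
    # use_dynamic_shape now defaults to True; the False path (implicit
    # batch mode) was removed in TensorRT 10.0, so reject it with a
    # message that explains why rather than failing later.
    if not use_dynamic_shape and trt_major >= 10:
        raise ValueError(
            "use_dynamic_shape=False is not supported since TensorRT "
            "10.0; rebuild the engine with dynamic shapes enabled."
        )
    return {"use_dynamic_shape": use_dynamic_shape}
```

Raising at configuration time, with the TRT version constraint spelled out, surfaces the incompatibility where the user can act on it.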