
Add from_inference to KeyPoints #1147

Merged: 8 commits merged into develop on Apr 29, 2024
Conversation

@LinasKo (Collaborator) commented Apr 26, 2024

Description

Adds a from_inference constructor to KeyPoints.

Tested with:

  • Remote server, via an inference_sdk.InferenceHTTPClient client
  • Locally running inference server, same client
  • Model from inference.get_model
  • Model from roboflow-python

Untested:

  • Running a local model on a local inference server without making any network requests; if that's possible, I'm not aware of it.

⚠️ Tests often involve poorly trained models or incompatible images. Still, I checked every result state I could find for each input source (no detections, some detections).
⚠️ roboflow has some inconsistencies in its responses (e.g. reporting a ClassificationModel), but the payload is mostly the same. The user also needs to call result = model.predict(img_url, hosted=True).json()["predictions"][0] before calling from_inference, and that is not documented here.
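To make the undocumented roboflow step concrete, here is a minimal sketch of the extraction the user has to perform before calling from_inference. The mock payload below is an illustrative assumption about the hosted response shape, not taken from the roboflow docs:

```python
# Sketch of the undocumented roboflow preprocessing step:
# the hosted .predict(...).json() payload wraps per-image results in a
# "predictions" list, and from_inference expects a single entry.
# All field names in this mock payload are illustrative assumptions.
mock_response = {
    "predictions": [
        {
            "predictions": [  # per-object keypoint detections
                {
                    "x": 100, "y": 120, "width": 50, "height": 80,
                    "confidence": 0.9,
                    "class": "horse",
                    "keypoints": [
                        {"x": 110, "y": 130, "confidence": 0.8,
                         "class_name": "nose"},
                    ],
                }
            ],
            "image": {"width": 640, "height": 480},
        }
    ]
}

# Equivalent of: model.predict(img_url, hosted=True).json()["predictions"][0]
result = mock_response["predictions"][0]
# `result` is now a single-image payload of the kind
# sv.KeyPoints.from_inference is given in the test script below.
```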

Type of change

Please delete options that are not relevant.

  • Bug fix (non-breaking change which fixes an issue)
  • New feature (non-breaking change which adds functionality)

How has this change been tested? Please provide a testcase or example of how you tested the change.

It's expected that your prod Roboflow API key is set as the ROBOFLOW_API_KEY env var (loaded from .env via dotenv).

import os
from pathlib import Path

import cv2
import requests

import supervision as sv
from roboflow import Roboflow


from dotenv import load_dotenv
load_dotenv(override=True)


def download_img(url: str, out_path: Path) -> None:
    if out_path.exists():
        print(f"File already exists: {out_path}")
        return
    r = requests.get(url, timeout=30)
    r.raise_for_status()
    with open(out_path, "wb") as f:
        f.write(r.content)


rf = Roboflow(api_key=os.getenv("ROBOFLOW_API_KEY"))
project = rf.workspace("nicolai-hoirup-nielsen").project("horse-pose")
model = project.version(1).model

img_url = "https://t3.ftcdn.net/jpg/01/73/37/16/360_F_173371622_02A2qGqjhsJ5SWVhUPu0t9O9ezlfvF8l.jpg"
img_path = Path("raccoon.jpg")
download_img(img_url, img_path)
img = cv2.imread(str(img_path))

result = model.predict(img_url, hosted=True).json()["predictions"][0]
keypoints = sv.KeyPoints.from_inference(result)

ann_point_large = sv.VertexAnnotator(color=sv.Color.ROBOFLOW, radius=5)
ann_point_small = sv.VertexAnnotator(color=sv.Color.WHITE, radius=3)

# Option 1: Use a predefined skeleton
ann_skeleton = sv.EdgeAnnotator(
    color=sv.Color.ROBOFLOW,
    thickness=5,
    # edges=Skeleton.COCO.value
)

# Option 2: No skeleton
# ann_skeleton = sv.EdgeAnnotator(
#     color=sv.Color.ROBOFLOW,
#     thickness=5,
#     edges=[]
# )

# Option 3: Figure out automatically
# ann_skeleton = sv.EdgeAnnotator(
#     color=sv.Color.ROBOFLOW,
#     thickness=5
# )

# Option 4: Take a guess (connect sequential points)
# TODO: remove COCO from Skeleton before running
# ann_skeleton = sv.EdgeAnnotator(
#     color=sv.Color.ROBOFLOW,
#     thickness=5
# )

# Draw
try:
    ann_skeleton.annotate(img, keypoints)
    ann_point_large.annotate(img, keypoints)
    ann_point_small.annotate(img, keypoints)
except Exception as e:
    print("Caught exception while annotating: \n", e)

img = cv2.resize(img, (1024, 680))
cv2.imshow('frame', img)
cv2.waitKey(0)

Inference:

import os
import numpy as np
import cv2

import supervision as sv
from supervision.keypoint.skeletons import Skeleton
from supervision.assets import download_assets, VideoAssets
from inference_sdk import InferenceHTTPClient
from inference import get_model

from dotenv import load_dotenv
load_dotenv(override=True)


def do_inference_http(img: np.ndarray) -> dict:
    API_KEY = os.getenv("ROBOFLOW_API_KEY")
    assert API_KEY, "Please set ROBOFLOW_API_KEY"

    client = InferenceHTTPClient(
        api_url="http://localhost:9001",
        api_key=API_KEY
    ).select_api_v0()

    result = client.infer(img, model_id="horse-pose/3")
    assert not isinstance(result, list), "I assume we don't support this"
    return result

def do_inference_local(img: np.ndarray) -> dict:
    model = get_model("horse-pose/3")
    result = model.infer(img)[0]
    return result

download_assets(VideoAssets.PEOPLE_WALKING)


# cap = cv2.VideoCapture(0)
cap = cv2.VideoCapture(VideoAssets.PEOPLE_WALKING.value)
i = 0
while True:
    ret, frame = cap.read()
    if not ret:
        break

    result = do_inference_local(frame)
    # pp(result.dict())
    keypoints = sv.KeyPoints.from_inference(result)

    ann_point_large = sv.VertexAnnotator(color=sv.Color.ROBOFLOW, radius=5)
    ann_point_small = sv.VertexAnnotator(color=sv.Color.WHITE, radius=3)

    # Option 1: Use a predefined skeleton
    ann_skeleton = sv.EdgeAnnotator(
        color=sv.Color.ROBOFLOW,
        thickness=5,
        edges=Skeleton.COCO.value
    )

    # Option 2: No skeleton
    # ann_skeleton = sv.EdgeAnnotator(
    #     color=sv.Color.ROBOFLOW,
    #     thickness=5,
    #     edges=[]
    # )

    # Option 3: Figure out automatically
    # ann_skeleton = sv.EdgeAnnotator(
    #     color=sv.Color.ROBOFLOW,
    #     thickness=5
    # )

    # Option 4: Take a guess (connect sequential points)
    # TODO: remove COCO from Skeleton before running
    # ann_skeleton = sv.EdgeAnnotator(
    #     color=sv.Color.ROBOFLOW,
    #     thickness=5
    # )

    # Draw
    try:
        ann_skeleton.annotate(frame, keypoints)
        ann_point_large.annotate(frame, keypoints)
        ann_point_small.annotate(frame, keypoints)
    except Exception as e:
        print("Caught exception while annotating: \n", e)

    frame = cv2.resize(frame, (1024, 680))
    cv2.imshow('frame', frame)

    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

cap.release()
cv2.destroyAllWindows()

Any specific deployment considerations

For example, documentation changes, usability, usage/costs, secrets, etc.

Docs

  • Docs updated? What were the changes:

@LinasKo LinasKo requested a review from SkalskiP April 26, 2024 15:19
Review comment on this docstring example:

from inference import get_model

image = cv2.imread(<SOURCE_IMAGE_PATH>)
model = get_model(model_id="yolov8s-640")

Collaborator:

The problem with this docstring is that "yolov8s-640" is an object detection model.

Collaborator Author (@LinasKo):

Good catch. Putting a placeholder "<POSE_MODEL_ID>" in until we have a well-trained COCO human-pose model on prod.

@SkalskiP (Collaborator):

@LinasKo looks like there are some conflicts ;)

Linas Kondrackis and others added 3 commits April 29, 2024 09:46
* Example incorrectly suggested yolov8s-640
* Replaced with "<POSE_MODEL_ID>" until we have a COCO human pose model
  on prod. I don't think "horse-pose/3" is helpful - the use case is
  uncommon and the training quality is poor.
@LinasKo (Collaborator Author) commented Apr 29, 2024

Solved. @SkalskiP, ready for review.

@LinasKo (Collaborator Author) commented Apr 29, 2024

@SkalskiP Once again, ready for review.

I've tested that it works with inference_sdk, inference, and roboflow.

@SkalskiP (Collaborator):

I tested this PR using this Colab: https://colab.research.google.com/drive/1udslR-XHRRcfT4CPkdEoR_O6mwiNk3kr?usp=sharing. Everything works.

@SkalskiP SkalskiP merged commit 71200a5 into develop Apr 29, 2024
9 checks passed
2 participants