Releases: roboflow/supervision

supervision-0.5.0

10 Apr 22:07
dba4d9f

🚀 Added

  • Detections.mask to enable segmentation support. (#58)
  • MaskAnnotator to allow easy Detections.mask annotation. (#58)
  • Detections.from_sam to enable native Segment Anything Model (SAM) support. (#58)
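
👉 Code example

A minimal sketch of how the new segmentation pieces fit together. The checkpoint file name, image path, and SamAutomaticMaskGenerator usage are assumptions; keyword names follow the current supervision API and may differ slightly:

import cv2
import supervision as sv
from segment_anything import SamAutomaticMaskGenerator, sam_model_registry

# load an image (hypothetical path) and convert to RGB for SAM
image_bgr = cv2.imread("image.jpg")
image_rgb = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2RGB)

# generate masks with SAM (assumed checkpoint file)
sam = sam_model_registry["vit_h"](checkpoint="sam_vit_h_4b8939.pth")
mask_generator = SamAutomaticMaskGenerator(sam)
sam_result = mask_generator.generate(image_rgb)

# convert SAM output into Detections (mask populated) and draw the masks
detections = sv.Detections.from_sam(sam_result)
mask_annotator = sv.MaskAnnotator()
annotated_image = mask_annotator.annotate(scene=image_bgr.copy(), detections=detections)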

🌱 Changed

  • Detections.area behaviour to work not only with boxes but also with masks. (#58)

๐Ÿ† Contributors

supervision-0.4.0

05 Apr 15:33
bc12a8e

🚀 Added

  • Detections.empty to allow easy creation of empty Detections objects. (#48)
  • Detections.from_roboflow to allow easy creation of Detections objects from Roboflow API inference results. (#56)
  • plot_images_grid to allow easy plotting of multiple images on a single plot. (#56)
  • Initial support for the Pascal VOC XML format with the detections_to_voc_xml method. (#56)
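
👉 Code example

A short sketch combining two of the new helpers; the plot_images_grid argument names are assumed from the current API, and the placeholder frames are made up:

import numpy as np
import supervision as sv

# an empty Detections object - a convenient fallback when a model returns nothing
detections = sv.Detections.empty()
print(len(detections))  # 0

# plot several images on a single grid (here: four blank placeholder frames)
frames = [np.zeros((480, 640, 3), dtype=np.uint8) for _ in range(4)]
sv.plot_images_grid(images=frames, grid_size=(2, 2), size=(12, 12))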

🌱 Changed

  • show_frame_in_notebook refactored and renamed to plot_image. (#56)

๐Ÿ† Contributors

supervision-0.3.2

24 Mar 16:49
2df4261

🌱 Changed

  • Dropped the requirement for class_id in sv.Detections to make it more flexible. (#50)
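
👉 Code example

For illustration, a Detections object can now be built from boxes alone; the coordinates below are made up:

import numpy as np
import supervision as sv

# class_id is no longer required - boxes alone are enough
detections = sv.Detections(
    xyxy=np.array([[10, 10, 120, 200], [50, 60, 300, 400]], dtype=np.float32)
)
print(detections.class_id)  # None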

๐Ÿ† Contributors

supervision-0.3.1

14 Mar 13:46

🌱 Changed

  • Detections.with_nms now supports both class-agnostic and non-class-agnostic cases. (#36)
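
👉 Code example

A minimal sketch of both modes; the boxes and threshold are made up, and keyword names follow the current API:

import numpy as np
import supervision as sv

# two heavily overlapping boxes with different class ids (made-up values)
detections = sv.Detections(
    xyxy=np.array([[10, 10, 100, 100], [12, 12, 98, 98]], dtype=np.float32),
    confidence=np.array([0.9, 0.8]),
    class_id=np.array([0, 1]),
)

# per-class NMS keeps both boxes because their classes differ ...
per_class = detections.with_nms(threshold=0.5)
# ... while class-agnostic NMS suppresses the weaker overlapping box
class_agnostic = detections.with_nms(threshold=0.5, class_agnostic=True)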

๐Ÿ› ๏ธ Fixed

  • PolygonZone no longer throws an exception when an object touches the bottom edge of the image. (#41)
  • Detections.with_nms no longer throws an exception when Detections is empty. (#42)

๐Ÿ† Contributors

supervision-0.3.0

08 Mar 09:49
ac16582

🚀 Added

New methods in sv.Detections API:

  • from_transformers - convert Object Detection 🤗 Transformers results into sv.Detections
  • from_detectron2 - convert Detectron2 results into sv.Detections
  • from_coco_annotations - convert COCO annotations into sv.Detections
  • area - dynamically calculated property returning the bbox area
  • with_nms - initial (class-agnostic only) implementation of sv.Detections NMS
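
👉 Code example

A sketch of the 🤗 Transformers path, assuming a DETR checkpoint from the Hugging Face Hub and a local image file; from_transformers consumes the post-processed result dictionary:

import torch
import supervision as sv
from PIL import Image
from transformers import DetrForObjectDetection, DetrImageProcessor

image = Image.open("image.jpg")  # hypothetical input image
processor = DetrImageProcessor.from_pretrained("facebook/detr-resnet-50")
model = DetrForObjectDetection.from_pretrained("facebook/detr-resnet-50")

with torch.no_grad():
    outputs = model(**processor(images=image, return_tensors="pt"))

# convert raw outputs into the boxes / scores / labels dict expected by from_transformers
width, height = image.size
results = processor.post_process_object_detection(
    outputs, target_sizes=torch.tensor([[height, width]]), threshold=0.5
)[0]

detections = sv.Detections.from_transformers(results)
print(detections.area)  # dynamically computed bbox areas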

🌱 Changed

  • Make sv.Detections.confidence field Optional.

๐Ÿ† Contributors

supervision-0.2.0

07 Feb 22:11
2e76e3d

🔪 Killer features

  • Support for PolygonZone and PolygonZoneAnnotator 🔥
👉 Code example
import numpy as np
import supervision as sv
from ultralytics import YOLO

# initiate polygon zone
polygon = np.array([
    [1900, 1250],
    [2350, 1250],
    [3500, 2160],
    [1250, 2160]
])
video_info = sv.VideoInfo.from_video_path(MALL_VIDEO_PATH)
zone = sv.PolygonZone(polygon=polygon, frame_resolution_wh=video_info.resolution_wh)

# initiate annotators
box_annotator = sv.BoxAnnotator(thickness=4, text_thickness=4, text_scale=2)
zone_annotator = sv.PolygonZoneAnnotator(zone=zone, color=sv.Color.white(), thickness=6, text_thickness=6, text_scale=4)

# extract video frame
generator = sv.get_video_frames_generator(MALL_VIDEO_PATH)
iterator = iter(generator)
frame = next(iterator)

# detect
model = YOLO('yolov8s.pt')
results = model(frame, imgsz=1280)[0]
detections = sv.Detections.from_yolov8(results)
detections = detections[detections.class_id == 0]
zone.trigger(detections=detections)

# annotate
labels = [f"{model.names[class_id]} {confidence:0.2f}" for _, confidence, class_id, _ in detections]
frame = box_annotator.annotate(scene=frame, detections=detections, labels=labels)
frame = zone_annotator.annotate(scene=frame)


  • Advanced sv.Detections filtering with pandas-like API.
detections = detections[(detections.class_id == 0) & (detections.confidence > 0.5)]
  • Improved integration with YOLOv5 and YOLOv8 models.
import torch
import supervision as sv

model = torch.hub.load('ultralytics/yolov5', 'yolov5x6')
results = model(frame, size=1280)
detections = sv.Detections.from_yolov5(results)

from ultralytics import YOLO
import supervision as sv

model = YOLO('yolov8s.pt')
results = model(frame, imgsz=1280)[0]
detections = sv.Detections.from_yolov8(results)

🚀 Added

  • supervision.get_polygon_center function - takes in a polygon as a 2-dimensional numpy.ndarray and returns the center of the polygon as a Point object
  • supervision.draw_polygon function - draw a polygon on a scene
  • supervision.draw_text function - draw text on a scene
  • supervision.ColorPalette.default() - class method - to generate default ColorPalette
  • supervision.generate_2d_mask function - generate a 2D mask from a polygon
  • supervision.PolygonZone class - to define polygon zones and validate if supervision.Detections are in the zone
  • supervision.PolygonZoneAnnotator class - to draw supervision.PolygonZone on scene
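
👉 Code example

A small sketch of the new drawing utilities on a blank scene; the polygon coordinates are made up and argument names are assumed from the current API:

import numpy as np
import supervision as sv

scene = np.zeros((720, 1280, 3), dtype=np.uint8)
polygon = np.array([
    [200, 200],
    [1000, 200],
    [1000, 600],
    [200, 600]
])

# draw the polygon and label it at its center
center = sv.get_polygon_center(polygon=polygon)
scene = sv.draw_polygon(scene=scene, polygon=polygon, color=sv.Color.white(), thickness=4)
scene = sv.draw_text(scene=scene, text="zone", text_anchor=center, text_color=sv.Color.white())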

🌱 Changed

  • VideoInfo API - change the property name resolution -> resolution_wh to make it more descriptive; convert VideoInfo to dataclass
  • process_frame API - change argument name frame -> scene to make it consistent with other classes and methods
  • LineCounter API - rename class LineCounter -> LineZone to make it consistent with PolygonZone
  • LineCounterAnnotator API - rename class LineCounterAnnotator -> LineZoneAnnotator
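
👉 Code example

A sketch of the renamed LineZone / LineZoneAnnotator pair; coordinates and detections below are made up, keyword names are assumed from the current API, and counting relies on tracker_id being set:

import numpy as np
import supervision as sv

# a horizontal counting line across the frame
line_zone = sv.LineZone(start=sv.Point(x=0, y=500), end=sv.Point(x=1280, y=500))
line_annotator = sv.LineZoneAnnotator(thickness=4, text_thickness=4, text_scale=2)

frame = np.zeros((720, 1280, 3), dtype=np.uint8)
detections = sv.Detections(
    xyxy=np.array([[100, 400, 200, 480]], dtype=np.float32),
    confidence=np.array([0.9]),
    class_id=np.array([0]),
    tracker_id=np.array([1]),
)

# update in/out counts and draw the line with its counters
line_zone.trigger(detections=detections)
frame = line_annotator.annotate(frame=frame, line_counter=line_zone)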

๐Ÿ† Contributors

supervision-0.1.0

19 Jan 00:59
0e4c97a
Pre-release

🚀 Added

  • ⓒ Add project license
  • 🎨 DEFAULT_COLOR_PALETTE, Color, and ColorPalette classes
  • 📐 initial implementation of Point, Vector, and Rect classes
  • 🎬 VideoInfo and VideoSink classes as well as get_video_frames_generator
  • 📓 show_frame_in_notebook util
  • 🖌️ draw_line, draw_rectangle, draw_filled_rectangle utils added
  • 📦 Initial version of Detections and BoxAnnotator added
  • 🧮 initial implementation of LineCounter and LineCounterAnnotator classes
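
👉 Code example

A minimal sketch of the video utilities re-encoding a clip frame by frame; the file paths are placeholders:

import supervision as sv

SOURCE_VIDEO_PATH = "input.mp4"   # placeholder path
TARGET_VIDEO_PATH = "output.mp4"  # placeholder path

video_info = sv.VideoInfo.from_video_path(SOURCE_VIDEO_PATH)

# VideoSink is a context manager that writes frames with the source video's properties
with sv.VideoSink(TARGET_VIDEO_PATH, video_info) as sink:
    for frame in sv.get_video_frames_generator(SOURCE_VIDEO_PATH):
        sink.write_frame(frame)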

๐Ÿ† Contributors

@SkalskiP