AS-One: A Modular Library for YOLO Object Detection and Object Tracking
Table of Contents
Introduction
Prerequisites
Clone the Repo
Installation
Linux
Windows 10/11
MacOS
Running AS-One
Sample Code Snippets
Model Zoo
1. Introduction
UPDATE: YOLO-NAS is out!
AS-One is a Python wrapper that brings multiple detection and tracking algorithms together in one place. Trackers such as ByteTrack, DeepSORT, or NorFair can be integrated with different versions of YOLO in a few lines of code.
The wrapper provides YOLO models in ONNX, PyTorch, and CoreML flavors, and we plan to support future versions of YOLO as they are released.
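For a quick taste of the API, pairing a tracker with a detector is a single constructor call; a minimal sketch using flags that appear in the snippets below:
import asone
from asone import ASOne

# Pair any supported tracker with any supported detector via flags
detect = ASOne(tracker=asone.BYTETRACK, detector=asone.YOLOV7_PYTORCH, use_cuda=True)
# Swapping the tracker is just a different flag
detect = ASOne(tracker=asone.DEEPSORT, detector=asone.YOLOV7_PYTORCH, use_cuda=True)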
This is One Library for most of your computer vision needs.
If you would like to dive deeper into YOLO object detection and tracking, check out our courses and projects.
Watch the step-by-step tutorial
2. Prerequisites
If you want to use a GPU, make sure the GPU drivers are installed on your system. Follow the driver installation guide for further instructions.
If you are on Windows, make sure MS Build Tools are installed.
Download Git for Windows if it is not already installed.
3. Clone the Repo
Navigate to an empty folder of your choice.
git clone https://github.com/augmentedstartups/AS-One.git
Change directory to AS-One:
cd AS-One
4. Installation
For Linux
python3 -m venv .env
source .env/bin/activate
pip install numpy Cython
pip install cython-bbox asone onnxruntime-gpu==1.12.1
pip install super-gradients==3.1.1
# for CPU
pip install torch torchvision
# for GPU
pip install torch torchvision --extra-index-url https://download.pytorch.org/whl/cu113
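To confirm that the GPU build of PyTorch is active, an optional sanity check (not part of the original install steps):
python -c "import torch; print(torch.cuda.is_available())"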
For Windows 10/11
python -m venv .env
.env\Scripts\activate
pip install numpy Cython
pip install lap
pip install -e git+https://github.com/samson-wang/cython_bbox.git#egg=cython-bbox
pip install asone onnxruntime-gpu==1.12.1
pip install super-gradients==3.1.1
# for CPU
pip install torch torchvision
# for GPU
pip install torch torchvision --extra-index-url https://download.pytorch.org/whl/cu113
or
pip install torch==1.10.1+cu113 torchvision==0.11.2+cu113 torchaudio==0.10.1+cu113 -f https://download.pytorch.org/whl/cu113/torch_stable.html
For MacOS
python3 -m venv .env
source .env/bin/activate
pip install numpy Cython
pip install cython-bbox asone
pip install super-gradients==3.1.1
# for CPU
pip install torch torchvision
5. Running AS-One
Run main.py to test the tracker on the data/sample_videos/test.mp4 video:
python main.py data/sample_videos/test.mp4
Run in Google Colab
6. Sample Code Snippets
6.1. Object Detection
import asone
from asone import utils
from asone import ASOne
import cv2

video_path = 'data/sample_videos/test.mp4'
detector = ASOne(detector=asone.YOLOV7_PYTORCH, use_cuda=True)  # Set use_cuda to False for CPU
filter_classes = ['car']  # Set to None to detect all classes

cap = cv2.VideoCapture(video_path)
while True:
    ret, frame = cap.read()
    if not ret:
        break

    # dets is an array of detections: [x1, y1, x2, y2, score, class_id]
    dets, img_info = detector.detect(frame, filter_classes=filter_classes)

    bbox_xyxy = dets[:, :4]
    scores = dets[:, 4]
    class_ids = dets[:, 5]

    frame = utils.draw_boxes(frame, bbox_xyxy, class_ids=class_ids)
    cv2.imshow('result', frame)
    if cv2.waitKey(25) & 0xFF == ord('q'):
        break

cap.release()
cv2.destroyAllWindows()
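If you would rather save the annotated video than display it, OpenCV's standard VideoWriter works alongside the loop above; a minimal sketch where the output path and mp4v codec are arbitrary choices, not part of the AS-One API:
import cv2

cap = cv2.VideoCapture('data/sample_videos/test.mp4')
fps = cap.get(cv2.CAP_PROP_FPS)
width = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))
height = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))
writer = cv2.VideoWriter('data/results/detections.mp4',
                         cv2.VideoWriter_fourcc(*'mp4v'), fps, (width, height))
# Inside the detection loop, after utils.draw_boxes:
#     writer.write(frame)
writer.release()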
Run asone/demo_detector.py to test the detector:
# run on gpu
python -m asone.demo_detector data/sample_videos/test.mp4
# run on cpu
python -m asone.demo_detector data/sample_videos/test.mp4 --cpu
6.1.1. Use Custom-Trained Weights for the Detector
Use custom weights for a detector model trained on your own data by simply providing the path to the weights file.
import asone
from asone import utils
from asone import ASOne
import cv2

video_path = 'data/sample_videos/license_video.webm'
detector = ASOne(detector=asone.YOLOV7_PYTORCH, weights='data/custom_weights/yolov7_custom.pt', use_cuda=True)  # Set use_cuda to False for CPU
class_names = ['license_plate']  # your custom classes list

cap = cv2.VideoCapture(video_path)
while True:
    ret, frame = cap.read()
    if not ret:
        break

    dets, img_info = detector.detect(frame)

    bbox_xyxy = dets[:, :4]
    scores = dets[:, 4]
    class_ids = dets[:, 5]

    # Pass your custom class names so they are drawn on the result video
    frame = utils.draw_boxes(frame, bbox_xyxy, class_ids=class_ids, class_names=class_names)
    cv2.imshow('result', frame)
    if cv2.waitKey(25) & 0xFF == ord('q'):
        break

cap.release()
cv2.destroyAllWindows()
6.1.2. Changing Detector Models
Change the detector by simply changing the detector flag. The flags are provided in the benchmark tables.
Our library now supports YOLOv5, YOLOv7, and YOLOv8 on macOS.
# Change detector
detector = ASOne(detector=asone.YOLOX_S_PYTORCH, use_cuda=True)
# For macOS
# YOLOv5
detector = ASOne(detector=asone.YOLOV5X_MLMODEL)
# YOLOv7
detector = ASOne(detector=asone.YOLOV7_MLMODEL)
# YOLOv8
detector = ASOne(detector=asone.YOLOV8L_MLMODEL)
6.2. Object Tracking
Use the tracker on a sample video:
import asone
from asone import ASOne

# Instantiate ASOne object
detect = ASOne(tracker=asone.BYTETRACK, detector=asone.YOLOV7_PYTORCH, use_cuda=True)  # set use_cuda=False to use CPU
filter_classes = ['person']  # set to None to track all classes

# ##############################################
# To track using a video file
# ##############################################
# Get the tracking generator
track = detect.track_video('data/sample_videos/test.mp4', output_dir='data/results', save_result=True, display=True, filter_classes=filter_classes)

# Loop over track to retrieve the outputs of each frame
for bbox_details, frame_details in track:
    bbox_xyxy, ids, scores, class_ids = bbox_details
    frame, frame_num, fps = frame_details
    # Do anything with bboxes here

# ##############################################
# To track using a webcam
# ##############################################
# Get the tracking generator
track = detect.track_webcam(cam_id=0, output_dir='data/results', save_result=True, display=True, filter_classes=filter_classes)

# Loop over track to retrieve the outputs of each frame
for bbox_details, frame_details in track:
    bbox_xyxy, ids, scores, class_ids = bbox_details
    frame, frame_num, fps = frame_details
    # Do anything with bboxes here

# ##############################################
# To track using a web stream
# ##############################################
# Get the tracking generator
stream_url = 'rtsp://wowzaec2demo.streamlock.net/vod/mp4:BigBuckBunny_115k.mp4'
track = detect.track_stream(stream_url, output_dir='data/results', save_result=True, display=True, filter_classes=filter_classes)

# Loop over track to retrieve the outputs of each frame
for bbox_details, frame_details in track:
    bbox_xyxy, ids, scores, class_ids = bbox_details
    frame, frame_num, fps = frame_details
    # Do anything with bboxes here
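For instance, the yielded values are enough to count tracked objects and log when a new ID appears; a minimal sketch, assuming ids is an iterable of integer track IDs as unpacked above:
import asone
from asone import ASOne

detect = ASOne(tracker=asone.BYTETRACK, detector=asone.YOLOV7_PYTORCH, use_cuda=True)
seen_ids = set()
for bbox_details, frame_details in detect.track_video('data/sample_videos/test.mp4', filter_classes=['person']):
    bbox_xyxy, ids, scores, class_ids = bbox_details
    frame, frame_num, fps = frame_details
    new_ids = set(ids) - seen_ids  # track IDs appearing for the first time
    seen_ids.update(new_ids)
    print(f"frame {frame_num}: {len(ids)} tracked, {len(new_ids)} new")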
[Note] You can use custom weights for a detector model by simply providing the path to the weights file in the ASOne class.
6.2.1. Changing Detector and Tracking Models
Change the tracker by simply changing the tracker flag. The flags are provided in the benchmark tables.
detect = ASOne(tracker=asone.BYTETRACK, detector=asone.YOLOV7_PYTORCH, use_cuda=True)
# Change tracker
detect = ASOne(tracker=asone.DEEPSORT, detector=asone.YOLOV7_PYTORCH, use_cuda=True)
# Change Detector
detect = ASOne(tracker=asone.DEEPSORT, detector=asone.YOLOX_S_PYTORCH, use_cuda=True)
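To compare trackers on the same clip, you can loop over the tracker flags; a minimal sketch using only flags and parameters shown above (display=False is assumed to be accepted wherever display=True is):
import asone
from asone import ASOne

# Run the same video through two trackers and save both results
for tracker_flag in (asone.BYTETRACK, asone.DEEPSORT):
    detect = ASOne(tracker=tracker_flag, detector=asone.YOLOV7_PYTORCH, use_cuda=True)
    for bbox_details, frame_details in detect.track_video('data/sample_videos/test.mp4',
                                                          output_dir='data/results',
                                                          save_result=True, display=False):
        pass  # collect whatever per-frame statistics you need here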
Run asone/demo_detector.py to test the detector:
# run on gpu
python -m asone.demo_detector data/sample_videos/test.mp4
# run on cpu
python -m asone.demo_detector data/sample_videos/test.mp4 --cpu
6.3. Text Detection
Sample code to detect text in an image:
# Detect and recognize text
import asone
from asone import utils
from asone import ASOne
import cv2
img_path = 'data/sample_imgs/sample_text.jpeg'
ocr = ASOne(detector=asone.CRAFT, recognizer=asone.EASYOCR, use_cuda=True)  # Set use_cuda to False for CPU
img = cv2.imread(img_path)
results = ocr.detect_text(img)
img = utils.draw_text(img, results)
cv2.imwrite("data/results/results.jpg", img)
Use the tracker on text:
import asone
from asone import ASOne

# Instantiate ASOne object
detect = ASOne(tracker=asone.DEEPSORT, detector=asone.CRAFT, recognizer=asone.EASYOCR, use_cuda=True)  # set use_cuda=False to use CPU

# ##############################################
# To track using a video file
# ##############################################
# Get the tracking generator
track = detect.track_video('data/sample_videos/GTA_5-Unique_License_Plate.mp4', output_dir='data/results', save_result=True, display=True)

# Loop over track to retrieve the outputs of each frame
for bbox_details, frame_details in track:
    bbox_xyxy, ids, scores, class_ids = bbox_details
    frame, frame_num, fps = frame_details
    # Do anything with bboxes here
Run asone/demo_ocr.py to test OCR:
# run on gpu
python -m asone.demo_ocr data/sample_videos/GTA_5-Unique_License_Plate.mp4
# run on cpu
python -m asone.demo_ocr data/sample_videos/GTA_5-Unique_License_Plate.mp4 --cpu
6.4. Pose Estimation
Sample code to estimate pose in an image:
# Pose Estimation
import asone
from asone import utils
from asone import PoseEstimator
import cv2
img_path = 'data/sample_imgs/test2.jpg'
pose_estimator = PoseEstimator(estimator_flag=asone.YOLOV8M_POSE, use_cuda=True)  # set use_cuda=False to use CPU
img = cv2.imread(img_path)
kpts = pose_estimator.estimate_image(img)
img = utils.draw_kpts(img, kpts)
cv2.imwrite("data/results/results.jpg", img)
You can now use YOLOv8 and YOLOv7-W6 for pose estimation. The flags are provided in the benchmark tables.
# Pose estimation on video
import asone
from asone import PoseEstimator

video_path = 'data/sample_videos/football1.mp4'
pose_estimator = PoseEstimator(estimator_flag=asone.YOLOV7_W6_POSE, use_cuda=True)  # set use_cuda=False to use CPU
estimator = pose_estimator.estimate_video(video_path, save=True, display=True)

for kpts, frame_details in estimator:
    frame, frame_num, fps = frame_details
    print(frame_num)
    # Do anything with kpts here
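If you want to post-process keypoints after the run, you can simply collect what the generator yields; a minimal sketch that treats kpts as an opaque per-frame result (display=False is assumed to be accepted wherever display=True is):
import asone
from asone import PoseEstimator

pose_estimator = PoseEstimator(estimator_flag=asone.YOLOV7_W6_POSE, use_cuda=True)
all_kpts = []
for kpts, frame_details in pose_estimator.estimate_video('data/sample_videos/football1.mp4', save=True, display=False):
    frame, frame_num, fps = frame_details
    all_kpts.append((frame_num, kpts))  # keypoints keyed by frame number

print(f"collected keypoints for {len(all_kpts)} frames")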
Run asone/demo_pose_estimator.py to test pose estimation:
# run on gpu
python -m asone.demo_pose_estimator data/sample_videos/football1.mp4
# run on cpu
python -m asone.demo_pose_estimator data/sample_videos/football1.mp4 --cpu
To set up ASOne using Docker, follow the instructions given in the Docker setup guide.
ToDo
First Release
Import trained models
Simplify code even further
Updated for YOLOv8
OCR and Counting
OCSORT, StrongSORT, MoTPy
M1/2 Apple Silicon Compatibility
Pose Estimation YOLOv7/v8
YOLO-NAS
SAM Integration
Offered By:
Maintained By: