
How Beginners Can Quickly Run YOLOv8 for Real-Time Recognition on a Laptop

DFRobot Jun 18 2024

With the rapid advancement of artificial intelligence technology, significant progress has been made in the field of computer vision. Tasks such as object detection, image segmentation, and pose estimation have gained considerable attention. The YOLO (You Only Look Once) series of algorithms, as a prominent representative in the field of object detection, is widely popular for its high speed and accuracy. This article will provide a detailed guide on how to quickly run YOLOv8 on a computer to achieve detection, segmentation, and pose estimation. By reading this article, you will quickly learn how to install, configure, and run real-time recognition with YOLOv8 on a Windows system, laying a solid foundation for practical applications.

 

Setup Environment

  • PC: ThinkPad AMD
  • Operating System: Windows 11 x64
  • Software: Python 3.11
  • Environment: Anaconda

 

1. Download Anaconda3:

URL: https://www.anaconda.com/download/success


2. Install Anaconda3

I chose D:\Anaconda\Anaconda3 as the installation directory. During installation, check the box for 'Add Anaconda3 to my PATH environment variable' so that conda is available from the command line.


3. Verify the installation

Open PowerShell and enter the command conda -V.

If the command prints a version number rather than the error

'conda' is not recognized as an internal or external command, operable program or batch file

the installation was successful. If you did not add Anaconda to PATH during installation, you will need to add the Anaconda directories to the PATH environment variable manually.
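
A successful check looks roughly like the following (the exact version printed on your machine will differ):

Bash
conda -V
# Example output (your version will vary):
# conda 23.7.4

If the command is still not found after installation, add the Anaconda directories to PATH manually; for the directory chosen above these are typically D:\Anaconda\Anaconda3, D:\Anaconda\Anaconda3\Scripts, and D:\Anaconda\Anaconda3\Library\bin.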

 

4. Create and activate a virtual environment

YOLOv8 is installed from the official ultralytics package on PyPI. (If you want to try YOLOv10 instead, the environment setup in steps 1-3 above is the same; for detailed deployment instructions, see: https://github.com/THU-MIG/yolov10.)

 

Bash
# Create a new conda environment named yolo8test with Python 3.8
conda create -n yolo8test python=3.8
# Activate the environment
conda activate yolo8test
# Install the ultralytics package from PyPI
pip install ultralytics
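
To confirm that the package installed correctly, a quick import check is enough (it simply prints the installed ultralytics version; any recent release should work for the examples below):

Bash
# Verify the ultralytics installation by printing its version
python -c "import ultralytics; print(ultralytics.__version__)"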

Performance Comparison of Different Models

The official website offers YOLOv8 models in various sizes. The comparison of parameters is shown in the table below:

Model | Size (pixels) | mAPval 50-95 | Inference Time CPU ONNX (ms) | Inference Time A100 TensorRT (ms) | Params (M) | FLOPs (B)
YOLOv8n | 640 | 37.3 | 80.4 | 0.99 | 3.2 | 8.7
YOLOv8s | 640 | 44.9 | 128.4 | 1.2 | 11.2 | 28.6
YOLOv8m | 640 | 50.2 | 234.7 | 1.83 | 25.9 | 78.9
YOLOv8l | 640 | 52.9 | 375.2 | 2.39 | 43.7 | 165.2
YOLOv8x | 640 | 53.9 | 479.1 | 3.53 | 68.2 | 257.8

Parameter Explanation:

  • Size (pixels): Resolution of the input image.
  • mAPval 50-95: Mean Average Precision at IoU thresholds from 0.5 to 0.95, used to evaluate the overall accuracy of the model.
  • CPU ONNX Speed (ms): Inference speed when running on a CPU using the ONNX format.
  • A100 TensorRT Speed (ms): Inference speed when optimized with TensorRT on an NVIDIA A100 GPU.
  • Params (M): The number of parameters in the model; more parameters usually indicate a more complex model.
  • FLOPs (B): Reflects the computational complexity of the model.

Taking the YOLOv8 segmentation (-seg) models as an example, the variants differ in complexity, number of parameters, computational requirements, accuracy, and speed, making them suitable for different application scenarios (a short code sketch for switching variants follows the list below).

YOLOv8n-seg
  • Characteristics: The lightest model in the series, with the fewest parameters and the lowest computational requirements.
  • Suitable Scenarios: Resource-constrained environments such as low-power devices, or real-time applications where speed is more important than accuracy.
  • Considerations: Sacrifices some accuracy, especially in more complex scenes.

YOLOv8s-seg
  • Characteristics: Slightly more complex than YOLOv8n-seg, offering better performance and accuracy.
  • Suitable Scenarios: Applications that need higher accuracy but still prioritize speed, such as medium-scale monitoring systems.
  • Considerations: Provides better detection and segmentation results while maintaining a relatively fast processing speed.

YOLOv8m-seg
  • Characteristics: A medium-complexity model offering higher accuracy.
  • Suitable Scenarios: Scenarios that require higher accuracy and have adequate computational resources, such as advanced monitoring systems or automated detection systems.
  • Considerations: Requires more computational resources and has a relatively slower processing speed.

YOLOv8l-seg
  • Characteristics: A larger model providing high-precision detection and segmentation.
  • Suitable Scenarios: High-end applications such as advanced research projects or complex image processing tasks.
  • Considerations: Requires significant computational resources and is slower; not suitable for real-time applications.

YOLOv8x-seg
  • Characteristics: The most complex and accurate model in the series.
  • Suitable Scenarios: Scenarios requiring extremely high accuracy, such as professional image analysis and research.
  • Considerations: Extremely high demand for computational resources; requires professional-grade hardware support.
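
Switching between model sizes and tasks only changes the weights file name passed to YOLO(); the weights are downloaded automatically on first use. A minimal sketch (the file name 'test.jpg' is a placeholder for any image on your disk):

Python
import cv2
from ultralytics import YOLO

# The weights file name selects both the model size and the task:
#   detection:    yolov8n.pt ... yolov8x.pt
#   segmentation: yolov8n-seg.pt ... yolov8x-seg.pt
#   pose:         yolov8n-pose.pt ... yolov8x-pose.pt
model = YOLO('yolov8s-seg.pt')             # e.g. the small segmentation model

# Run inference on a local image ('test.jpg' is a placeholder path)
results = model('test.jpg', device='cpu')

# plot() returns the annotated image as a BGR numpy array that OpenCV can save directly
annotated = results[0].plot()
cv2.imwrite('prediction.jpg', annotated)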

Object Detection

Code:

Python
import cv2
import time
from ultralytics import YOLO

# Load the YOLOv8n detection model (weights are downloaded automatically on first run)
model = YOLO('yolov8n.pt')
cap = cv2.VideoCapture(0)

if not cap.isOpened():
    print("Error: could not open the camera")
    exit()

width = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))
height = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))

prev_time = 0
fps = 0
while True:

    ret, frame = cap.read()
    
    if not ret:
        print("Error: failed to read a frame from the camera")
        break

    # Run inference on the current frame (CPU)
    results = model(frame, device='cpu')

    # Draw a bounding box and class label for each detection
    for result in results:
        boxes = result.boxes.xyxy.cpu().numpy()
        confidences = result.boxes.conf.cpu().numpy()
        class_ids = result.boxes.cls.cpu().numpy().astype(int)
        for i in range(len(boxes)):
            box = boxes[i]
            x1, y1, x2, y2 = map(int, box[:4])
            confidence = confidences[i]
            class_id = class_ids[i]
            label = result.names[class_id]
            cv2.rectangle(frame, (x1, y1), (x2, y2), (0, 255, 0), 2)
            cv2.putText(frame, f'{label} {confidence:.2f}', (x1, y1 - 10), 
                        cv2.FONT_HERSHEY_SIMPLEX, 0.9, (36, 255, 12), 2)

    # Estimate FPS from the time between consecutive frames
    curr_time = time.time()
    fps = 1 / (curr_time - prev_time)
    prev_time = curr_time

    cv2.putText(frame, f'FPS: {fps:.2f}', (10, 30), cv2.FONT_HERSHEY_SIMPLEX, 1, (0, 255, 0), 2)
    cv2.imshow('YOLOv8n Real-time', frame)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

cap.release()
cv2.destroyAllWindows()

Save the code as a Python file named 'yolov8ndet.py' and run it.

Bash
python yolov8ndet.py

Testing with the YOLOv8n detection model, the running speed is about 7.49 FPS.
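
If you need a higher frame rate on a CPU, one common adjustment (a sketch; 320 is an arbitrary choice and will reduce accuracy somewhat) is to lower the inference resolution with the imgsz argument:

Python
# Inside the loop, run inference at a smaller input size to speed up CPU inference
results = model(frame, device='cpu', imgsz=320)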


Segmentation

Code:

Python
import cv2
import time
from ultralytics import YOLO

# Load the YOLOv8n segmentation model (weights are downloaded automatically on first run)
model = YOLO('yolov8n-seg.pt')
cap = cv2.VideoCapture(0)

if not cap.isOpened():
    print("Error: could not open the camera")
    exit()

width = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))
height = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))


prev_time = 0
fps = 0

while True:

    ret, frame = cap.read()
    if not ret:
        print("Error: failed to read a frame from the camera")
        break

    # Run inference on the current frame (CPU)
    results = model(frame, device='cpu')

    for result in results:
        # plot() returns the annotated frame (boxes + masks) as a BGR image ready for cv2.imshow
        frame = result.plot()

        # Box coordinates, confidences, and class names are still available if you want custom drawing
        boxes = result.boxes.xyxy.cpu().numpy()
        confidences = result.boxes.conf.cpu().numpy()
        class_ids = result.boxes.cls.cpu().numpy().astype(int)

        for i in range(len(boxes)):
            box = boxes[i]
            x1, y1, x2, y2 = map(int, box[:4])
            confidence = confidences[i]
            class_id = class_ids[i]
            label = result.names[class_id]

    curr_time = time.time()
    fps = 1 / (curr_time - prev_time)
    prev_time = curr_time

    cv2.putText(frame, f'FPS: {fps:.2f}', (10, 30), cv2.FONT_HERSHEY_SIMPLEX, 1, (0, 255, 0), 2)
    cv2.imshow('YOLOv8n Real-time', frame)
   

    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

cap.release()
cv2.destroyAllWindows()

Save the code as a Python file named 'yolov8nseg.py' and run it.

Bash
python yolov8nseg.py

Testing with the YOLOv8n-seg segmentation model, the running speed is about 5.07 FPS.
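
If you want the segmentation output itself rather than only the rendered overlay, each result also exposes the instance masks; a minimal sketch using the ultralytics Results API (the masks attribute is None on frames with no detections):

Python
# Inside the loop, after results = model(frame, device='cpu'):
for result in results:
    if result.masks is not None:
        polygons = result.masks.xy        # list of per-instance polygons in pixel coordinates
        mask_tensor = result.masks.data   # per-instance binary masks as a tensor
        print(f'{len(polygons)} instance masks in this frame')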


 

Pose Estimation

Code:

Python
import cv2
import time
from ultralytics import YOLO

# Load the YOLOv8n pose estimation model (weights are downloaded automatically on first run)
model = YOLO('yolov8n-pose.pt')
cap = cv2.VideoCapture(0)

if not cap.isOpened():
    print("Error: could not open the camera")
    exit()

width = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))
height = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))

prev_time = 0
fps = 0
while True:

    ret, frame = cap.read()
    if not ret:
        print("Error: failed to read a frame from the camera")
        break

    # Run inference on the current frame (CPU)
    results = model(frame, device='cpu')

    for result in results:
        # plot() returns the annotated frame (boxes + skeleton keypoints) as a BGR image ready for cv2.imshow
        frame = result.plot()

        # Box coordinates, confidences, and class names are still available if you want custom drawing
        boxes = result.boxes.xyxy.cpu().numpy()
        confidences = result.boxes.conf.cpu().numpy()
        class_ids = result.boxes.cls.cpu().numpy().astype(int)

        for i in range(len(boxes)):
            box = boxes[i]
            x1, y1, x2, y2 = map(int, box[:4])
            confidence = confidences[i]
            class_id = class_ids[i]
            label = result.names[class_id]
    curr_time = time.time()
    fps = 1 / (curr_time - prev_time)
    prev_time = curr_time
    # Resize the annotated frame for display
    frame = cv2.resize(frame, (640, 480))

    cv2.putText(frame, f'FPS: {fps:.2f}', (10, 30), cv2.FONT_HERSHEY_SIMPLEX, 1, (0, 255, 0), 2)
    cv2.imshow('YOLOv8n Real-time', frame)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

cap.release()
cv2.destroyAllWindows()

Save the code as a Python file named 'yolov8npos.py' and run it.

Bash
python yolov8npos.py

Testing with the YOLOv8n-pose model, the running speed is about 6.87 FPS.
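
To work with the pose data directly instead of the drawn skeleton, each result exposes the detected keypoints; a minimal sketch using the ultralytics Results API (keypoint index 0 is the nose in the COCO keypoint layout used by the pretrained pose models):

Python
# Inside the loop, after results = model(frame, device='cpu'):
for result in results:
    if result.keypoints is not None:
        kpts = result.keypoints.xy.cpu().numpy()   # shape: (num_people, 17, 2), pixel coordinates
        for person in kpts:
            nose_x, nose_y = person[0]             # keypoint 0 is the nose
            print(f'Nose at ({nose_x:.0f}, {nose_y:.0f})')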


Summary

This article has provided a detailed guide on how to quickly run YOLOv8 on your computer, covering object detection, instance segmentation, and pose estimation (YOLOv8 also supports classification, oriented bounding boxes, and tracking, which are not demonstrated here). After walking through the installation, configuration, and usage of YOLOv8, you should now have a solid grasp of the basic operations of this powerful tool. In practical applications, YOLOv8 can deliver efficient and accurate computer vision results, helping you achieve better outcomes in the field of artificial intelligence. With ongoing technological advancements, YOLOv8 is expected to play an even greater role in the future, bringing more innovations and breakthroughs to computer vision.

If this article was helpful to you, we invite you to stay tuned for more updates from us. Coming up next, we'll delve into testing YOLOv10, and we hope to see you there.
