You Only Look Once

CONCEPT

This project uses YOLOv5 (You Only Look Once, version 5) for hand gesture recognition. YOLOv5 is a real-time object detection system; here it is applied to detect and classify hand gestures in images.

DATA

A custom dataset was collected and labeled specifically for hand gesture recognition, with every instance annotated with a bounding box. This dataset is used to train the YOLOv5 model so it can detect and distinguish the different hand gestures.
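YOLOv5 reads the dataset layout from a small YAML file that points at the image folders and lists the class names. A minimal sketch of such a config follows; the paths, class count, and gesture names are assumptions for illustration, since the source does not specify them:

```yaml
# Hypothetical YOLOv5 dataset config (paths and class names are assumptions)
train: data/gestures/images/train   # training images
val: data/gestures/images/val       # validation images

nc: 3                               # number of gesture classes (assumed)
names: [fist, palm, thumbs_up]      # example class names, not from the source
```

Each image then needs a matching `.txt` label file containing one line per bounding box in YOLO's normalized format.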

PROCESS

  • Data Collection and Annotation: Images were captured with a webcam and annotated with bounding boxes marking the hand regions of interest.
  • Model Implementation: YOLOv5 was trained on the custom images, achieving an average accuracy of 75%.
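The annotation step above produces pixel-space bounding boxes, while YOLOv5 expects labels as normalized (x_center, y_center, width, height) relative to the image size. A small sketch of that conversion (the function name and coordinate convention here are illustrative, not from the project):

```python
def to_yolo_format(xmin, ymin, xmax, ymax, img_w, img_h):
    """Convert a pixel-space box (xmin, ymin, xmax, ymax) to YOLO's
    normalized (x_center, y_center, width, height) format."""
    x_center = (xmin + xmax) / 2 / img_w
    y_center = (ymin + ymax) / 2 / img_h
    width = (xmax - xmin) / img_w
    height = (ymax - ymin) / img_h
    return x_center, y_center, width, height


# Example: a 320x240 box at the top-left of a 640x480 webcam frame
print(to_yolo_format(0, 0, 320, 240, 640, 480))  # (0.25, 0.25, 0.5, 0.5)
```

Each such line, prefixed with the integer class index, goes into the image's label file before training.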

GitHub repository

Model Accuracy