Title:
Real-time detection of traffic signs on mobile devices using deep learning

Author(s)
El Bouzekraoui, Younes
Advisor(s)
Zhang, Xiuwei
James, Yi-Chang Tsai
Kira, Zsolt
Abstract
The rapid advancement of object detection (OD) models has enabled a multitude of applications in computer vision, yet deployment on resource-constrained devices, such as smartphones and single-board computers like the Raspberry Pi or NVIDIA Jetson, remains challenging due to hardware limitations. This work proposes a methodology to address this issue by applying quantization techniques to a YOLO (You Only Look Once) model specifically adapted for the detection of stop signs. The intention is to improve road safety by allowing low-end devices to recognize stop signs effectively and preemptively, thereby potentially reducing the likelihood of traffic-related incidents. We present a compressed YOLO model tailored for efficient operation on these devices, trained on a dataset comprising exclusively stop sign images to optimize detection accuracy within this constrained context. We apply quantization to reduce the precision of the model's parameters. The performance of the resulting model is rigorously evaluated in terms of frames per second (FPS) and mean average precision (mAP), metrics that balance operational efficiency with detection reliability. Our findings demonstrate that the quantized model maintains a high mAP while achieving a significant improvement in FPS compared to its uncompressed counterpart. These results not only reinforce the viability of deploying advanced object detection models on low-resource devices but also provide a framework for similar adaptations of YOLO models to other real-world applications where resource efficiency is required.
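The abstract describes reducing the precision of the model's parameters via quantization. As a generic illustration (not the thesis's actual pipeline), the sketch below shows symmetric per-tensor int8 post-training quantization of a weight array: floats are mapped to the integer range [-127, 127] with a single scale factor, then dequantized to recover an approximation. The function names and values are hypothetical.

```python
import numpy as np

def quantize_int8(weights):
    """Symmetric per-tensor int8 quantization: map floats to [-127, 127]."""
    scale = np.max(np.abs(weights)) / 127.0  # one scale for the whole tensor
    q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    """Recover an approximate float tensor from int8 values and the scale."""
    return q.astype(np.float32) * scale

# Toy weight tensor standing in for a YOLO layer's parameters.
w = np.array([0.5, -1.2, 0.03, 0.9], dtype=np.float32)
q, s = quantize_int8(w)
w_hat = dequantize(q, s)
```

Storing `q` (1 byte per weight) instead of `w` (4 bytes) gives the 4x size reduction that makes the model tractable on low-end hardware; the trade-off is the small rounding error visible in `w - w_hat`, which is why the thesis reports mAP alongside FPS after quantization.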
Date Issued
2023-12-13
Resource Type
Text
Resource Subtype
Thesis