DornerWorks

The DornerWorks team set out to create a smart vision system that can identify cars quickly and accurately using advanced AI technology. Much as a smartphone recognizes faces in photos, we wanted to build a system that could spot vehicles in video feeds, but one that runs on compact, portable hardware without needing a connection to the cloud. This project focuses on running a powerful object detection model called YOLOv2 efficiently on specialized hardware for applications like traffic monitoring, autonomous vehicles, and drone navigation.

Our Objectives

  • Implement a YOLOv2 model on the NPU for efficient object detection of cars
  • Validate functionality and performance on the VEK280 platform
  • Create a solution balancing speed and accuracy for edge applications

Platform & Architecture

Think of AMD’s VEK280 as a mini supercomputer built specifically for AI applications. This Versal AI Edge evaluation kit includes dedicated AI Engine processing units designed to run complex AI workloads. We paired it with the YOLOv2 object detection model, a combination that gives us the power to process video in real time without needing massive hardware setups.
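To give a feel for what YOLOv2 actually computes, the sketch below decodes a raw YOLOv2 feature map into candidate boxes using the standard grid-plus-anchor formulation. This is a minimal NumPy illustration, not our deployed pipeline: the function name, grid size, anchors, and threshold are all illustrative, and the on-device version runs on the NPU rather than in Python.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def decode_yolov2(output, anchors, num_classes, conf_thresh=0.5):
    """Decode a raw YOLOv2 feature map into candidate boxes.

    output: (S, S, A*(5+C)) raw network output for an S x S grid
    anchors: list of (w, h) box priors in grid-cell units
    Returns a list of (x, y, w, h, score, class_id), with x/y/w/h
    normalized to [0, 1] relative to the full image.
    """
    S = output.shape[0]
    A = len(anchors)
    preds = output.reshape(S, S, A, 5 + num_classes)
    boxes = []
    for row in range(S):
        for col in range(S):
            for a, (pw, ph) in enumerate(anchors):
                tx, ty, tw, th, to = preds[row, col, a, :5]
                # Box center is offset within its grid cell; size scales the anchor.
                x = (col + sigmoid(tx)) / S
                y = (row + sigmoid(ty)) / S
                w = pw * np.exp(tw) / S
                h = ph * np.exp(th) / S
                # Softmax over class logits, combined with objectness.
                cls = preds[row, col, a, 5:]
                cls_probs = np.exp(cls - cls.max())
                cls_probs /= cls_probs.sum()
                score = sigmoid(to) * cls_probs.max()
                if score >= conf_thresh:
                    boxes.append((x, y, w, h, float(score), int(cls_probs.argmax())))
    return boxes
```

A real deployment would follow this decode with non-maximum suppression to merge overlapping detections of the same car.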

Design Flow

Creating this system involved several steps: 

  1. We prepared the AI model by training it to recognize cars using thousands of example images.
  2. We converted this model from its original format (Darknet) to one that works with our tools (TensorFlow). 
  3. We integrated everything with the VEK280 platform using tools that help optimize performance for this specific hardware.
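A practical concern in step 2 is confirming that the converted TensorFlow model still produces the same outputs as the original Darknet network. One simple way to do that, sketched below, is to run the same input batch through both frameworks and compare the raw outputs numerically; the function name and tolerances here are illustrative, not part of our actual toolchain.

```python
import numpy as np

def outputs_match(reference, converted, rtol=1e-3, atol=1e-4):
    """Check that a converted model reproduces the reference outputs.

    reference, converted: raw network outputs from running the same
    input batch through the original and converted frameworks.
    Returns True when shapes agree and values match within tolerance.
    """
    reference = np.asarray(reference)
    converted = np.asarray(converted)
    if reference.shape != converted.shape:
        return False
    return bool(np.allclose(reference, converted, rtol=rtol, atol=atol))
```

Catching a parity failure at this stage is much cheaper than debugging accuracy drops after the model has been compiled for the hardware.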

Performance Metrics

The results exceeded our expectations. The system processes 45 frames per second, fast enough for smooth video analysis in real-time applications; for comparison, movies typically run at 24 frames per second. We achieved this while using only 30% of the available AI Engine processing power, leaving headroom for additional capabilities. Detection accuracy (mAP) also improved, from 49.88% on a GPU baseline to 52.9% on our specialized setup. The system is therefore both faster and more reliable at spotting vehicles in varied conditions, which is crucial for applications where safety is involved.

The results: 

  • 45 FPS inference performance on the NPU
  • Only 30% AIE utilization on the NPU
  • mAP improved from 49.88% (GPU) to 52.9% (NPU)
  • Memory usage and execution time benchmarks
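Throughput numbers like the 45 FPS figure above are typically gathered by timing many inferences after a short warm-up so that one-time initialization cost doesn't skew the average. A minimal sketch of that measurement pattern, assuming a generic `infer` callable standing in for the actual NPU inference call:

```python
import time

def measure_fps(infer, frames, warmup=10):
    """Measure average inference throughput in frames per second.

    infer: callable that runs one inference on a frame
    frames: sequence of input frames; the first `warmup` frames are
    run but excluded from timing to absorb lazy initialization.
    """
    frames = list(frames)
    for f in frames[:warmup]:
        infer(f)  # warm caches and trigger any one-time setup
    start = time.perf_counter()
    for f in frames[warmup:]:
        infer(f)
    elapsed = time.perf_counter() - start
    return (len(frames) - warmup) / elapsed
```

In field conditions, it also helps to report the worst-case frame time alongside the average, since a safety-critical pipeline cares about latency spikes, not just mean throughput.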

Future Applications

This technology opens doors to exciting possibilities. Imagine drones that can navigate busy areas while automatically avoiding obstacles, or traffic monitoring systems that operate independently in remote locations. These advancements could transform fields ranging from urban planning to emergency response by providing real-time visual intelligence in situations where traditional solutions fall short.

  • Integration with drone platforms
  • Real-time video processing pipeline
  • Object avoidance capabilities
  • Performance benchmarking in field conditions

Let’s Build the Future Together

Does your organization need help navigating complex obstacles with embedded technology? Count on our team to provide a solution that meets and exceeds your expectations. Get in touch today, and let’s start building the future together.

SCHEDULE A MEETING