Resolving OpenCV issues to run Tiny YOLO on the Movidius Neural Compute Stick

I recently got my hands on the Intel Movidius Neural Compute Stick. It is an amazing piece of hardware that addresses the need for intelligence at the edge. The Movidius Neural Compute Stick (NCS) is a low-cost, small form-factor developer kit for low-power, vision-based embedded inference applications. It enables you to develop low-power intelligent edge device solutions for image processing using deep learning algorithms.

I came across a great tutorial on getting started with the Movidius NCS and Tiny YOLO on this blog post.

I would like to share a workaround for a very frustrating problem that is not covered in the tutorial, and which I ran into while trying to run the yolo_object_detection_app example. The example detects objects in a video stream captured by the webcam and marks them in the video.

I was not able to get the demo to run because I had initially installed OpenCV from the Debian repository, and that build has no webcam support. Thus, when you run the demo, you'll get the error below on the terminal:


Since this version of OpenCV has no webcam support, it's not possible to get a video stream for processing. You can verify this by printing the captured frame in the code and seeing that it is None.

To resolve this issue, first remove all the OpenCV packages installed from the Debian repository: run pip3 list | grep opencv and uninstall every entry listed.

Once that is done, proceed to install OpenCV from this GitHub source by running the installation script, and everything should work fine once the installation is finished! Happy exploring with the Movidius NCS!


A Support Vector Machine Implementation for Sign Language Recognition on x86

Quick Update:

I have not had enough time to do a comprehensive update on all the recent developments around this project, so here is a quick summary before delving into the technical implementation:

  • The project recently emerged as the grand winner of the hardware trailblazer award at the American Society of Mechanical Engineers (ASME) global finals in New York. You can read more about this here. It competed against other impressive innovations from America, India and the rest of the world.
  • Sign-IO also placed second runner-up at the Royal Academy of Engineering Leaders in Innovation Fellowship in London.
  • Below are a few images of the prototyping efforts so far (top being the most recent version)

The current iteration is refined for portability and appears as shown below:

Pretty neat, right?
The previous version

Currently, more than 30 million people in the world have speech impairments and have to use sign language to communicate, resulting in a language barrier between sign language users and non-users. This project explores the development of a sign-language-to-speech translation glove by implementing a Support Vector Machine (SVM) on the Intel Edison to recognize the various letters signed by sign language users. The predicted gesture is then transmitted to an Android application where it is vocalized.

The Intel Edison is the board of choice for this project primarily because:

  1. The project has substantial, real-time processing needs: Support Vector Machines are machine learning algorithms that require a lot of processing power and memory, and the output must be produced in real time.
  2. The inbuilt Bluetooth module on the Edison is used to transmit the predicted gesture to the companion Android application for vocalization.
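The classification step can be sketched in a few lines. The project itself runs LIBSVM on the Edison; the snippet below uses scikit-learn's `SVC` instead, purely as an illustration, and the flex-sensor readings and letter labels are made up:

```python
from sklearn.svm import SVC

# Each sample: five flex-sensor readings (one per finger).
# Values and gesture-to-letter mappings are illustrative, not real calibration data.
X_train = [
    [820, 810, 805, 815, 800],  # fist-like gesture, e.g. the letter 'A'
    [805, 300, 310, 820, 810],  # index and middle extended, e.g. 'U'
    [300, 310, 305, 315, 300],  # open hand, e.g. the letter 'B'
]
y_train = ["A", "U", "B"]

clf = SVC(kernel="linear")
clf.fit(X_train, y_train)

# A new reading close to the open-hand pattern should classify as 'B'
print(clf.predict([[310, 305, 300, 320, 310]])[0])
```

On the glove, the same `predict` call would run against live sensor readings before the result is sent over Bluetooth.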

I have also published this project on the Intel Developer Zone site.

The project code can be downloaded from here

Below is the short project video demo from a while back:

1. The Hardware Design

The sign language glove has five flex sensors mounted on each finger to quantify how much a finger is bent.
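Each flex sensor is read as a raw analog value, so a small calibration step turns a reading into a bend fraction. A minimal sketch (the `flat` and `full_bend` calibration values below are hypothetical placeholders, not measurements from the actual glove):

```python
def adc_to_bend(raw, flat=300, full_bend=700):
    """Map a raw ADC reading to a bend fraction: 0.0 (flat) to 1.0 (fully bent).

    `flat` and `full_bend` are per-sensor calibration readings; the defaults
    here are placeholders, not values from the real hardware.
    """
    fraction = (raw - flat) / (full_bend - flat)
    return max(0.0, min(1.0, fraction))  # clamp out-of-range readings

print(adc_to_bend(300))  # flat finger
print(adc_to_bend(500))  # halfway bent
print(adc_to_bend(900))  # beyond calibration range, clamped
```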

Continue reading

Intel AI Meetups in Africa- A first!

Lagos Meetup

November 2017 was an exceptional month for our developer outreach activities in Africa! I had the chance to take the Intel AI Meetup Series to 6 cities across Africa. This was the first time we had meetups on Artificial Intelligence and Machine Learning in these cities. It was great meeting the vibrant developer communities in the various developer hubs and delivering practical sessions on this very interesting and emerging topic. The sessions covered the following topics:

  • Machine Learning & Deep Learning Fundamentals
  • Machine Learning Applications in Real Life
  • Convolutional Neural Networks (CNN) for Image Recognition
  • Recurrent Neural Networks for Speech Recognition
  • How Intel plans to help developers to improve performance of Machine Learning workloads
  • What frameworks are optimized for Intel Architecture and how you can get access to them
  • The Intel AI Portfolio

The Meetup Series began in Lagos at CCHUB on the 16th of November, where we trained 125 developers. We then proceeded to Cairo at KMT Continue reading

Creating a context aware application for 2-in-1 devices

Applications running on 2-in-1 tablets can be in either laptop mode or desktop mode. Note that laptop mode and desktop mode are different from landscape and portrait modes. Laptop mode refers to users interacting with the application via touch input and gestures, whereas desktop mode refers to users interacting with the application via the keyboard and mouse. Applications need to be contextually aware of when the device mode has changed and switch between the modes accordingly. Continue reading
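On Windows, one documented way to detect whether a 2-in-1 is currently in slate (touch) mode or clamshell (keyboard-and-mouse) mode is `GetSystemMetrics` with the `SM_CONVERTIBLESLATEMODE` index. A minimal sketch, which simply returns None on non-Windows platforms:

```python
import ctypes
import sys

# GetSystemMetrics index for 2-in-1 convertibles; the call returns 0
# while the device is in slate (touch) mode.
SM_CONVERTIBLESLATEMODE = 0x2003

def in_slate_mode():
    """Return True in slate/touch mode, False in clamshell mode,
    or None when not running on Windows."""
    if sys.platform != "win32":
        return None
    return ctypes.windll.user32.GetSystemMetrics(SM_CONVERTIBLESLATEMODE) == 0

print(in_slate_mode())
```

An application would poll this (or listen for the corresponding mode-change notification) and switch its UI between touch-friendly and keyboard-and-mouse layouts.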

Predicting user activity using an accelerometer on Intel based wearables

This project describes how to recognize certain types of human physical activities using acceleration data generated from the ADXL345 accelerometer connected to the Intel Edison board.

I have also published this project on the Intel Developer Zone site.

Human activity recognition has a wide range of applications, especially in wearables. The data collected can be used to monitor patient activity, track exercise patterns in athletes, support fitness tracking, and so on.

We will be using the support vector machine learning algorithm to achieve this. The project is implemented using the LIBSVM library, separately in both Python and Node.js.

The set of activities to be recognized are running, walking, going up or down a flight of stairs, and resting. We collect the accelerometer data over a set of intervals and extract the features, which in this case are the acceleration values along the x, y and z axes. We then use this data to build a training model that performs the activity classification.
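The windowing-and-feature step described above can be sketched in pure Python. The per-axis mean and standard deviation used here are a common illustrative choice for activity recognition; the project itself feeds the raw axis readings to LIBSVM:

```python
from statistics import mean, stdev

def extract_features(window):
    """Compute per-axis features from a window of (x, y, z) accelerometer samples.

    Mean and standard deviation per axis are one common feature choice;
    the original project uses the raw axis values directly.
    """
    xs, ys, zs = zip(*window)
    features = []
    for axis in (xs, ys, zs):
        features.extend([mean(axis), stdev(axis)])
    return features

# A made-up 4-sample window of (x, y, z) readings, in m/s^2
window = [(0.1, 9.8, 0.0), (0.2, 9.7, 0.1), (0.0, 9.9, 0.0), (0.1, 9.8, 0.1)]
print(extract_features(window))
```

Each window yields one feature vector, and the vectors labeled with their activity form the training set for the classifier.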

The project code can be downloaded from here

Hardware Implementation.

The diagram below shows the pin connection for the ADXL345 to the Intel® Edison board.


Part 1. Python Implementation.

Setting up LIBSVM

Download the LIBSVM library and transfer the zipped LIBSVM folder to the Intel® Edison board root directory using WinSCP. Then extract it by running:

tar -xzf libsvm-3.21.tar.gz

Run make in the libsvm-3.21 directory.

Continue reading

Twelve Ways to Fool the Masses When Giving Performance Results on Parallel Computers

I came across one of the most interesting and humorous research papers while doing my nightly reads. The paper is titled Twelve Ways to Fool the Masses When Giving Performance Results on Parallel Computers, written by David H. Bailey and published in 1991. You can download the full paper here.
The title describes exactly what the paper is about, and I'll just share some interesting snippets from the document.

To quote in part the abstract:
Many of us in the field of highly parallel scientific computing recognize that it is often quite difficult to match the run time performance of the best conventional supercomputers. But since lay persons usually don't appreciate these difficulties and therefore don't understand when we quote mediocre performance results, it is often necessary for us to adopt some advanced techniques in order to deflect attention from possibly unfavorable facts.

Continue reading