I recently got my hands on the Intel Movidius Neural Compute Stick. It is an amazing piece of hardware that addresses the need for intelligence at the edge. The Movidius Neural Compute Stick (NCS) is a low-cost, small-form-factor developer kit for low-power, vision-based embedded inference applications. It enables you to develop low-power intelligent edge device solutions that apply deep learning algorithms to image processing.
I would like to share a workaround for a very frustrating problem that is not covered in the tutorial, which I came across while trying to run the yolo_object_detection_app example. This demo detects objects in a video stream captured by the webcam and marks them in the video.
I was not able to get the demo to run because I had initially installed OpenCV from the Debian repository, and that build has no webcam support. As a result, running the demo fails with an error in the terminal.
Since this version of OpenCV has no webcam support, it is not possible to get a video stream for processing. You can verify this by printing the frame returned by cv2.VideoCapture.read() and observing that it is None.
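One way to confirm the missing webcam support is a short diagnostic script. This is a minimal sketch; the device index 0 is an assumption and may differ on your machine:

```python
# Diagnostic sketch: check whether this OpenCV build can grab webcam frames.

def frame_is_valid(frame):
    """A frame read from a broken or unsupported capture comes back as None."""
    return frame is not None

if __name__ == "__main__":
    import cv2  # only needed when actually running the diagnostic

    cap = cv2.VideoCapture(0)  # device index 0 is an assumption
    ok, frame = cap.read()
    if not ok or not frame_is_valid(frame):
        print("No webcam support in this OpenCV build (frame is None)")
    else:
        print("Webcam OK, frame shape:", frame.shape)
    cap.release()
```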
To resolve this issue, first remove all the OpenCV packages installed from the Debian repository. Run pip3 list | grep opencv and remove every entry listed.
Once that is done, install OpenCV using the script from http://milq.github.io/install-opencv-ubuntu-debian/ by running bash install-opencv.sh. Everything should work fine once the installation is finished. Happy exploring with the Movidius NCS!
I had the pleasure of attending this year’s edition of the International Conference on Machine Learning (ICML), held in Stockholm, Sweden. I was showcasing the Intel Smart Park solution, an AI system designed to help drivers find available parking slots in a parking lot. At its core is a 5-layer CNN that determines whether a slot is available; the candidate slots are then passed through a graph algorithm that selects the one closest and most convenient for the driver. This is integrated into the car’s HUD, which gives the driver instructions for navigating to the assigned spot. You can see the demo displayed in the background.
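The slot-selection step can be sketched as a shortest-path search over a graph of the lot. This is a hypothetical simplification of the system described above; the graph layout, edge weights, and availability set are all made-up values:

```python
import heapq

def nearest_available_slot(graph, entrance, available):
    """Dijkstra's algorithm from the entrance; returns the closest node
    that is an available slot. `graph` maps node -> [(neighbor, distance)]."""
    dist = {entrance: 0.0}
    heap = [(0.0, entrance)]
    while heap:
        d, node = heapq.heappop(heap)
        if d > dist.get(node, float("inf")):
            continue  # stale heap entry
        if node in available:
            return node, d  # first available node popped is the nearest
        for nbr, w in graph[node]:
            nd = d + w
            if nd < dist.get(nbr, float("inf")):
                dist[nbr] = nd
                heapq.heappush(heap, (nd, nbr))
    return None, float("inf")

# Toy lot: entrance connected to two slot nodes via weighted aisles
lot = {
    "entrance": [("A1", 5), ("A2", 8)],
    "A1": [("entrance", 5), ("A2", 4)],
    "A2": [("A1", 4), ("entrance", 8)],
}
slot, d = nearest_available_slot(lot, "entrance", {"A2"})  # A1 is occupied
```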
In addition to this, I was able to attend a number of very intriguing presentations at the conference, the most memorable being the presentation by Sanjeev Arora from Princeton titled Toward Theoretical Understanding Continue reading →
I have not had enough time to do a comprehensive update on all the recent developments around this project, so here is a quick summary before delving into the technical implementation:
The project recently emerged as the grand winner of the hardware trailblazer award at the American Society of Mechanical Engineers (ASME) global finals in New York. You can read more about this here. It competed against other impressive innovations from America, India, and the rest of the world.
Below are a few images of the prototyping efforts so far (top being the most recent version).
The current iteration is refined for portability and appears as shown below:
Currently, more than 30 million people in the world have speech impairments and must use sign language to communicate, resulting in a language barrier between sign language users and non-users. This project explores the development of a sign-language-to-speech translation glove that implements a Support Vector Machine (SVM) on the Intel Edison to recognize the various letters signed by the user. The predicted gesture is then transmitted to an Android application, where it is vocalized.
The Intel Edison is the preferred board for this project primarily because of:
The huge and immediate processing needs of the project: Support Vector Machines are machine learning algorithms that require a lot of processing power and memory, and in addition we need our output in real time.
The inbuilt Bluetooth module on the Edison is used to transmit the predicted gesture to the companion Android application for vocalization.
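The recognition step can be illustrated end to end. This sketch substitutes a simple nearest-centroid classifier for the actual SVM so it runs without LIBSVM, and the five flex-sensor readings per letter are hypothetical values, not real glove data:

```python
import math

def train_centroids(samples):
    """samples: {letter: [feature_vectors]} from the glove's flex sensors.
    Averages each letter's vectors into a centroid (a stand-in for SVM training)."""
    centroids = {}
    for letter, vecs in samples.items():
        n = len(vecs)
        centroids[letter] = [sum(col) / n for col in zip(*vecs)]
    return centroids

def classify(centroids, reading):
    """Return the letter whose centroid is nearest to the sensor reading."""
    def dist(c):
        return math.sqrt(sum((a - b) ** 2 for a, b in zip(c, reading)))
    return min(centroids, key=lambda letter: dist(centroids[letter]))

# Hypothetical 5-sensor flex readings for two signed letters
training = {
    "A": [[0.9, 0.8, 0.8, 0.9, 0.2], [0.8, 0.9, 0.9, 0.8, 0.1]],
    "B": [[0.1, 0.1, 0.2, 0.1, 0.9], [0.2, 0.1, 0.1, 0.2, 0.8]],
}
model = train_centroids(training)
letter = classify(model, [0.85, 0.85, 0.8, 0.9, 0.15])  # reading resembles "A"
```

In the real glove, the predicted letter would then be sent over the Edison's Bluetooth link to the Android app for vocalization.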
I have also published this project on the Intel Developer Zone site.
November 2017 was an exceptional month for our developer outreach activities in Africa! I had the chance to take the Intel AI Meetup Series to 6 cities across the continent. This was the first time we held our meetups on Artificial Intelligence and Machine Learning in these cities. It was great meeting the vibrant developer communities in the various developer hubs and delivering practical sessions on this very interesting and emerging topic. The sessions covered the following topics:
Machine Learning & Deep Learning Fundamentals
Machine Learning Applications in Real Life
Convolutional Neural Networks (CNN) for Image Recognition
Recurrent Neural Networks for Speech Recognition
How Intel plans to help developers improve the performance of Machine Learning workloads
What frameworks are optimized for Intel Architecture and how you can get access to them
The Intel AI Portfolio
The Meetup Series began in Lagos at CCHUB on the 16th of November, where we trained 125 developers. We then proceeded to Cairo at KMT Continue reading →
Applications running on 2-in-1 devices can be in either slate mode or laptop mode. Note that these modes are different from landscape and portrait orientations. Slate mode refers to users interacting with the application via touch input and gestures, whereas laptop mode refers to users interacting with the application via the keyboard and mouse. Applications need to be contextually aware of when the device mode has changed and switch between the modes accordingly. Continue reading →
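On Windows 8 and later, one way an application can query the current mode is the SM_CONVERTIBLESLATEMODE system metric. This is a minimal sketch, assuming a Windows 2-in-1; the Windows-specific call is guarded so the interpretation logic stays portable:

```python
import ctypes

SM_CONVERTIBLESLATEMODE = 0x2003  # GetSystemMetrics index, Windows 8+

def interpret_slate_metric(value):
    """GetSystemMetrics(SM_CONVERTIBLESLATEMODE) returns 0 while the
    device is in slate (touch) mode and non-zero in laptop mode."""
    return "slate" if value == 0 else "laptop"

if __name__ == "__main__":
    # Windows-only call; guarded so the snippet imports cleanly elsewhere.
    metric = ctypes.windll.user32.GetSystemMetrics(SM_CONVERTIBLESLATEMODE)
    print("Current mode:", interpret_slate_metric(metric))
```

An application would poll this metric (or listen for the WM_SETTINGCHANGE broadcast) and adjust touch-target sizes and UI density when the mode flips.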
This project demonstrates how to recognize certain types of human physical activity using acceleration data generated by an ADXL345 accelerometer connected to the Intel Edison board.
I have also published this project on the Intel Developer Zone site.
Human activity recognition has a wide range of applications, especially in wearables. The data collected can be used to monitor patient activity, track exercise patterns in athletes, enable fitness tracking, and so on.
We will be using the Support Vector Machine learning algorithm to achieve this. The project is implemented with the LIBSVM library, separately in both Python and Node.js.
The set of activities to be recognized are running, walking, going up/down a flight of stairs, and resting. We collect the accelerometer data over a set of intervals and extract the features, which in this case are derived from the acceleration values along the x, y, and z axes. We then use this data to build a training model that performs the activity classification.
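The windowing and feature-extraction step can be sketched as follows. This is a simplified illustration: the per-axis mean and standard deviation features, the window length, and the sample values are all assumptions, and the resulting vectors would then be fed to the LIBSVM model:

```python
import math

def extract_features(window):
    """window: list of (x, y, z) accelerometer samples over one interval.
    Returns per-axis mean and standard deviation, a common feature set
    for activity classification."""
    features = []
    for axis in zip(*window):  # iterate over x-, y-, then z-axis values
        n = len(axis)
        mean = sum(axis) / n
        var = sum((v - mean) ** 2 for v in axis) / n
        features.extend([mean, math.sqrt(var)])
    return features

# Two hypothetical 4-sample windows: resting (flat) vs. running (spiky)
resting = [(0.0, 0.0, 9.8)] * 4
running = [(1.2, -0.8, 12.0), (-1.5, 0.9, 7.0), (2.0, -1.1, 13.5), (-1.8, 1.0, 6.5)]
print(extract_features(resting))  # zero standard deviation on every axis
```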
I came across one of the most interesting and humorous research papers while doing my nightly reads. The paper is titled Twelve Ways to Fool the Masses When Giving Performance Results on Parallel Computers by David H. Bailey and published in 1991. You can download the full paper here.
The title describes exactly what the paper is about and I’ll just share some interesting snippets from the document.
To quote the abstract in part: “Many of us in the field of highly parallel scientific computing recognize that it is often quite difficult to match the run time performance of the best conventional supercomputers. But since lay persons usually don’t appreciate these difficulties and therefore don’t understand when we quote mediocre performance results, it is often necessary for us to adopt some advanced techniques in order to deflect attention from possibly unfavorable facts.”
I got to meet the President of the Republic of Kenya, H.E. Uhuru Kenyatta, at the Nairobi Innovation Week and showcased the sign language to speech translation glove project to him. He is a jolly good fellow, very amiable and a pleasure to converse with. It was a really great experience.
To top it all off, the project won the overall award for the best innovation at the Nairobi Innovation Week. Continue reading →