Neural Compute Stick (NCS) for Neural Networks in Deep Learning

By the team at Turnkey Tech, Oct 16, 2019

This post is for readers who have implemented deep learning models. After reading it, you will know when and how to use the NCS for inference with deep learning models.

Once a DL model is trained and performs well enough on unseen data, we move it to deployment. In most cases deployment requires real-time inference (predictions), so speed is a major concern.

For real-time inferencing, we have two options:

  • The model sits on a remote machine; data is collected locally and sent over the network, the remote machine computes the predictions, and the results are fetched back over the network to the local machine.
  • The model, the data, and the computation are all local.

In the first option, latency depends mainly on network speed, since the remote machine can be as powerful as we can afford.

In the second option, prediction time depends on the local processor's speed. When inference runs locally, power consumption and size become major concerns, so we need a compact, power-efficient processing unit.
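To make the tradeoff in the first option concrete, here is a minimal Python sketch of remote inference: the local machine posts a captured frame to a prediction service and waits for the results. The endpoint URL, image file, and JSON response format are hypothetical placeholders; the point is that the measured latency is dominated by the network round trip.

import time
import requests

URL = "http://remote-gpu-server:8000/predict"  # hypothetical endpoint

# Read one locally captured frame as raw bytes.
with open("frame.jpg", "rb") as f:
    payload = f.read()

start = time.time()
# Ship the frame to the remote machine and wait for its predictions.
resp = requests.post(URL, data=payload,
                     headers={"Content-Type": "application/octet-stream"})
predictions = resp.json()
# The measured time includes network transfer both ways plus remote compute.
print("round-trip latency: %.3f s" % (time.time() - start))
print(predictions)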

What is NCS?

The NCS is a USB plug-in device used for neural network inference. Its processing unit is called a VPU (Vision Processing Unit). The stick is power efficient, and we can plug it into a desktop or a Raspberry Pi and start real-time inference.

Why should you use NCS?

  • The NCS packs a lot of compute into a small power budget: the latest version, the Intel Neural Compute Stick 2 (NCS2), performs up to 4 trillion operations per second with a maximum power consumption of 1 watt.
  • Data transported over a network can leak; the NCS processes data locally and keeps it safe.
  • The NCS can also be plugged into a Raspberry Pi, and the complete system remains space efficient.

Limitations of NCS:

  • As the name Vision Processing Unit suggests, it works only on visual data, not textual data.

Prerequisites for using Intel NCS2:

  • An x86_64 computer with the Ubuntu 16.04 operating system (desktop edition) installed.
  • Intel® Neural Compute Stick 2 (Intel® NCS 2)
  • An internet connection to download and install the Intel® Distribution of OpenVINO™ toolkit.

Installation steps for NCS2:

Download the Intel® Distribution of OpenVINO™ toolkit, and then return to this page.

Run the following commands in a terminal to install the OpenVINO toolkit.

cd ~/Downloads
tar xvf l_openvino_toolkit_[VERSION].tgz
cd l_openvino_toolkit_[VERSION]
./install_cv_sdk_dependencies.sh
./install_GUI.sh

Run the following commands to configure the Neural Compute Stick USB driver (udev rules).

cd ~/Downloads
cat <<EOF > 97-usbboot.rules
SUBSYSTEM=="usb", ATTRS{idProduct}=="2150", ATTRS{idVendor}=="03e7", GROUP="users", MODE="0666", ENV{ID_MM_DEVICE_IGNORE}="1"
SUBSYSTEM=="usb", ATTRS{idProduct}=="2485", ATTRS{idVendor}=="03e7", GROUP="users", MODE="0666", ENV{ID_MM_DEVICE_IGNORE}="1"
SUBSYSTEM=="usb", ATTRS{idProduct}=="f63b", ATTRS{idVendor}=="03e7", GROUP="users", MODE="0666", ENV{ID_MM_DEVICE_IGNORE}="1"
EOF
sudo cp 97-usbboot.rules /etc/udev/rules.d/
sudo udevadm control --reload-rules
sudo udevadm trigger
sudo ldconfig
rm 97-usbboot.rules

Run the following to install the Model Optimizer prerequisites (TensorFlow, NumPy, etc.).

cd /opt/intel/computer_vision_sdk/deployment_tools/model_optimizer/install_prerequisites
sudo ./install_prerequisites.sh

Plug the Neural Compute Stick 2 into a USB port on your computer, then run the following commands in a new terminal window to test that the installation succeeded:

cd /opt/intel/computer_vision_sdk/deployment_tools/demo
sudo ./demo_squeezenet_download_convert_run.sh -d MYRIAD

Inference from a TensorFlow model on NCS2:

A TensorFlow model consists of two files (.pb + .config): the former contains the network along with its weights, and the latter contains the pipeline configuration (hyperparameters). A quick way to inspect the .pb is shown after the download step below.

Download a sample TensorFlow model (SSD MobileNet v1 trained on COCO).

cd ~/Downloads
wget http://download.tensorflow.org/models/object_detection/ssd_mobilenet_v1_coco_2018_01_28.tar.gz
tar xvf ssd_mobilenet_v1_coco_2018_01_28.tar.gz
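Before converting, you can sanity-check the downloaded frozen graph. Here is a minimal sketch, assuming TensorFlow 1.x (the version this model was exported with), that parses the .pb and lists a few node names:

import tensorflow as tf

# Parse the frozen graph (.pb) into a GraphDef.
graph_def = tf.GraphDef()
with tf.gfile.GFile(
        "ssd_mobilenet_v1_coco_2018_01_28/frozen_inference_graph.pb", "rb") as f:
    graph_def.ParseFromString(f.read())

# Print the node count and the first few ops to confirm the graph loaded.
print(len(graph_def.node), "nodes")
for node in graph_def.node[:5]:
    print(node.op, node.name)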

Convert the TensorFlow model to the OpenVINO Intermediate Representation (IR), a pair of files: .xml (network topology) and .bin (weights). The --data_type FP16 flag matters here because the Myriad VPU inside the NCS2 runs models in half precision.

cd /opt/intel/computer_vision_sdk/deployment_tools/model_optimizer
python3 mo_tf.py \
    --input_model ~/Downloads/ssd_mobilenet_v1_coco_2018_01_28/frozen_inference_graph.pb \
    --tensorflow_use_custom_operations_config /opt/intel/computer_vision_sdk/deployment_tools/model_optimizer/extensions/front/tf/ssd_support.json \
    --tensorflow_object_detection_api_pipeline_config ~/Downloads/ssd_mobilenet_v1_coco_2018_01_28/pipeline.config \
    --data_type FP16

Activate the OpenVINO environment.

source /opt/intel/computer_vision_sdk/bin/setupvars.sh

Run inference on the NCS2. In the command below, -d MYRIAD targets the VPU, -i cam takes input from the webcam, and -pt 0.6 sets the detection probability threshold.

cd /opt/intel/computer_vision_sdk/deployment_tools/inference_engine/samples/python_samples
python3 object_detection_demo_ssd_async.py \
    -m /opt/intel/computer_vision_sdk/deployment_tools/model_optimizer/frozen_inference_graph.xml \
    -i cam -pt 0.6 -d MYRIAD
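The demo script handles camera capture and drawing for you. If you want to call the converted model from your own code, here is a minimal sketch using the Inference Engine Python API shipped with this OpenVINO release (IENetwork/IEPlugin; later releases replaced IEPlugin with IECore). The image filename is a placeholder, and the IR paths assume the conversion step above:

import cv2
from openvino.inference_engine import IENetwork, IEPlugin

# Load the IR produced by mo_tf.py (topology + weights).
net = IENetwork(model="frozen_inference_graph.xml",
                weights="frozen_inference_graph.bin")

# Target the NCS2; the MYRIAD plugin runs the FP16 model on the VPU.
plugin = IEPlugin(device="MYRIAD")
exec_net = plugin.load(network=net)

input_blob = next(iter(net.inputs))
out_blob = next(iter(net.outputs))
n, c, h, w = net.inputs[input_blob].shape

# Preprocess one frame: resize to the network input and switch HWC -> CHW.
frame = cv2.imread("test.jpg")  # placeholder image
image = cv2.resize(frame, (w, h)).transpose((2, 0, 1)).reshape((n, c, h, w))

result = exec_net.infer(inputs={input_blob: image})
# Each SSD detection row: [image_id, label, confidence, xmin, ymin, xmax, ymax]
for det in result[out_blob][0][0]:
    if det[2] > 0.6:
        print("label %d, confidence %.2f" % (int(det[1]), float(det[2])))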
