AI on the Edge

Ishnu
4 min read · Mar 14, 2021

With the advancement of IoT and Edge Computing, we can now move AI and ML closer to the data

In recent years, Cloud Computing has allowed us to find more efficient and useful methods of processing large amounts of data. Currently, we utilize cloud servers, e.g. Google's or Amazon's, to train artificial intelligence (AI) and machine learning (ML) models and return predictions. While this is an efficient and fast option, an alternative is Edge AI.

Edge AI brings ML to where the data is being produced, running the algorithms locally on the machines collecting the data. This reduces bandwidth and latency issues, allowing for real-time processing and operations. It can also save power and data-communication costs, as less data needs to be transmitted.

The concept of Edge computing is becoming popular, especially as the Internet of Things (IoT) grows and single-board computers become more powerful. The Raspberry Pi and NVIDIA Jetson Nano are just two examples of powerful single-board computers. Users are looking to apply ML and AI in more remote locations.

Advantages

Processing data away from cloud servers, and instead closer to where the data is produced (on the edge of the system), can benefit many industries. Remote locations with no WiFi and limited LTE/cellular coverage, such as remote oil and gas wells, are an ideal scenario. For example, at these well sites, data is collected continuously and sent to the cloud, where it can be run through models to predict equipment failures and schedule maintenance. Traditionally, data would be sent to the cloud by cellular, or by satellite in extreme locations; however, by instead processing the data at the well, only predictions and alerts need to be transmitted, saving bandwidth. This type of predictive maintenance is starting to be implemented on some oil and gas wells.

Sending raw data to the cloud has security implications as well. Many companies and government entities require their data to remain local. This can be accomplished by running the data through pre-trained models on smaller-scale local hardware.

Speed is another significant advantage of Edge computing. A great example of this is self-driving cars, which generate an enormous amount of data continuously and need to process it on the spot to determine whether to react. In situations like this, where a split-second decision is required, the latency involved in communicating with the cloud is not acceptable.

The ultimate case of AI on the edge is the Mars rovers. Communication between Earth and a rover can take 22 minutes for a round trip, so it would be inefficient for the rover to send data to Earth, wait for a response, and then react. Instead, the rover that landed in 2021 (Perseverance) was able to scan and analyze the ground as it was descending, determine an appropriate landing site, and adjust its descent to land where it was safest.

Industrial Applications

The first step in deploying an AI model on edge hardware is to collect the data. For remote oil and gas wells, installations require the data engineer to work with the local maintenance team to determine the equipment to be monitored and the data required and available. Once enough data has been acquired, a model can be trained. This trained model can then be deployed to a local machine monitoring all the sensors. If it detects an anomaly, or predicts maintenance will be required sooner than scheduled, it can send out warnings. The data stays off the cloud; the maintenance team can manually download it when they physically visit the site.
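As a rough sketch of what that local monitoring loop might look like, the snippet below runs sensor readings through a TensorFlow Lite model and transmits only an alert. The read_sensors stub, the model file name, the output shape, and the alert threshold are all hypothetical placeholders, not taken from any real deployment:

    import time
    import numpy as np
    from tflite_runtime.interpreter import Interpreter  # pip install tflite-runtime

    THRESHOLD = 0.8  # assumed probability above which we raise a maintenance alert

    def read_sensors():
        # Hypothetical stand-in for polling the well-site sensors.
        # A real deployment would read pressure, temperature, vibration, etc.
        return np.random.rand(1, 8).astype(np.float32)

    # Load the pre-trained model that was shipped to the site (name is a placeholder).
    interpreter = Interpreter(model_path="failure_model.tflite")
    interpreter.allocate_tensors()
    input_idx = interpreter.get_input_details()[0]["index"]
    output_idx = interpreter.get_output_details()[0]["index"]

    while True:
        reading = read_sensors()
        interpreter.set_tensor(input_idx, reading)
        interpreter.invoke()
        # Assumes the model outputs a single failure probability.
        failure_prob = float(interpreter.get_tensor(output_idx)[0][0])

        if failure_prob > THRESHOLD:
            # Only this small alert leaves the site; the raw data stays local.
            print("ALERT: predicted failure probability %.2f" % failure_prob)

        time.sleep(60)  # poll once a minute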

The same process of running a trained model on a local system can be used for any industrial application where a company doesn't want to risk its current production data.

Small-Scale Applications

Smaller-scale implementations are also common for less data-intensive applications, or where increased portability is needed. These are being integrated into our everyday life as IoT. Smart devices are showing up in every facet of our daily lives, with internet connectivity and integration with cloud services such as Google and Amazon.

The NVIDIA Jetson Nano is NVIDIA's single-board computer designed for AI development. It can run TensorFlow, and NVIDIA provides several libraries for deep learning and computer vision, among others.
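As a quick illustration (a minimal smoke test, assuming NVIDIA's JetPack build of TensorFlow is installed), standard Keras code runs unchanged on the Nano, and TensorFlow schedules the work on the onboard GPU:

    import numpy as np
    import tensorflow as tf

    # On a Jetson Nano, NVIDIA's TensorFlow build should list the onboard GPU here.
    print(tf.config.list_physical_devices("GPU"))

    # Standard Keras code needs no changes to run on the device.
    model = tf.keras.applications.MobileNetV2(weights="imagenet")
    frame = np.random.rand(1, 224, 224, 3).astype(np.float32)  # stand-in for a camera frame
    preds = model.predict(frame)
    print(tf.keras.applications.mobilenet_v2.decode_predictions(preds, top=3)[0])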

Image Credit: Benchmarking TensorFlow Lite on the New Raspberry Pi 4, Model B by Alasdair Allan

The Raspberry Pi can run Python quite easily. It has many accessories to connect it to any type of sensor you could imagine, and it can run TensorFlow Lite at decent speeds.
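In the spirit of the benchmark referenced above, here is a sketch of how you might time TensorFlow Lite inference on a Pi; the model file name is a placeholder, and the tflite-runtime package is assumed to be installed:

    import time
    import numpy as np
    from tflite_runtime.interpreter import Interpreter  # pip install tflite-runtime

    # Placeholder model file; any image classifier converted to .tflite works here.
    interpreter = Interpreter(model_path="mobilenet_v2.tflite", num_threads=4)
    interpreter.allocate_tensors()
    inp = interpreter.get_input_details()[0]

    # Feed a dummy input of the right shape and dtype, then time repeated runs.
    dummy = np.random.rand(*inp["shape"]).astype(inp["dtype"])
    interpreter.set_tensor(inp["index"], dummy)

    runs = 50
    start = time.perf_counter()
    for _ in range(runs):
        interpreter.invoke()
    elapsed = (time.perf_counter() - start) / runs
    print("average inference time: %.1f ms" % (elapsed * 1000))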

Espressif ESP-EYE

Google has even produced TensorFlow Lite for Microcontrollers. This can run on small, inexpensive devices such as the Arduino Nano 33 BLE Sense or the Adafruit Circuit Playground Bluefruit.
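Deploying to boards like these starts on a regular computer: a trained model is converted into the compact TensorFlow Lite format, usually with quantization so it fits in a microcontroller's memory. A minimal sketch, using a small untrained Keras model as a stand-in for a real one:

    import tensorflow as tf

    # Stand-in for a small model you have already trained.
    model = tf.keras.Sequential([
        tf.keras.layers.Dense(16, activation="relu", input_shape=(4,)),
        tf.keras.layers.Dense(1, activation="sigmoid"),
    ])

    converter = tf.lite.TFLiteConverter.from_keras_model(model)
    converter.optimizations = [tf.lite.Optimize.DEFAULT]  # quantize to shrink the model
    tflite_model = converter.convert()

    with open("model.tflite", "wb") as f:
        f.write(tflite_model)

    # Microcontrollers have no file system, so the flatbuffer is typically embedded
    # as a C array, e.g.: xxd -i model.tflite > model_data.cc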

My personal favourite is the Espressif ESP-EYE, a 32-bit microcontroller with a camera, mic, and WiFi. With this, you can build a camera with person detection, or a wildlife camera that detects and counts different animals, deployed in a remote location where a satellite link might be the only way to transmit data.

These smaller devices are bringing AI to applications where space, power, and connectivity are limited.

Summary

While there are still situations where Cloud computing is ideal, Edge computing continues to gain popularity as the need for local computing power grows. Smaller, portable computing devices keep advancing, with many more options available for our AI needs. It will be interesting to see whether, with continued advancements and improvements, Edge computing devices become more prominent in our everyday lives.
