07/08/2023

TinyML means machine learning (ML) on tiny, low-powered, low-cost computers, giving them the capability to perform on-device (‘on-board’) analytics of vision, audio and speech.

TinyML would upend the current architecture of the Internet of Things (IoT), which supports ‘swarms’ of relatively ‘dumb’ sensor devices embedded in our daily lives. We would go from a ‘data-enabled’ physical environment to a ‘smart’ physical environment: a ‘fridge that can think’ lurking in your kitchen!

Current IoT systems

‘Traditional’ IoT systems utilise large fleets of edge devices deployed in the physical environment, such as soil sensors, to gather data which is transmitted back to a cloud-based central processor (now AI-enabled) for analysis: IoT devices are literally the ‘eyes, ears and touch’ of AI in the physical world.

In this traditional IoT architecture, the IoT edge devices, because they need to be low cost, robust and long-lived, have low computing power and memory, and hence low power requirements (and limited demands on battery life). For example, the majority of IoT edge devices operate at clock speeds between 10–1000 MHz (the higher the clock speed, the more processing power) and have less than 1 MB of onboard flash memory, which will not support complex learning models at the edge.

This traditional IoT architecture has its drawbacks:

  • it depends on consistent and good quality connectivity between the fleet of IoT edge devices and the cloud;
  • as data is centralised, it exacerbates data privacy concerns and security risks;
  • it requires substantial computing and power resources in the cloud to analyse and store the data; and
  • it involves delay in the round trip between the IoT edge device, the cloud and the returned instructions, which could have disastrous consequences where a rapid response is needed, such as urgent adjustments to the flow rate of drugs into a patient.

Hence the efforts to integrate ML capabilities into the IoT edge devices themselves: ‘intelligence’ onboard the sensor itself. One commentator compares cloud-based ML under the traditional IoT architecture with TinyML at the network edge as follows:

“As per the battery life, TinyML outperforms ordinary ML techniques as the models can run on embedded devices. Cost efficiency is better in TinyML as only one microcontroller is required compared to a PC. Scalability is higher in ordinary ML applications as more computing power is available. Robustness is higher on TinyML deployments as in the case when a node is removed, all information remains intact while on the ML case is server-side based. The deployment is better on ML models as there are many paradigms available online and more widely used. The performance metric is higher on ML cases as TinyML technology has emerged and there are not much models. Lastly, security is higher on TinyML deployments as the information remains within the embedded device and there are no exchanges between third parties.”

TinyML would not necessarily operate as a substitute for cloud-based services, but would be part of a more decentralised, and more robust, ML system. For example, a body sensor may have enough ML capability to work out whether it can diagnose and solve a problem from the patient data it collects, or whether the issue ‘is above my pay grade’ and should be escalated to the cloud-based AI.
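As a rough illustration of this division of labour, the pattern can be as simple as a confidence threshold: the on-device model acts on what it is confident about and escalates everything else. The Python sketch below is a hypothetical, simplified example; the model logic, the threshold value and the send_to_cloud function are placeholders rather than any particular vendor’s API.

```python
import numpy as np

CONFIDENCE_THRESHOLD = 0.85  # hypothetical cut-off for acting locally


def run_onboard_model(sensor_reading: np.ndarray) -> tuple[str, float]:
    """Stand-in for a tiny on-device classifier.

    Returns a label and a confidence score. In a real TinyML deployment
    this would be a small quantised model running on the microcontroller.
    """
    # Dummy logic for illustration only.
    score = float(np.clip(sensor_reading.mean(), 0.0, 1.0))
    label = "normal" if score < 0.5 else "anomaly"
    confidence = abs(score - 0.5) * 2  # crude proxy for certainty
    return label, confidence


def send_to_cloud(sensor_reading: np.ndarray) -> str:
    """Placeholder for escalating the raw data to a cloud-based model."""
    return "cloud-diagnosis-pending"


def handle_reading(sensor_reading: np.ndarray) -> str:
    label, confidence = run_onboard_model(sensor_reading)
    if confidence >= CONFIDENCE_THRESHOLD:
        return label                          # confident enough to act locally
    return send_to_cloud(sensor_reading)      # 'above my pay grade': escalate


if __name__ == "__main__":
    print(handle_reading(np.array([0.90, 0.95, 0.97])))  # handled on-device
    print(handle_reading(np.array([0.50, 0.55, 0.60])))  # escalated to the cloud
```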

Potential uses of TinyML

First, patchy mobile coverage is holding back the deployment of digital agriculture. PlantVillage, an open-source project managed by Penn State University, has created Nuru, an artificial intelligence-based program that can function on mobile phones without internet connectivity, and is deploying it in Africa to help farmers identify and respond to hazards for cassava crops, a key food source for hundreds of millions.

Second, data processing ‘on-board’ IoT devices will enable uses which otherwise would be impractical, if not dangerous, because of the delay (‘latency’) involved if the IoT device had to transmit the data to the cloud, wait for it to be processed and receive the instruction back. Use cases under development include:

  • current hearing aid devices amplify all of the sounds in the user’s surrounding space, which can make it difficult for a person to distinguish key sounds, such as speech, in a noisy environment. TinyML could provide a solution to this problem by embedding a speech enhancement algorithm in the hearing aid device itself, which performs a ‘de-noising’ operation across the input sounds and extracts the speech signal (a simplified sketch of the idea follows this list).
  • a sign language translation device mounted in a watch could detect American Sign Language, converting the images of the hand signs into text displayed on the screen of the watch. The device uses an ARM Cortex-M7 microcontroller with only 496 KB of frame-buffer RAM. While there are challenges to be resolved in detecting new cases and operating in visually complex environments, the current device achieved 74.59% accuracy in real-world conditions.
  • a gesture recognition device that could be attached to an existing cane to be used by the visually impaired.
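To make the hearing aid example above more concrete, one very simplified form of ‘de-noising’ is spectral subtraction: estimate the noise spectrum from a stretch of audio assumed to contain no speech, then subtract it from every frame. The NumPy sketch below illustrates the idea only; it is not the algorithm any particular hearing aid uses, and the frame size and noise-frame count are assumed values. Real embedded implementations are far more sophisticated and heavily optimised.

```python
import numpy as np

FRAME = 256  # samples per analysis frame (assumed value for illustration)


def spectral_subtraction(audio: np.ndarray, noise_frames: int = 8) -> np.ndarray:
    """Very simplified speech enhancement by magnitude spectral subtraction.

    The first `noise_frames` frames are assumed to contain noise only and are
    used to estimate the noise spectrum, which is subtracted from every frame.
    """
    n_frames = len(audio) // FRAME
    frames = audio[: n_frames * FRAME].reshape(n_frames, FRAME)

    spectra = np.fft.rfft(frames, axis=1)
    magnitude = np.abs(spectra)
    phase = np.angle(spectra)

    # Estimate the noise floor from the leading, speech-free frames.
    noise_estimate = magnitude[:noise_frames].mean(axis=0)

    # Subtract the noise estimate, flooring at zero to avoid negative magnitudes.
    cleaned_magnitude = np.maximum(magnitude - noise_estimate, 0.0)

    # Re-synthesise each frame using the original phase.
    cleaned = np.fft.irfft(cleaned_magnitude * np.exp(1j * phase), n=FRAME, axis=1)
    return cleaned.reshape(-1)


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    t = np.arange(FRAME * 40) / 16_000.0
    speech = 0.5 * np.sin(2 * np.pi * 440 * t)   # stand-in 'speech' tone
    speech[: FRAME * 8] = 0.0                    # leading noise-only stretch
    noisy = speech + 0.05 * rng.standard_normal(t.shape)
    enhanced = spectral_subtraction(noisy)
    print("input RMS:", np.sqrt(np.mean(noisy ** 2)))
    print("output RMS:", np.sqrt(np.mean(enhanced ** 2)))
```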

Third, TinyML will allow ‘close-at-hand’ monitoring and maintenance work on in-field equipment. Ping, an Australian company, has developed a TinyML device that continuously monitors the acoustic signature of wind turbine blades to detect and notify any change or damage, using advanced acoustic analysis.

Lastly, TinyML can be inserted into manufacturing processes to manage and adjust machinery. Perhaps less consequentially for humanity, TinyML can result in better roasted coffee. It is critical to identify the ‘first crack’ in any beans since the time spent roasting after the first crack has a major impact on the quality and flavour of the processed beans. Two Norwegian businesses, Roest and Soundsensing, have added a microcontroller with TinyML in their bean roasting equipment to more quickly identify that first crack.

Conquering the challenges of TinyML

The fundamental problem is that ML/AI and IoT device design have been heading in opposite directions: algorithms have become dependent on vastly increasing data inputs, while IoT devices have been designed to consume ever lower levels of energy (which reinforces their limited computing capacity). As one commentator has said:

“Let’s take an example to understand this better. GPT-3 has 175 billion parameters and got trained on 570 GB of text. On the other hand, we have Google Assistant that can detect speech with a model that’s only 14 KB. So, that can fit on something as small as a microprocessor, but GPT-3 cannot.”

TinyML depends on the intersection of three trends:

  • low power innovations for embedded devices;
  • increases in microprocessor capabilities; and
  • optimizing ML algorithms to make them less resource intensive.

On the first trend, last year we discussed the development of IoT edge devices which could supplement or replace battery power by harvesting energy ‘from thin air’, as it were, through two techniques:

  • RF energy harvesting: transmitters, such as mobile base stations, radiate power in propagating electromagnetic waves. RF energy harvesting converts this radio frequency energy into direct current (RF-to-DC). The energy can be stored in a unit such as a capacitor, or it can be used directly to drive sensors, logic circuits and digital chips.
  • Backscattering communication: this allows an IoT device to transmit data by reflecting and modulating an incident RF wave (i.e. one transmitted by another source which the IoT device picks up). In a way, backscattering communication is similar to radar technology: part of an electromagnetic wave is reflected when it reaches the surface of an object. The difference is that the IoT device reshapes or ‘modulates’ the reflected transmission to carry the data it has collected (essentially, the outbound transmission ‘piggybacks’ on the reflection of the inbound transmission).

On the second trend, there have been steady, large gains in the performance of typical microcontroller processors. In 2004, Arm introduced the Cortex-M 32-bit processor family, which helped create a powerful new generation of low-cost microcontrollers. The Cortex-M4 processor added hardware floating point support and the ability to perform multiple integer calculations in a single instruction, which has made it easier to run the complex calculations that machine learning algorithms require. A more recent development is the introduction of the Ethos Neural Processing Unit (NPU), which allows ML algorithms to run on small microcontrollers with around a 480 times performance boost.

The third development, ‘scaling down’ algorithms to operate within the power and memory constraints of IoT edge devices, is where most current TinyML research is focused. There are a number of different approaches.

First, ‘federated learning’ involves creating an ML model from decentralised data in a distributed way, as follows:

“a variety of edge devices collaborate so as to build a global model using only local copies of the data and then each device downloads a copy of the model and updates the local parameters. Finally, the central server aggregates all model updates and proceeds with the training and evaluation without exchanging data to other parties.”
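In its simplest form this is ‘federated averaging’: each device trains on its own local data and sends back only updated model parameters (never the raw data), and the server averages those parameters into a new global model. The sketch below simulates one such setup with a toy linear model in NumPy; it is an illustration of the idea under simplified assumptions, not a production federated learning framework.

```python
import numpy as np


def local_update(weights: np.ndarray, X: np.ndarray, y: np.ndarray,
                 lr: float = 0.1, epochs: int = 5) -> np.ndarray:
    """One client's local training: a few gradient steps on its own data only."""
    w = weights.copy()
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)   # gradient of mean squared error
        w -= lr * grad
    return w


def federated_round(global_weights: np.ndarray, clients: list) -> np.ndarray:
    """Server-side step: aggregate client updates, weighted by local dataset size."""
    updates = [local_update(global_weights, X, y) for X, y in clients]
    sizes = np.array([len(y) for _, y in clients], dtype=float)
    return np.average(updates, axis=0, weights=sizes)


if __name__ == "__main__":
    rng = np.random.default_rng(42)
    true_w = np.array([2.0, -1.0])

    # Three simulated edge devices, each holding its own private data.
    clients = []
    for n in (30, 50, 20):
        X = rng.standard_normal((n, 2))
        y = X @ true_w + 0.1 * rng.standard_normal(n)
        clients.append((X, y))

    w = np.zeros(2)                      # initial global model
    for _ in range(10):                  # repeated federated rounds
        w = federated_round(w, clients)
    print("learned weights:", w)         # should approach [2, -1]
```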

Second, ‘transfer learning’ is where a machine learning model developed for one task is reused as the starting point for a model for a second task. By drawing on this pre-trained or existing ‘learned experience’, the ML model can be adapted to the new task with relatively less data, less training effort and therefore less computing power. A leading transfer learning tool is TensorFlow Lite for Microcontrollers, Google’s open-source program, which it describes as follows:

“The intuition behind transfer learning for image classification is that if a model is trained on a large and general enough dataset, this model will effectively serve as a generic model of the visual world. You can then take advantage of these learned feature maps without having to start from scratch by training a large model on a large dataset.”

Models in the TensorFlow Lite framework can be adapted to a new task with just a few lines of code, and can then be deployed onboard IoT edge devices.
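As a rough sketch of what those ‘few lines’ can look like, the example below uses the Keras API to reuse a pre-trained MobileNetV2 as a frozen feature extractor, adds a small classification head for a hypothetical two-class task, and converts the result into a TensorFlow Lite flat buffer. The dataset and class count are placeholders, and deploying the converted file on a microcontroller would additionally require the TensorFlow Lite for Microcontrollers C++ runtime (and usually integer quantisation), which is omitted here.

```python
import tensorflow as tf

# 1. Reuse a model pre-trained on ImageNet as a generic visual feature extractor.
base = tf.keras.applications.MobileNetV2(
    input_shape=(96, 96, 3), include_top=False, weights="imagenet"
)
base.trainable = False  # freeze the 'learned experience'

# 2. Add a small task-specific head (here: a hypothetical two-class problem).
model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(2, activation="softmax"),
])
model.compile(
    optimizer="adam",
    loss="sparse_categorical_crossentropy",
    metrics=["accuracy"],
)

# 3. Train only the new head on the (placeholder) task-specific dataset.
# model.fit(train_images, train_labels, epochs=5)

# 4. Convert to a TensorFlow Lite flat buffer for on-device deployment.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]  # apply default size optimisations
tflite_model = converter.convert()
with open("model.tflite", "wb") as f:
    f.write(tflite_model)
```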

Conclusion

TinyML could supercharge IoT, allowing IoT devices to take intelligent decisions in the field (sometimes quite literally). However, there is still the challenge of squeezing machine intelligence onto the head of a pin.

Industry standards, as in IoT generally, are desperately needed to rein in the often chaotic heterogeneity of software, devices and power requirements. An industry association has been set up to facilitate the emergence of standards.

Read more: TinyML: Tools, Applications, Challenges, and Future Research Directions

""