Leveraging AI at the edge is challenging at best. Part of the challenge comes from connectivity, part from environmental constraints, and part from the capabilities of the available hardware. We see many IoT solutions trying to push AI all the way down to the sensors themselves, or all the way up into the cloud. We think there is a better solution.
The current landscape
At the end of the day, hardware is what drives things. It is the interface between software and the real world, be it networks, robots, or sensors. And of course those interfaces can’t exist without CPUs, memory, and persistent storage to support operating systems, embedded software, and/or AI models.
Several SoC and embedded system designs have been adding AI-enabled processing capabilities in the form of linear algebra accelerators (e.g. tensor processing units, or TPUs), which has kicked off an arms race to push inference (and sometimes even training) directly onto the sensing device. Doing so reduces connectivity requirements, improves security, and lowers latency for the affected systems. However, on-device inference often discards the surrounding context, which can cause upstream systems to suffer from generation loss: each layer consumes a summary of a summary rather than the original signal. The increase in capabilities also tends to increase cost. While this is an incredible technical feat, we don’t feel it’s the endgame for leveraging AI in IoT environments.
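To make the pattern concrete, here is a minimal sketch of what on-device inference typically looks like, using the tflite-runtime interpreter with a quantized model. The model file, tensor shapes, and class semantics are illustrative placeholders, not any particular vendor’s artifacts.

```python
# Hypothetical sketch: running a quantized classifier directly on the
# sensing device with tflite-runtime. The model path and shapes below
# are placeholders for illustration only.
import numpy as np
from tflite_runtime.interpreter import Interpreter

interpreter = Interpreter(model_path="anomaly_classifier_int8.tflite")
interpreter.allocate_tensors()

input_detail = interpreter.get_input_details()[0]
output_detail = interpreter.get_output_details()[0]

def classify(window: np.ndarray) -> int:
    """Run one inference on a window of raw sensor samples."""
    sample = window.astype(input_detail["dtype"])
    interpreter.set_tensor(input_detail["index"], sample)
    interpreter.invoke()
    scores = interpreter.get_tensor(output_detail["index"])
    return int(np.argmax(scores))

# Only the class label leaves the device; the raw window is discarded.
# That discarded window is precisely the context upstream systems lose.
label = classify(np.zeros(input_detail["shape"], dtype=np.int8))
```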
On the other hand, some vendors are focusing on purely cloud-based solutions. They can offer nearly unlimited storage, compute power, and connectivity with high availability (HA) and higher-level abstractions of the monitored systems. These vendors want to manage a fleet of IoT devices, pushing code and pulling data as needed, and adding support for complex MLOps pipelines to their offering. However, there is a very real risk of vendor lock-in once a solution has been crafted using their devices, their networks, and their cloud for storage and compute. On top of that, every device must be connected at all times, which brings networking complexity, cost, and security concerns. While this might work for some use cases involving retail electronics, we don’t see this as the final solution to IoT challenges either.
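The cloud-centric pattern reduces the device to a thin client that ships every raw reading upstream. A rough sketch of that loop follows; the broker host, topic, and read_sensor() function are assumptions standing in for a vendor’s managed stack.

```python
# Hypothetical sketch of the cloud-centric pattern: all raw data is
# pushed to a remote broker over MQTT. Hostnames and topics are
# placeholders, and read_sensor() stands in for a real driver.
import json
import time
import paho.mqtt.client as mqtt

def read_sensor() -> float:
    return 0.0  # stand-in for an actual sensor read

client = mqtt.Client()  # paho-mqtt 1.x constructor; 2.x also needs a callback API version
client.connect("broker.example.com", 1883)  # must be reachable at all times
client.loop_start()

while True:
    reading = {"ts": time.time(), "value": read_sensor()}
    # Every sample crosses the network; lose the link and you either
    # lose data or must buffer locally until connectivity returns.
    client.publish("fleet/device-42/telemetry", json.dumps(reading))
    time.sleep(1.0)
```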
Where things are headed
At Zaggy AI, we feel there is a happy middle ground that keeps many of the pros, and avoids many of the cons, of the current camps of AI at the edge. Our philosophy is that hardware matters: where it is, how capable it is, and at what cost. There is no one-size-fits-all solution when it comes to an organization’s requirements for edge AI, and as such we develop solutions that can run on the beefiest custom-designed lab research boxes all the way down to the smallest Raspberry Pi Pico – but always one level up from the sensors themselves.
By developing hierarchical AI solutions for sensor fusion, we are able to retain context at the levels of your systems that matter, without forcing your systems to conform to a given architecture. This gives you the flexibility to select the sensors and hardware that make sense for your organization, and to apply AI where it delivers the most value. It also eliminates unnecessary connectivity, reduces latency, and improves security. At the end of the day, it gives you what you need, where you need it, at a cost that matches your requirements.
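As a simplified illustration of the hierarchical idea, the sketch below fuses windows from several raw sensor streams one level up from the sensors, so the model sees cross-sensor context before anything leaves the site. The sensor names, feature choices, and the infer() placeholder are assumptions for illustration, not our production pipeline.

```python
# Minimal sketch of hierarchical sensor fusion at a gateway: raw
# streams are windowed and fused before inference, preserving context
# that per-sensor, on-device inference would discard.
from collections import deque
from statistics import fmean

WINDOW = 16
streams = {name: deque(maxlen=WINDOW) for name in ("vibration", "temp", "current")}

def ingest(name: str, value: float) -> None:
    """Append one raw reading from the named sensor."""
    streams[name].append(value)

def fuse() -> list[float] | None:
    """Align the latest windows across sensors into one feature vector."""
    if any(len(buf) < WINDOW for buf in streams.values()):
        return None  # not enough context yet
    features = []
    for buf in streams.values():
        features.append(fmean(buf))           # per-sensor mean
        features.append(max(buf) - min(buf))  # per-sensor range
    return features

def infer(features: list[float]) -> str:
    # Placeholder for a real model running on the gateway; the point is
    # that it sees all three sensors at once, not a single stream.
    return "anomaly" if features[1] > 5.0 else "nominal"  # vibration range
```

Because fusion happens one hop from the sensors, only the fused verdict (or a compact summary) needs to travel upstream, and the raw context stays available locally for as long as it is useful.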