A good read from Stephanie Overby at The Enterprisers Project.
How do edge computing and artificial intelligence (AI) work together? Why does edge fit well with AI? What are some use cases? Let’s examine what IT leaders should know.
For decades, artificial intelligence (AI) lived in data centers, where there was sufficient compute power to perform processor-demanding cognitive tasks. In time, AI made its way into software, where predictive algorithms changed the nature of how these systems support the business. Now AI has moved to the outer edges of networks.
“Put another way, edge computing brings the data and the compute closest to the point of interaction,” says Red Hat chief technology strategist E.G. Nadhan. Edge AI is a very real (and rapidly expanding) phenomenon, powering everything from smartphones and smart speakers to automotive sensors and security cameras.
In fact, says Dave McCarthy, research director within IDC’s worldwide infrastructure practice focusing on edge strategies, AI is “the most common workload” in edge computing. “As IoT implementations have matured,” he adds, “there has been an increased interest in applying AI at the point of generation for real-time event detection.”
Deloitte predicts that more than 750 million edge AI chips (specifically designed to perform or accelerate on-device machine learning) will be sold this year, with the enterprise market growing faster than its consumer counterpart at a compound annual growth rate of 50 percent over the next four years.
Enterprises will spend an average of 30 percent of their IT budgets on edge cloud computing over the next three years, according to “Strategies for Success at the Edge, 2019,” a report by Analysys Mason.
[ Why does edge computing matter to IT leaders – and what’s next? Learn more about Red Hat’s point of view. ]
As IT leaders consider where edge AI might fit into their own enterprise technology roadmap, here are some things we know now:
1. It’s important to begin at the beginning
If you haven’t already implemented an edge solution, you can’t leapfrog to edge AI. “The first step for most IT leaders today is in constructing a solution architecture that leverages edge computing in conjunction with a cloud backend,” says Seth Robinson, senior director of technology analysis at CompTIA. “Moving forward, integrating AI will be a critical step in managing the scale of edge solutions and building competitive advantage.”
2. Edge AI can address limitations of cloud-based AI
Latency, security, cost, bandwidth, and privacy are some of the issues associated with machine- or deep-learning tasks that edge AI – closer to data sources – can mitigate. Every time you ask Siri or Alexa or Google a question, for example, your voice recording is sent to an edge network where Google, Apple, or Amazon uses AI to translate voice to text to enable a command processor to generate an answer.
Without the edge, waiting seconds for a response would be commonplace. “The edge network allows for a pleasant user experience within the Doherty Threshold (less than 400 milliseconds),” says Stephen Blum, CTO and co-founder at PubNub. “Google, Apple, and Amazon have spent millions investing in their edge so that their AI can answer you quickly. To compete with the giants, the business needs to invest in edge AI.”
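Blum’s point can be made concrete with a little arithmetic. The sketch below sums per-stage latencies for a hypothetical cloud round trip versus an edge round trip and checks each against the roughly 400 ms Doherty Threshold he cites. All of the latency figures are illustrative assumptions, not measurements.

```python
# Compare hypothetical cloud vs. edge response paths against the
# ~400 ms Doherty Threshold for a pleasant user experience.
# Every latency figure below is an illustrative assumption.

DOHERTY_THRESHOLD_MS = 400

def total_latency(stages_ms):
    """Sum per-stage latencies (in milliseconds) for one request/response."""
    return sum(stages_ms)

# Assumed stages: network transit, speech-to-text inference, command processing
cloud_path_ms = [120, 180, 150]   # long haul to a distant data center
edge_path_ms = [15, 60, 50]       # nearby edge node handles inference

for name, path in [("cloud", cloud_path_ms), ("edge", edge_path_ms)]:
    ms = total_latency(path)
    verdict = "within" if ms <= DOHERTY_THRESHOLD_MS else "exceeds"
    print(f"{name}: {ms} ms ({verdict} the Doherty Threshold)")
```

Even with generous assumptions for the cloud path, the edge path is the only one that stays comfortably under the threshold.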
[ Get a shareable primer: How to explain edge computing in plain English.]
3. Only a portion of AI workflow happens at edge today
“AI edge processing today is focused on moving the inference part of the AI workflow to the device,” Omdia analysts explain in their Artificial Intelligence for Edge Devices report. The AI models themselves are usually trained in a central data center or cloud infrastructure using historical data sets, IDC’s McCarthy explains. Then those AI models can be deployed to the edge for local inferencing against current data.
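The train-centrally, infer-at-the-edge split McCarthy describes can be sketched in a few lines. In this toy example, a linear model is fit on historical data in the “data center,” and only the small learned weight vector is shipped to the “edge,” where inference runs locally against current readings. The model, data, and scale are all simplified assumptions.

```python
import numpy as np

# Toy illustration of the workflow split: training happens centrally on
# historical data; only the learned parameters travel to the edge device.

rng = np.random.default_rng(0)

# --- central data center: train on a historical data set ---
X_hist = rng.normal(size=(1000, 3))                  # historical feature rows
true_w = np.array([2.0, -1.0, 0.5])
y_hist = X_hist @ true_w + rng.normal(scale=0.1, size=1000)

w, *_ = np.linalg.lstsq(X_hist, y_hist, rcond=None)  # learned parameters

# --- edge device: local inferencing against current data ---
def edge_infer(weights, sample):
    """Run the deployed model locally; no cloud round trip required."""
    return float(np.dot(weights, sample))

current_reading = np.array([1.0, 2.0, 3.0])
print(edge_infer(w, current_reading))
```

Note how little actually crosses the network: the bulky historical data stays in the cloud, and the edge receives only a three-element weight vector.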
“Essentially, companies can train in one environment and execute in another,” says Mann of SAS. “The vast volumes of data and compute power required to train machine learning is a perfect fit for cloud, while inference or running the trained models on new data is a perfect fit for execution at the edge.” Model compression techniques that “enable squeezing large AI models into small hardware form factors” could push some training to the edge over time, notes Omdia.
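The model compression Omdia alludes to can take several forms; one common technique is post-training quantization, which stores float32 weights as int8 to shrink a model for small edge hardware. The sketch below uses a single symmetric scale factor, a deliberate simplification; real toolchains typically use per-channel scales and calibration data.

```python
import numpy as np

# Simplified post-training quantization: represent float32 weights as int8
# plus one scale factor, cutting the stored size by roughly 4x.

def quantize_int8(weights):
    """Map float weights onto int8 using a single symmetric scale."""
    scale = np.abs(weights).max() / 127.0
    q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    """Recover approximate float weights on the edge device."""
    return q.astype(np.float32) * scale

w = np.array([0.8, -1.27, 0.05, 0.33], dtype=np.float32)
q, scale = quantize_int8(w)

print(q.nbytes, "bytes instead of", w.nbytes)  # 4x smaller
print(dequantize(q, scale))                    # close to the originals
```

The trade-off is precision for footprint: each weight is now one of 255 levels, which is often accurate enough for inference while fitting far smaller memory budgets.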
[ Read also: How big data and AI work together. ]
4. Real-time learning at the edge will take time
“Real-time learning allows the AI to continuously evolve and improve during each interaction. For the AI to learn in real time, the matrix (AI brain) must allow training while also answering to your requests. Additionally, data learned must be synchronized with peering edges,” says Blum of PubNub. “This logistical challenge has led most networks to exclude real-time learning.”
When those challenges are overcome, however, that will open the door to even more advanced edge AI applications, Blum says.
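To see why Blum calls this a logistical challenge, consider what each edge node would have to do: keep learning from its own interactions (here, one stochastic-gradient step per sample) and then reconcile what it learned with its peers. The naive averaging step below is a stand-in assumption for real peer synchronization, which must also handle conflicts, ordering, and network partitions.

```python
import numpy as np

# Sketch of real-time learning at two edge nodes, followed by a naive
# peer synchronization (simple weight averaging). Real peering would
# need far more machinery; this only illustrates the moving parts.

def online_update(w, x, y, lr=0.1):
    """One real-time learning step on a single (x, y) interaction."""
    error = np.dot(w, x) - y
    return w - lr * error * x

def sync_peers(weight_sets):
    """Naive peer sync: average the weights learned at each edge node."""
    return np.mean(weight_sets, axis=0)

w_node_a = np.zeros(2)
w_node_b = np.zeros(2)

# Each node learns from its own local interactions...
w_node_a = online_update(w_node_a, np.array([1.0, 0.0]), 2.0)
w_node_b = online_update(w_node_b, np.array([0.0, 1.0]), 3.0)

# ...then the learned state must be reconciled across the network.
w_synced = sync_peers([w_node_a, w_node_b])
print(w_synced)
```

Even in this two-node toy, the nodes must agree on when and how to merge state while still answering requests, which is exactly the tension Blum describes.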
5. Edge AI is data-hungry
This whole process works only if you have enough data to build a statistically relevant model, says IDC’s McCarthy. “Many companies do not meet the minimum requirements, whether it be in terms of volume of historical data or the right kind of data to achieve the desired outcome.”
6. Start by getting your data house in order
Most organizations have not built comprehensive data management practices and do not have these types of data sets on hand, according to CompTIA’s Robinson. “In addition, modern AI is based more on probability than previous software programs. The risk of incorrect or nonsensical answers is higher, and that risk increases if the training data is incomplete or biased in any way. Rather than being able to quickly install an AI component and reap the benefits, companies need to start with a thorough examination of their data.”
In the meantime, says McCarthy, subject matter experts can encode business logic as a stand-in for data-based learning, applying it in real time against multiple data streams until the organization has amassed enough good data to take full advantage of AI.
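McCarthy’s interim approach can be as simple as explicit rules evaluated against each incoming reading. The sketch below runs a small rule set over a stream of sensor records; the thresholds and field names are illustrative assumptions, and a trained model could later replace the rule list without changing the surrounding plumbing.

```python
# Subject-matter expertise expressed as explicit business rules, applied
# in real time against a data stream. Thresholds and field names are
# illustrative assumptions, not real operating limits.

RULES = [
    ("overheat", lambda r: r["temp_c"] > 85.0),
    ("vibration", lambda r: r["vibration_mm_s"] > 7.1),
]

def check_reading(reading):
    """Return the names of every rule the reading violates."""
    return [name for name, rule in RULES if rule(reading)]

stream = [
    {"temp_c": 72.0, "vibration_mm_s": 2.3},
    {"temp_c": 91.5, "vibration_mm_s": 8.0},
]

for reading in stream:
    print(reading, "->", check_reading(reading) or "ok")
```

A useful side effect: every reading processed this way also becomes labeled historical data, which is exactly what the eventual machine-learning model will need.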
7. The cloud-to-edge architecture should be flexible and forward-looking
“As you define the architectures, make sure that you are designing for enterprise scale,” says Mann of SAS. “The cloud-to-edge architecture needs to support deployment of models, model changes over time, and transmission of data in a secure environment.”
Mann advises implementing architectures that are agnostic to chip sets, operating systems, and cloud providers to provide the greatest flexibility for sustained value over time.
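One common way to achieve the agnosticism Mann advises is to code the application against an abstract model-runtime interface, so a chip-specific or cloud-specific backend can be swapped in without touching business logic. The interface and the toy backend below are illustrative assumptions, not any vendor’s API.

```python
from abc import ABC, abstractmethod

# An abstraction boundary between the application and whatever inference
# backend (chip set, OS, cloud provider) happens to be underneath.

class ModelRuntime(ABC):
    """What the application depends on; never a vendor-specific API."""

    @abstractmethod
    def load(self, model_bytes: bytes) -> None: ...

    @abstractmethod
    def infer(self, features: list) -> float: ...

class ReferenceRuntime(ModelRuntime):
    """A portable pure-Python backend; a GPU/NPU backend would slot in here."""

    def load(self, model_bytes: bytes) -> None:
        # Assume the "model" is just comma-separated weights in this sketch.
        self.weights = [float(v) for v in model_bytes.decode().split(",")]

    def infer(self, features: list) -> float:
        return sum(w * x for w, x in zip(self.weights, features))

runtime: ModelRuntime = ReferenceRuntime()
runtime.load(b"2.0,-1.0,0.5")
print(runtime.infer([1.0, 2.0, 3.0]))
```

Because the application only ever sees `ModelRuntime`, deploying new models, updating them over time, or moving to different edge hardware becomes a backend change rather than a rewrite.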
While not all problems are fit for edge AI, says Mann, “all IT infrastructure and architectures should be designed to accommodate analytics at the edge as advanced use cases develop. It’s important that you have an environment that can support the deployment of analytics in the required location for real-time or batch processes.”
Hope you enjoyed it. If you did, you may want to follow this blog for more good finds.