On Cloud-based AI. Q&A with James Kobielus

Q1. What is cloud-based AI?

Cloud-based artificial intelligence (AI) refers to intelligent applications that are built, trained, deployed, and used in cloud-computing environments, including the Internet of Things (IoT).

AI is rapidly being incorporated into diverse applications throughout the cloud, especially on embedded, mobile, and IoT platforms. In the coming one to two years, the majority of new application-development projects will involve building AI-driven intelligence for deployment to IoT edge devices, where it will perform various levels of local sensor-driven inferencing.

Q2. Is AI really becoming the brain driving cloud-native applications?

Yes. Software developers are embedding AI microservices to imbue cloud-native applications with data-driven machine-learning intelligence. Local inferencing is the foundation of all modes of edge-based AI applications, including various degrees of autonomous operation, augmented human decisioning, and actuated environmental contextualization. This inferencing will encompass the full range of decisions that may be required of edge devices, including high-speed correlation, prediction, classification, recognition, differentiation, and abstraction based both on sensor-sourced machine data and on data acquired from clouds, hub gateways, and other nodes.
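To make local inferencing concrete, here is a minimal sketch in Python, assuming a pre-trained TensorFlow Lite classifier already deployed to the device (the model file name and the shape of the sensor window are hypothetical):

```python
import numpy as np
from tflite_runtime.interpreter import Interpreter

# Load a pre-trained classifier onto the edge device
# (the file name is illustrative).
interpreter = Interpreter(model_path="anomaly_classifier.tflite")
interpreter.allocate_tensors()
input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

def classify(sensor_window: np.ndarray) -> int:
    """Run one local inference over a window of sensor readings."""
    interpreter.set_tensor(input_details[0]["index"],
                           sensor_window.astype(np.float32))
    interpreter.invoke()
    scores = interpreter.get_tensor(output_details[0]["index"])
    return int(np.argmax(scores))  # index of the most likely class
```

The same pattern generalizes to the other decision types: the device runs a compact, pre-trained model against local sensor data and reaches back to the cloud only when deeper analysis is needed.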

Q3. There are not yet widely adopted standard practices for incorporating AI into rich, cloud-native applications. What are the best practices for building, training, and deploying AI in cloud-native applications?

Developers are rapidly automating every last step of the AI development pipeline. More comprehensive automation is key to developing, optimizing, and deploying AI-based applications at enterprise scale. Data scientists and other developers will be swamped with unmanageable workloads if they don’t begin to offload many formerly manual tasks to automated tooling.

Every step of the AI pipeline—from preprocessing the data and engineering features through building and evaluating the model—is intricate. Connecting these steps into an end-to-end DevOps pipeline can easily cause the details and dependencies to grow unmanageably complex. Scaling up the pipeline to sustain production of a high volume of high-quality models can magnify the delays, costs, bottlenecks, and other issues in one's existing ML-development workflow.
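To illustrate the chaining involved, here is a minimal sketch using scikit-learn's Pipeline, which makes the dependency between preprocessing and modeling explicit (the data here is synthetic):

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

# Synthetic stand-in for real training data.
X, y = make_classification(n_samples=500, n_features=10, random_state=0)

# Chaining preprocessing and modeling into one object keeps the
# dependencies explicit and reproducible end to end.
pipeline = Pipeline([
    ("scale", StandardScaler()),
    ("model", LogisticRegression(max_iter=1000)),
])
print(cross_val_score(pipeline, X, y, cv=5).mean())
```

Even this toy pipeline couples two stages; a production pipeline adds data validation, retraining triggers, and deployment gates, which is where the complexity described above accumulates.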

Automated ML refers to an emerging development best practice that accelerates the process of developing, evaluating, and refining machine learning, deep learning, and other statistically based AI models. These tools use various approaches—including but not limited to specialized ML models themselves—to automatically sort through a huge range of alternatives relevant to development and deployment of ML models in application projects. The tools help data scientists to assess the comparative impacts of these options on model performance and accuracy. And they recommend the best alternatives so that data scientists can focus their efforts on those rather than waste their time exploring options that are unlikely to pan out.
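As a simple illustration of this automated-search idea, the sketch below uses scikit-learn's RandomizedSearchCV to evaluate candidate model configurations by cross-validation and surface the best-performing one. Full automated-ML tools also search over feature pipelines and model architectures, but the principle is the same (the data and search ranges here are illustrative):

```python
from scipy.stats import randint
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import RandomizedSearchCV

# Synthetic stand-in for real training data.
X, y = make_classification(n_samples=1000, n_features=20, random_state=0)

search = RandomizedSearchCV(
    RandomForestClassifier(random_state=0),
    param_distributions={
        "n_estimators": randint(50, 500),
        "max_depth": randint(2, 20),
    },
    n_iter=25,            # how many alternatives to try
    cv=5,                 # cross-validate each candidate
    scoring="accuracy",   # the metric used to compare them
    random_state=0,
)
search.fit(X, y)

# The "recommended" alternative the data scientist can start from.
print(search.best_params_, search.best_score_)
```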

In a cloud-native computing environment, deployment of AI microservices is also being automated. This increasingly involves containerizing and orchestrating them dynamically within lightweight interoperability fabrics. Each containerized AI microservice would typically expose an independent, programmable RESTful API, which enables it to be easily reused, evolved, or replaced without compromising interoperability. Each microservice may be implemented using different programming languages, algorithm libraries, cloud databases, and other enabling back-end infrastructure.
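A minimal sketch of one such microservice, assuming a scikit-learn-style model serialized with pickle and using Flask for the REST layer (the file, route, and payload names are illustrative):

```python
import pickle

import numpy as np
from flask import Flask, jsonify, request

app = Flask(__name__)

# Hypothetical serialized model baked into the container image.
with open("model.pkl", "rb") as f:
    model = pickle.load(f)

@app.route("/predict", methods=["POST"])
def predict():
    # Expects a JSON body such as {"features": [0.2, 1.7, ...]}.
    features = np.array(request.get_json()["features"]).reshape(1, -1)
    return jsonify({"prediction": model.predict(features).tolist()})

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8080)
```

Packaged into a container image, this service can be scaled, replaced, or rolled back by the orchestrator independently of every other microservice, which is what makes the RESTful contract so valuable.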

For all of this to interoperate seamlessly in complex AI applications, enterprises are increasingly deploying back-end middleware fabrics that provide reliable messaging, transactional rollback, and long-running orchestration, with container orchestration handled by platforms such as Kubernetes. Interactions among distributed AI microservices are increasingly stateless and event-driven within serverless computing fabrics.
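In a serverless fabric, each such interaction reduces to a stateless handler invoked once per event. A generic sketch of the shape this takes (the handler signature is illustrative, not any one provider's API):

```python
import json

def handle_event(event: dict) -> dict:
    """Stateless handler: all needed context arrives in the event itself."""
    payload = json.loads(event["body"])
    score = score_reading(payload["reading"])
    return {"statusCode": 200, "body": json.dumps({"score": score})}

def score_reading(reading: list) -> float:
    # Placeholder for a real model call; kept trivial for the sketch.
    return sum(reading) / len(reading)
```

Because the handler keeps no state between invocations, the fabric can scale it out or replace it freely, which is what suits it to the event-driven interaction pattern described above.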

Q4. Can AI help migrate legacy, monolithic apps to cloud-native architectures?

Not really. This process typically involves refactoring those apps into modular components and recoding them as services for containerization in Docker, orchestration with Kubernetes, and deployment to other cloud-native environments. So far, I don't see any tools that leverage AI to aid in application refactoring. It's still a highly complex process that requires expert human judgment.

James Kobielus

Jim is Wikibon’s Lead Analyst for Data Science, Deep Learning, and Application Development. Previously, Jim was IBM’s data science evangelist. He managed IBM’s thought leadership, social and influencer marketing programs targeted at developers of big data analytics, machine learning, and cognitive computing applications. Prior to his 5-year stint at IBM, Jim was an analyst at Forrester Research, Current Analysis, and the Burton Group. He is also a prolific blogger, a popular speaker, and a familiar face from his many appearances as an expert on theCUBE and at industry events.
