On the Edge. Q&A with Dheeraj Remella

“The difference between real time and “near” real time is preventing 83% of frauds, preventing 100% of DDoS attacks, and increasing ARPU (Average Revenue Per User) by 275%.”

Q1. What are the key technical requirements of edge computing? 

As an umbrella term, edge computing means computing as close to the event source as is contextually meaningful. This is an important way to think about the paradigm because it’s not just about collecting data from the sources but also about using that data to drive important decisions and, in turn, to take action. 

So, a functional “edge” location depends on the situation and the application’s needs. If the data source or edge is far from the system taking action on that data, then the application can’t depend on low latency to function well or serve its central purpose. 

Along these lines, we can boil down our edge computing requirements into two categories: 

From the application standpoint:

  • The path from data source to decision-making should be the shortest route possible to ensure the lowest latency possible.
  • The data and decisions must be consistent with reality, i.e., decisions must not be made on stale or inaccurate data.
  • You need to be able to measure the KPIs that are important for the process automation objectives of the applications.

From the operational standpoint:

  • The data platform must be cloud-native and be able to operate in private, public, or hybrid environments.
  • The solution must provide mechanisms for ensuring business continuity and platform resiliency against catastrophic events.
  • The data must be protected from loss in the face of system failures.
  • Given the constrained capacity of edge data centers, the solution must simplify infrastructure needs.

Q2. Why do we need to rethink our definition of real time in this context? 

When people think of real time, the term gets applied to each point solution in the bigger picture such as real-time database, real-time stream processing, real-time rules application, and real-time KPI aggregations. 

But when these point solutions interact with each other to provide value to a business process, they fall short in the total process “real-timeness”. So, when we talk about real time in the context of edge processing, it’s not one or the other part of the process but the whole process. 

The difference between real time and “near” real time is preventing 83% of frauds, preventing 100% of DDoS attacks, and increasing ARPU (Average Revenue Per User) by 275%. These are the kind of results our customers have attained by using a unified platform that provides true, in-event/real-time contextual decisioning. To achieve these kinds of results, the right data must be used to make the right decisions to take the right actions in under 10 milliseconds. From the operational perspective, this unification yields 10x increased processing capacity and 30% improved network efficiency. 

Q3. What are the challenges of implementing what you call “intelligence near the edge”?

The very first challenge that comes to mind is identifying which use cases will benefit most by moving to the edge, i.e., closer to the source of events. The second challenge is identifying the best possible edge location for that particular use case to allow for the fastest possible data ingestion and decision. The third, but no less critical, challenge is building a solution that meets the low-latency requirement dictated by the use case’s automation SLAs while maintaining data accuracy, making the best possible decisions, and providing a comprehensive resiliency model that can handle a wide variety of failures.

Q4. Can you give us some examples of applications that require “across context” intelligence?

A few examples that immediately come to mind are:

  • Industrial process automation, where subsystems interact with each other as steps in a larger process. For example, one subsystem’s telemetry readings dictate what another subsystem must do to alleviate situations leading to unplanned downtime.
  • Wind farms, where all the windmills and their respective turbines at a particular location can be considered one ecosystem because they share meteorological conditions and perhaps the same manufacturer, model number, and manufacturing batch.
  • Solar power arrays, where detecting deterioration in the panels’ power output can indicate a general systemic problem in either the installation or the manufacturing process.

In general, a single stream of data from a single piece of equipment is usually very narrow and does not provide much in the way of meaningful information. But when you combine multiple streams of data into a single, macro, context-building platform, higher-level signals can manifest, leading to better and more comprehensive decisions.
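To make the cross-context idea concrete, here is a minimal Python sketch, assuming a hypothetical wind-farm scenario with made-up field names and thresholds; it is not any particular product’s API, only an illustration of folding per-turbine telemetry into one site-level context so the decision spans streams rather than a single device.

```python
from collections import defaultdict
from statistics import mean

# Latest vibration reading per turbine, grouped by site: site -> {turbine_id: value}
site_context = defaultdict(dict)

def ingest(site, turbine_id, vibration_mm_s):
    """Fold one telemetry event into the shared site-level context."""
    site_context[site][turbine_id] = vibration_mm_s
    return evaluate(site)

def evaluate(site):
    """Cross-context rule: a single noisy turbine is ignored, but an elevated
    site-wide average suggests a shared cause (weather, a bad manufacturing batch)."""
    readings = list(site_context[site].values())
    if len(readings) >= 3 and mean(readings) > 7.0:  # illustrative threshold
        return f"inspect site {site}: fleet-wide vibration {mean(readings):.1f} mm/s"
    return None
```

A single turbine spiking would not trip this rule, but a fleet-wide rise, which no individual stream reveals on its own, would.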

Q5. You mentioned that if intelligence needs to be outside the devices, the edge devices are becoming just “vanilla telemetry devices”. What do you mean by this?

Edge/IoT devices have historically been customized devices with built-in, firmware-level data processing. However, similar to the telco industry’s shift to software-enabled functions running on commodity hardware, edge devices also need agility and scalability. Traditional, specialized edge devices require a much longer time to reincorporate into a system after going down, and this extended downtime is highly detrimental to business operations. But if edge devices become simple telemetry devices, i.e., simple sensors reporting data to an intelligence layer that aggregates multiple telemetry streams in a software-enabled manner on commodity hardware, they will improve the agility of the system while significantly decreasing downtime.

Q6. You have said that there’s more that is expected near the edge than just a simple reduction of data set size. Can you please elaborate on that?

In the beginning, when data from the edge became of interest, it was collected mostly for analysis. But the amount of data generated at the edge is too massive to send to the cloud economically, hence the need for data thinning: collecting data at the edge and reducing the amount that goes to the cloud for analytics initiatives. A couple of ways this can be accomplished are de-duplication and sessionization. 
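As a rough illustration of data thinning, here is a hedged Python sketch of de-duplication and sessionization at the edge; the gap threshold, field names, and summary format are assumptions for illustration, not a prescribed design.

```python
# Hypothetical data-thinning sketch: de-duplicate repeated readings and roll
# events into sessions so only compact summaries are forwarded to the cloud.
SESSION_GAP_S = 30     # a silence this long closes the device's current session
_last_value = {}       # device_id -> last reading seen (for de-duplication)
_sessions = {}         # device_id -> {"start": ts, "end": ts, "count": n}

def thin(device_id, ts, value):
    """Return a closed-session summary to upload, or None to keep thinning locally."""
    # De-duplication: drop a reading identical to the previous one from this device.
    if _last_value.get(device_id) == value:
        return None
    _last_value[device_id] = value

    summary = None
    session = _sessions.get(device_id)
    if session and ts - session["end"] > SESSION_GAP_S:
        summary = {"device": device_id, **session}   # session closed: send upstream
        session = None
    if session is None:
        session = {"start": ts, "end": ts, "count": 0}
        _sessions[device_id] = session
    session["end"] = ts
    session["count"] += 1
    return summary
```

Only closed-session summaries leave the edge, which is what reduces the volume sent to the cloud.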

But systems and organizations have matured, and they are now starting to use edge data in their process automation and modernization efforts. This means the digital twins of yesterday are evolving from passive data collection points for analytics into active participants in the business processes controlling their respective physical twins. 

This paradigm shift will bring operational data and systems data together in the context of ML and AI, enabling continuously evolving decisions for better process automation. This change will eventually allow us to optimize the use of edge data for intelligent decisions because data centers will collate and synthesize edge data from various “edges” into meta information and learnings, but of course, this is a bit further down the road. For now, using edge data and insights to drive better automation is the immediate need.

Q7. You have also said that there is massive value hidden in moving from near real time to true, precise real time with accurate decisions. What do you mean by this?

Each use case tracks different KPIs. Some track revenue, some track process efficiency, and others track things like infrastructure utilization efficiency. For example, a DDoS (Distributed Denial of Service) prevention partner of ours is able to thwart 100% of DDoS attacks on their customers’ infrastructures by incorporating rules that detect DDoS patterns as they evolve in real time. 
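To illustrate what one such rule might look like, here is a generic sliding-window rate check in Python; it is not the partner’s actual detection logic, and the window length and request limit are made-up values.

```python
from collections import defaultdict, deque

WINDOW_S = 1.0        # sliding-window length in seconds (illustrative)
MAX_REQUESTS = 200    # illustrative per-source limit inside one window

_recent = defaultdict(deque)   # source_ip -> timestamps of recent requests

def on_request(source_ip, ts):
    """Return True if this request should be dropped or challenged."""
    window = _recent[source_ip]
    window.append(ts)
    while window and ts - window[0] > WINDOW_S:
        window.popleft()       # expire events that fell out of the window
    return len(window) > MAX_REQUESTS
```

Real deployments layer many such rules and continuously update thresholds and features, which is where the in-event, real-time decisioning matters.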

As another example, a telecom fraud prevention partner of VoltDB has successfully integrated ML insights into decisions made in VoltDB to handle and prevent national-scale frauds such as Wangiri and SIM-boxing. Again, as I mentioned earlier, in the customer management world this has led to a 2.75x increase in ARPU for one customer; in the credit card fraud prevention world, it has led to an 83% decrease in frauds while improving processing capacity by 10x for another customer. 

In the case of 5G charging and policy control, our partners have been able to help their telecom customers make their networks 30% more efficient. 

These are real-world benefits achieved by moving from near real-time processing to true real-time processing. 

That said, achieving results that truly allow an enterprise to get ahead of its competitors also comes down to the maturity of the organization and the intelligence they can incorporate into their automated decision-making. 

Qx. Anything else you wish to add?

If you would like to explore how your organization can take advantage of real-time, low-latency decision-making on edge data, without compromising on data accuracy or consistency, please feel free to reach out to me at dremella@voltdb.com, DM me on Twitter (@dremella), or message me on LinkedIn (https://www.linkedin.com/in/dremella/).

……………………………………………………..

Dheeraj Remella is the Chief Product Officer at VoltDB, responsible for technical OEM partnerships and enabling customers to take their next step in data-driven decision making. Dheeraj has been instrumental in each of our significant customer acquisitions. He brings 22 years of experience in creating enterprise solutions in a variety of industries. Dheeraj is a strong believer in cross-pollination of ideas and innovation between industries and technologies. Dheeraj holds a bachelor’s degree in computer engineering from Madras University.

Sponsored by VoltDB.
