By Dr. Stefan Vallin, Director of Product Strategy, Netrounds

I am a boring old engineer who wants to make sure things work before making promises to my customers. When I hear hefty product or solution claims, I like to understand, or at least test, the technology before buying – or praising.

No one in the telecom industry has missed the big hope that Big Data and Artificial Intelligence (AI) will come to the rescue and improve the efficacy of the service assurance systems currently available in the service provider domain. These systems today promise everything from real-time service health information, inferred from low-level resource event data (the “Big Data magic wand”), to digital customer service assistants (the “AI magic wand”). The level of hopeful expectation, and the corresponding lack of proof for these claims, causes me some concern.

Big Data is essentially nothing more than a toolset used to handle large volumes of data in various formats. AI and Machine Learning, on the other hand, are well-defined mathematical techniques and algorithms. However, when marketing the power of these technologies in our industry, people tend to transform Big Data and AI into a magic wand wielded by none other than Gandalf the White (or perhaps a magic staff: a much-debated topic on J.R.R. Tolkien fan boards).

If you read any textbook on these topics, two prerequisites are emphasized for the successful implementation of Big Data and AI projects:

  1. High quality and relevant input data
  2. Formulation of a specific question to be answered

Within our industry, though, these requirements have been conveniently adjusted to: “Throw enough data at the problem and the answers will magically appear.”

The conclusion drawn by some seems to be that throwing more syslog and traps, dressed up as telemetry data, into a big data lake will result in an accurate portrayal of network service health as experienced by the customer. In addition, the root cause of any problems will be pinpointed and fixed. Unfortunately, this is not true.

As Jon Ross, COO of Openet, said in an article published by Disruptive Asia, “Yet even the very largest US carriers are simply getting no return on their investments in analytics. They are not even dealing with ‘big data lakes, more like big data swamps’”1.

Now, if you are an engineer interested in a specific metric, you pick up a suitable measurement instrument and measure it directly whenever you can. The same applies to network service KPIs: you can actively measure them rather than trying to derive them from low-level, resource-centric data. This is important for two reasons:

  1. No magic is needed. You can gather end-to-end service KPIs, like MOS, latency, response times, and throughput, directly from the active measurement with known precision, cost, and effort. We call this Small Data. It can be used to directly answer some of the fundamental service assurance questions, such as “are we meeting the level of service quality that we promised?”
  2. With active measurements and Small Data, you also get the missing data input for Big Data, AI, and Machine Learning. You now know that you have the relevant service KPIs in your big data lake, and you also have the necessary training data for your Machine Learning algorithms.
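To make the first point concrete, here is a minimal sketch of how a batch of actively measured, end-to-end latency samples can directly answer “are we meeting the level of service quality that we promised?” This is my own illustration, not Netrounds product code: the function name, sample values, and SLA threshold are all hypothetical stand-ins for what an active measurement agent generating synthetic test traffic would report.

```python
# Hypothetical sketch: checking actively measured latency KPIs against an SLA.
# Sample values and the 20 ms threshold are illustrative only.

def evaluate_sla(latency_ms_samples, p95_threshold_ms):
    """Check a batch of active latency measurements against an SLA target.

    Returns (p95_latency_ms, sla_met).
    """
    ordered = sorted(latency_ms_samples)
    # Nearest-rank 95th percentile: a small, cheap computation with known
    # precision -- "Small Data" rather than a data-lake inference.
    idx = max(0, round(0.95 * len(ordered)) - 1)
    p95 = ordered[idx]
    return p95, p95 <= p95_threshold_ms

# One outlier probe (55.2 ms) is enough to show the promised service
# quality is not being met at the 95th percentile.
samples = [12.1, 11.8, 13.0, 12.4, 55.2, 12.0, 11.9, 12.2, 12.6, 12.3]
p95, ok = evaluate_sla(samples, p95_threshold_ms=20.0)
print(f"p95 latency: {p95} ms, SLA met: {ok}")
```

The same labeled KPI samples, paired with timestamps and service identifiers, are exactly the kind of high-quality training data the second point says Machine Learning algorithms are otherwise missing.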

I have written a white paper that looks at service assurance from the data perspective and in the context of Big Data and AI. The paper takes a step back and looks at the overall goals of service assurance – do we have the right data to achieve our service assurance goals? I also provide some down-to-earth definitions of Big Data and AI and outline the types of service assurance systems currently available.

After reviewing the white paper, Anil Rao, Principal Analyst and leader of the Service Assurance and Fulfilment practice at Analysys Mason, wrote, “I agree with the Big Data versus Small Data argument. The connection between data quality, AI and service assurance is important for service providers to understand when developing their service assurance strategies. High quality ‘Small Data’ generated from active testing and monitoring significantly bolsters the efficacy of service assurance, and underpins the Big Data and AI projects that are instituted to achieve the operational and customer experience goals.”2

I will close this blog with one of my favorite metaphors for the Big Data conundrum. It comes from a textbook by Power and Heavin.3 We even made a lovely cartoon to illustrate the story for you. Enjoy!

Analyzing big data […] reminds us of the story of the small boy who woke up on Christmas morning to find a huge pile of horse manure in the living room by the Christmas tree instead of presents. His parents discover him happily and enthusiastically digging in the manure. They ask “What are you doing son?” The boy exclaims, “With all this manure, there must be a pony here somewhere!”

 

References:

1: https://disruptive.asia/overhyped-technology-2017-spoiled-choice/

2: Email correspondence with Anil Rao regarding Service Assurance – In Need of Big Data or Small Data? white paper

3: Power, D. and Heavin, C. (2017). Decision Support, Analytics, and Business Intelligence, Third Edition. New York: Business Expert Press. ISBN 978-1-63157-392-7.