Service assurance solutions touting big data and AI capabilities are falling short of the mark – the answer may be small data, says a new Netrounds white paper.
By: John C. Tanner
See the article on Disruptive Asia here.
Big data and artificial intelligence are often touted by vendors these days as a one-two punch that will fix just about everything wrong with communications service providers. Collect enough data about your network, customers, processes, etc., feed it into an AI-powered analytics platform, and use the results to improve efficiencies, performance, customer experience, and so on.
The service assurance business is no exception – plenty of service assurance providers now talk up the benefits of big data and AI, and promise everything from real-time service health info to virtual assistants handling the customer service desk. But how realistic are those claims? According to a new white paper from Netrounds [PDF], that depends in large part on how well the vendor understands how both big data and AI actually work.
So, for example, it’s important to understand the difference between data analytics and data science, or the difference between machine learning and deep learning. It’s also important to understand what counts as ‘big data’ – which the white paper defines as high-volume data that is both unstructured and complex – and whether the data is high-quality.
This is one of the key points Netrounds wants to get across: a common misunderstanding of big data is that you can get the results you want simply by having enough of it. That’s only true if the data in question is also relevant and high-quality. In fact, you’ll get better results with a smaller amount of high-quality data than with a huge amount of low-quality data.
There’s a similar misconception with AI technologies like machine learning – throw enough data at it and the answers will magically appear. That’s not really how it works – if you want AI to provide comprehensive answers, you need to be able to ask the right questions, and you need to be specific.
The paper goes into detail about the roles of big data and AI in the context of service assurance, what questions should be answered by service assurance systems and what kinds of data can be pulled from the network as input to provide those answers.
The challenge is that many current service-assurance systems struggle to obtain high-quality service-related data on network services, whether they use traditional infrastructure-centric assurance tools or big data and AI technologies. Put simply, they’re missing a lot of relevant data, particularly service status data and service KPIs.
The white paper argues that a key missing piece of the puzzle is active testing and monitoring, whose function is to verify that services are actually delivering the level of quality promised. Active testing and monitoring exercises services on the data plane across all network layers (from Layer 2 to Layer 7), and also tests from the customer’s point of view, so you can see how customers are actually experiencing the service. According to Netrounds, once you add that data to the service assurance stack, you have the data needed to answer your service-quality compliance questions directly.
And here’s the thing: it’s not even big data. It’s ‘small data’ that provides direct value as explicit answers to critical service assurance questions. Put another way, you don’t need big data or AI to get those answers – the small data from active testing and monitoring will do nicely.
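To make the "small data" idea concrete, here is a minimal sketch of how a handful of active-test samples can yield an explicit compliance verdict. All names, thresholds, and the SLA structure are illustrative assumptions, not Netrounds' actual product or API:

```python
# Hypothetical sketch: a few KPI samples from active tests ("small data")
# are enough to answer a service-quality compliance question directly.
from dataclasses import dataclass
from statistics import mean

@dataclass
class SlaTarget:
    max_latency_ms: float  # illustrative SLA threshold
    max_loss_pct: float    # illustrative SLA threshold

def assess_service(latency_samples_ms, lost, sent, sla):
    """Return an explicit pass/fail verdict plus the KPIs behind it."""
    loss_pct = 100.0 * lost / sent
    avg_latency = mean(latency_samples_ms)
    compliant = (avg_latency <= sla.max_latency_ms
                 and loss_pct <= sla.max_loss_pct)
    return {"avg_latency_ms": avg_latency,
            "loss_pct": loss_pct,
            "compliant": compliant}

# Four latency samples and a loss count from an active probe — small data,
# but it yields a direct yes/no answer to the compliance question.
verdict = assess_service([18.2, 21.5, 19.8, 22.1], lost=1, sent=100,
                         sla=SlaTarget(max_latency_ms=30.0, max_loss_pct=2.0))
print(verdict["compliant"])
```

The point of the sketch is that no analytics pipeline sits between measurement and answer: the probe measures the KPIs the question actually asks about, so the verdict falls straight out.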
But that’s not to say big data and AI have no role in service assurance – according to Netrounds, small data can drastically improve the quality and relevance of the input data for big data and AI projects.