Followers of TMN will know that we have long tracked the ways in which companies have competed to build capabilities that can give operators a live, holistic view of network state, application and service performance, and user experience.
As the story evolved it also became about using that data to feed closed-loop, automated network optimisation systems. Further still, the goal is to build analytics platforms that can filter data and feed relevant outcomes to orchestrators and controllers within the SDN-NFV architecture: verifying and validating that changes have been made, updating and monitoring network topology, and tracking application performance.
The latest round of acquisitions takes note of the need to assure performance where network functions are virtual, where operators are operating hybrid legacy and virtual networks, and where they are also changing the shape and capabilities of their radio networks. There’s also a looming need to provide “slice assurance” – essentially SLAs for network slices.
In this space operator adoption and willingness to try new entrants has contrasted with the underlying network infrastructure space
That journey has involved a great deal of acquisition and consolidation, and challenged the traditional “monitoring” and Network Management System (NMS) providers to adapt to virtualised and hybrid infrastructure.
Along the way it has given birth to new entrants, many winning interesting and large deals with T1 operators. Indeed in this space operator adoption and willingness to try new entrants has contrasted with the underlying network infrastructure space. Perhaps that’s a result of these systems having interfaces to IT and to marketing and Customer Experience management.
One company that has moved well beyond its original realm of test and monitoring as it sought to address the changing landscape has been EXFO. Last year at MWC its Founder and Executive Chairman Germain Lamonde was tying up final details on the acquisition of Ontology Systems. This year the acquisitive Canadian was putting the finishing touches on a deal to bring French service assurance and network optimisation company Astellia into the fold. So what is behind the deal-making?
It gives us the technology to create a new offering that is in line with the requirement of future telco
“We believe that as telcos are evolving networks to 5G and IoT, and to NFV-SDN, the challenge will be all about how we can drive efficiency in the network. Eventually that will lead to automation but to build that you need a wide base of knowledge of the network. Ontology made sense and it now makes even more sense in the context of Astellia,” Lamonde said.
“It gives us the most advanced capabilities for next gen smart networks, giving us scale with the customer base but also the technology to create a new offering that is in line with the requirement of future telcos.”
We can be the provider of the Hadoop data lake, on top of which we can create applications such as RAN optimisation or customer experience management
What Lamonde saw in Astellia was a combination of RAN analytics and optimisation with a proven vProbe capability to provide assurance within NFV deployments. That would give the ability to pinpoint problems with VNFs, and to feed orchestrators the right information to close the loop between the current status of the network and its desired state, helping drive network automation.
“The way they built their NOVA analytics is incredible – a fundamentally strong Hadoop layer on top of which they create the app layer in a very open environment. So we can be the provider of the Hadoop data lake, on top of which we can create applications such as RAN optimisation or customer experience management. That’s really how operators can change to the SOC (Service Operations Centre).
“With NOVA complementary to Xtract, and Ontology providing the real time capability, we will have the ability to bring to market a vision and solution to create an automation loop to build smart networks and provide customers with the optimal experience.”
EXFO is also excited about matching the geo-location capabilities of NOVA Ran with its own C-RAN fronthaul monitoring solution, SkyRAN. SkyRAN helps identify the source of cell site problems by analysing RF signals on optical links in Cloud RAN deployments; NOVA Ran can then add more granular subscriber intelligence at each cell site.
Another Canadian company with a very recent acquisition in this area was Accedian. Accedian instruments a network with test agents, which can be embedded in network equipment as very small form factor plugins, or as virtual agents. It then brokers the data produced by these agents, and can orchestrate their operation automatically, to give operators a means to view network performance.
It very recently bought Performance Vision – the announcement clearing in mid-February – adding an application-centric view of performance to the characterisation of the network it builds through its SkyLIGHT solution.
Accedian wants us to think in terms of the ability to monitor network slices with an app-aware feedback loop, because monitoring slices requires the ability to assure performance directly into industry vertical environments.
In the virtual network environment, “performance management is a necessity, rather than something it is nice to have for marketing”, said new CSO and CMO Richard Piasentin (previously of Viavi and JDSU). “Monitoring network slices means the feedback system has to be app aware.”
Accedian’s claim is that Performance Vision’s metadata analysis and storage architecture means it can keep track of every network flow and application transaction at scale, across all network traffic.
Accedian CEO Patrick Ostiguy said at the time of the acquisition, “The combination of Performance Vision and Accedian creates a proposition that is truly unique. There is no other company that is able to offer this level of accuracy and granularity into how the performance of the network and the applications running over it impact the end-user experience, in real-time, for enterprises of all sizes.”
Would you bet against Accedian? The company has a strong foothold with operators such as Jio in India and SK Telecom. Those deployments, often in virtualised instances, will have given it the insight it needs to plug the gaps around assuring network slices for industry verticals and enterprises.
The ability to structure big data analytics has given another new entrant into this space a strategic role within a Tier One operator in a short space of time. That player is Cardinality.
Cardinality offers a way to structure the analysis of datasets taken from any data source and produce that for a number of different use cases. Its pre-defined “out of the box” use cases include Cell Experience, Customer Experience, Marketing Insights and Operational Intelligence modules.
At Telefonica UK, its flagship public customer, it currently loads 10TB of data a day – well over 50 billion rows, or upwards of half a million a second.
Cardinality CEO and Co-Founder Steve Bowker contrasts that with Verizon’s entire big data deployment: Verizon has ten times the data personnel of Telefonica UK and many more subscribers, yet generates only marginally more data rows a day (60 billion).
The deployment has grown to a national footprint, with three to four expansion orders taking it from an initial use case of minimising drive testing to supporting Telefonica UK’s NOC-to-SOC transformation. (An echo here of Lamonde’s claim for EXFO.) The engine calculates and regularly reports a matrix of some 250 KPIs for every subscriber in the network.
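Cardinality’s actual KPI definitions aren’t public, but the core idea – reducing billions of raw event rows to a per-subscriber KPI vector, computed on a regular cycle – can be sketched minimally. The event fields and KPI names below are invented for illustration:

```python
from collections import defaultdict

# Hypothetical raw event rows: (subscriber_id, event_type, value).
# A real deployment ingests billions of such rows per day.
events = [
    ("sub-001", "call_drop", 1),
    ("sub-001", "throughput_mbps", 42.0),
    ("sub-001", "throughput_mbps", 38.0),
    ("sub-002", "call_drop", 0),
    ("sub-002", "throughput_mbps", 12.5),
]

def compute_kpis(rows):
    """Aggregate raw events into a per-subscriber KPI dict.
    A production matrix would hold ~250 KPIs per subscriber."""
    acc = defaultdict(lambda: {"call_drops": 0, "tp_sum": 0.0, "tp_n": 0})
    for sub, kind, value in rows:
        if kind == "call_drop":
            acc[sub]["call_drops"] += value
        elif kind == "throughput_mbps":
            acc[sub]["tp_sum"] += value
            acc[sub]["tp_n"] += 1
    return {
        sub: {
            "call_drops": a["call_drops"],
            "avg_throughput_mbps": a["tp_sum"] / a["tp_n"] if a["tp_n"] else 0.0,
        }
        for sub, a in acc.items()
    }

kpis = compute_kpis(events)
print(kpis["sub-001"])  # {'call_drops': 1, 'avg_throughput_mbps': 40.0}
```

At Telefonica UK’s scale the same reduction would of course run as a distributed job rather than a single loop, but the shape – raw rows in, one KPI record per subscriber out – is the same.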
Cardinality also lets Telefonica Group’s own data scientists in Spain experiment and work on its UK platform, potentially building their own use cases. One example is the creation of churn management algorithms. “They have run their own algorithms on our data and produced their own churn prediction. We developed our system using open APIs so that partners can use them anywhere.”
It solves a big problem with big data which is that you can get data into your data lake but you often can’t get it out.
So how does it work? The Cardinality platform combines a process for ingesting data very efficiently with a quick and flexible method for writing the data to memory, using Apache Spark as the data interrogation enabler. Data can be ingested from network elements, probes and DPI, CDRs, sensors and geo-location feeds. It is then parsed, enriched and analysed before being written into memory, from where it can be accessed by AI modules that act as specific applications. For example, the “Cell Experience” solution analyses cell-level performance per subscriber, technology, service and frequency band.
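The described flow – ingest, parse, enrich, analyse, write to an in-memory store that applications query – can be sketched as a chain of small stages. Everything here (field names, the reference-data join, the coverage rule) is illustrative, not Cardinality’s actual API:

```python
# Toy sketch of the described pipeline shape:
# ingest -> parse -> enrich -> analyse -> write to memory.

def parse(raw_line):
    # e.g. a CSV-ish probe record: "cell_id,subscriber,rsrp_dbm"
    cell, sub, rsrp = raw_line.split(",")
    return {"cell": cell, "subscriber": sub, "rsrp_dbm": float(rsrp)}

def enrich(record, cell_bands):
    # Join in reference data, e.g. the frequency band serving each cell.
    record["band"] = cell_bands.get(record["cell"], "unknown")
    return record

def analyse(record):
    # Derive a simple quality flag from signal strength (invented threshold).
    record["poor_coverage"] = record["rsrp_dbm"] < -110
    return record

def run_pipeline(raw_lines, cell_bands, store):
    for line in raw_lines:
        store.append(analyse(enrich(parse(line), cell_bands)))

store = []  # stand-in for the in-memory layer that applications query
run_pipeline(
    ["cellA,sub-001,-95.0", "cellB,sub-002,-115.0"],
    {"cellA": "1800MHz", "cellB": "800MHz"},
    store,
)
print(store[1]["poor_coverage"])  # True
```

A “Cell Experience”-style application would then be a query over that store, grouped by cell, band, service and subscriber.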
“We can open the source, read it once, then that can be streamed and written to any of four different storage methods: NoSQL, a relational database, Hadoop or cache. Or if an operator is not on Hadoop we can use an operator’s existing big database,” Bowker said.
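The “read once, then stream to any of four storage methods” pattern Bowker describes is essentially a single parse fanned out to multiple sinks. A minimal sketch, with list-backed stand-ins for the real NoSQL, relational, Hadoop and cache targets:

```python
# Read-once, fan-out sketch: each record is parsed a single time,
# then streamed to every configured sink. Sink names are stand-ins.

class ListSink:
    """Minimal sink stand-in; a real one would wrap a DB or HDFS client."""
    def __init__(self, name):
        self.name = name
        self.rows = []

    def write(self, record):
        self.rows.append(record)

def ingest_once(lines, sinks):
    for line in lines:
        record = tuple(line.split(","))   # parse exactly once...
        for sink in sinks:                # ...then fan out to every sink
            sink.write(record)

sinks = [ListSink(n) for n in ("nosql", "relational", "hdfs", "cache")]
ingest_once(["a,1", "b,2"], sinks)
print(len(sinks[0].rows), len(sinks[3].rows))  # 2 2
```

The point of the design is that the expensive step – opening and decoding the source – happens once per record, however many storage backends an operator wants populated.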
It is the Apache Spark data processing that allows the very fast interrogation of the data lakes that Cardinality creates.
Bowker: “For Hadoop you need programming skills and the use of SQL-like interfaces to Hadoop. It’s more efficient to leverage Apache Spark. It solves a big problem with big data, which is that you can get data into your data lake but you often can’t get it out.” For its users, Cardinality has a GUI that allows personnel to query data in a drag-and-drop manner.
In the future, Bowker says he plans to develop use cases using the same APIs to start looking at network slicing. “We could start doing intelligent decisions around slicing in the network to see where demand is going up and down and change the structure of the network dynamically.”
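The closed loop Bowker sketches – watch per-slice demand, then change the network’s structure dynamically – reduces, at its simplest, to a scaling decision per slice. The thresholds and slice names below are invented for illustration:

```python
# Hypothetical closed-loop rule for demand-driven slice scaling.
# Utilisation thresholds and slice names are illustrative only.

def slice_action(utilisation, high=0.8, low=0.3):
    """Return a scaling decision for one slice from its utilisation (0..1)."""
    if utilisation > high:
        return "scale_up"
    if utilisation < low:
        return "scale_down"
    return "hold"

demand = {"iot-metering": 0.15, "embb-video": 0.92, "enterprise-vpn": 0.55}
decisions = {name: slice_action(u) for name, u in demand.items()}
print(decisions["embb-video"])  # scale_up
```

In practice the decision inputs would be the platform’s per-slice KPIs rather than a single utilisation figure, and the actions would be handed to an orchestrator rather than printed.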
That takes the company into the realm of predictive analytics, which is the next chapter for the evolving story of network big data.