A change in how we see the network – getting virtual and active
One theme emerging at Mobile World Congress was the need to rethink how network data is collected and managed.
The vision is broadly this – use virtualisation techniques to move data capture from centralised physical appliances to a distributed, virtualised instrumentation of the network that can intelligently provide information flows to where they are required.
That is the vision. How to get there is more problematic.
NetScout was re-presenting its newly merged self following its acquisition of Tektronix Communications, Fluke and Arbor from Danaher. Broadly speaking, the company has kept the TrueCall RAN analysis platform that Tektronix Comms had gained via its own prior acquisition of Newfield Wireless. It has also kept hold of elements of Tektronix Communications’ own Iris Session Analyser suite, combining that with its nGeniusOne for a combined service assurance capability.
Director of Product Management John English said that the key focus will be providing assurance, instrumentation and analytics that can make sense of the volume of information coming from the network. The company’s virtual probes, based on COTS hardware, will be able to support “new, top-down modern workflow approaches” that curate data where it is collected and forward only the data that is essential, while retaining the ability to go deep into forensic session analytics where required.
“You can’t move everything to the God Box,” English said, encapsulating the problem.
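As a rough illustration of that edge-curation idea – summarise where you collect, forward only the essentials, and keep the raw detail locally for forensic drill-down – the Python sketch below shows one way the pattern could work. The probe class, field names and buffer size are illustrative assumptions of ours, not NetScout’s implementation.

```python
# Minimal sketch of "curate at the edge, drill down on demand".
# Raw session records stay in a local buffer; only aggregated KPIs
# leave the probe, and full sessions are served only on a forensic query.
from collections import deque
from statistics import mean

class VirtualProbe:
    def __init__(self, buffer_size=10_000):
        # raw session records stay local; only summaries are forwarded
        self.raw_sessions = deque(maxlen=buffer_size)

    def ingest(self, session):
        """Store a raw record locally; session is a dict of measurements."""
        self.raw_sessions.append(session)

    def summary(self):
        """Aggregate KPIs to send upstream instead of the raw data."""
        if not self.raw_sessions:
            return {}
        return {
            "sessions": len(self.raw_sessions),
            "avg_latency_ms": mean(s["latency_ms"] for s in self.raw_sessions),
            "avg_throughput_kbps": mean(s["throughput_kbps"] for s in self.raw_sessions),
        }

    def drill_down(self, subscriber_id):
        """Forensic query: return full session records for one subscriber."""
        return [s for s in self.raw_sessions if s["subscriber"] == subscriber_id]

probe = VirtualProbe()
probe.ingest({"subscriber": "A", "latency_ms": 42, "throughput_kbps": 1800})
probe.ingest({"subscriber": "B", "latency_ms": 95, "throughput_kbps": 600})
print(probe.summary())        # only this leaves the probe routinely
print(probe.drill_down("B"))  # full detail fetched only when needed
```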
Also addressing the issue is Spirent, another company that is assembling its assets in a bid to provide a living, breathing view of service and network performance.
Spirent’s Ross Cassan said that customers are complaining of the number of probes they are supporting in the network, and of data overload – something that will only get worse as IoT connectivity takes off and video volumes increase.
Spirent’s approach takes automated “active testing” from the lab to the network, an approach Cassan described as a “better methodology” that can create a real-time picture for business units, if necessary feeding into a network’s orchestration platform. There is still a requirement for probing capacity in the network, but Spirent thinks there is scope to automate test methodologies such as walk/drive testing, deploying virtual tests as VNFs that can instrument the network at any interface.
“It’s about going up the stack from L2-4, testing through to the real service,” he added. “You can do what was only available in the lab in the network.”
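To make the active-testing idea concrete, here is a minimal, hypothetical sketch of the kind of agent that could run as a small VNF: rather than passively tapping traffic, it generates its own request against a target service, measures what a user would see, and hands the result to whatever reporting or orchestration hook consumes it. The URL, reporting callback and cadence are illustrative assumptions, not Spirent’s product.

```python
# Sketch of an "active test" agent: generate probe traffic, measure the
# response as a user would experience it, and report the result.
import time
import urllib.request

def active_http_test(url, timeout=5.0):
    """Fetch the URL and measure response time and size."""
    start = time.monotonic()
    with urllib.request.urlopen(url, timeout=timeout) as resp:
        status = resp.status
        body = resp.read()
    elapsed_ms = (time.monotonic() - start) * 1000
    return {"url": url, "status": status,
            "latency_ms": round(elapsed_ms, 1), "bytes": len(body)}

def report(result, sink=print):
    """Hand the measurement to whatever consumes it - here just print()."""
    sink(result)

if __name__ == "__main__":
    # A periodic loop like this could run as a small VNF at any interface.
    for _ in range(3):
        report(active_http_test("https://example.com/"))
        time.sleep(10)
```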
Procera’s Cam Cullen said that the advantage of virtual probes is that they can be placed into the network where needed, then moved or scaled up and down as required.
Cullen said that Procera has worked hard to provide feature parity between its hardware and software probes, with no performance impact. That is not the case for other companies, he implied, which still have hardware dependencies and a need for acceleration.
As for what to do with the data you capture, the company was showing a GUI that allowed an operator to view a network in terms of its fitness for purpose for a given application. By using its DPI capability to identify application flows, it can build up a picture of whether latencies might be affecting a certain gaming app, or throughput is impacting the video experience. It gives operators a view of the likely customer experience per app, rather than just a red/green light on a network KPI.
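That per-app “fitness for purpose” view can be pictured as a simple mapping from DPI-classified flow measurements to per-application verdicts. The sketch below is purely illustrative – the application classes and thresholds are invented, not Procera’s – but it shows the difference between scoring each app against its own requirements and reporting a single network KPI.

```python
# Illustrative per-application "fitness for purpose" scoring: each app
# class is judged against its own latency/throughput needs.
APP_REQUIREMENTS = {
    # app class: (max acceptable latency in ms, min acceptable throughput in kbps)
    "gaming":   (60,   500),
    "video":    (200, 3000),
    "browsing": (400,  500),
}

def fitness(app, latency_ms, throughput_kbps):
    """Return a red/amber/green style verdict for one application class."""
    max_latency, min_throughput = APP_REQUIREMENTS[app]
    latency_ok = latency_ms <= max_latency
    throughput_ok = throughput_kbps >= min_throughput
    if latency_ok and throughput_ok:
        return "good"
    if latency_ok or throughput_ok:
        return "degraded"
    return "poor"

# Flows as a DPI engine might classify and measure them (invented values).
flows = [
    ("gaming", 85, 900),     # latency too high for gaming, throughput fine
    ("video", 120, 2200),    # latency fine, throughput short of the video target
    ("browsing", 150, 1200), # comfortably within both thresholds
]

for app, latency, throughput in flows:
    print(f"{app}: {fitness(app, latency, throughput)}")
```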
Also thinking hard about how to capture and deliver information across the network for performance management and assurance is Accedian, which was introducing its new FlowBROKER product. At its core is a separation of the analysis from the data tap, so that a more distributed means of accessing network data can be deployed while the data itself still flows through the network. For much more on this, see Accedian’s Mobile World Congress Resource Hub.
Like This? Register here to receive future newsletters.