Alcatel-Lucent’s Bell Labs is to open an “antenna” site in Cambridge, UK, to research technology that will enable the next generation of video-centric networks.
The site, which will have about ten permanent employees, will feed into and interact with Bell Labs’ core sites. Alcatel-Lucent CTO Marcus Weldon said that, if successful, it could transition into a full Bell Labs site.
It has been located in Cambridge because Alcatel-Lucent already has an asset based there (Velocix, a content distribution technology company it bought in 2009) and because it hopes to tap into the city’s broader strength in video technology innovation.
Weldon said, “This is the first time Bell Labs has expanded its innovation footprint in the UK, and there are very few expansions of Bell Labs historically.” The company will open one other antenna site this year – in Tel Aviv.
The company has recruited Bo Olofsson, who most recently headed the product research group at British Sky Broadcasting, as Head of Video Research.
So what will the new unit be researching and developing?
Olofsson said he expected initial research to focus on four areas. The first is advanced analytical capabilities that can provide scene-level metadata on all sorts of video. He also expects work on the “next step of optimisation of delivery of video over mobile networks.”
Then there will be work on how applications and the network talk to each other, so that each is aware of the other’s conditions and context. An example of this might be using predictive analytics to know when a user is about to enter an area of poor capacity or coverage (eg a train tunnel or a congested site), and buffering more of a video in the app so that it keeps playing until the user is back in better conditions.
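Neither Alcatel-Lucent nor Bell Labs has published code for this; the short Python sketch below is purely a hypothetical illustration of the buffering logic described above, with invented names (CoverageForecast, target_buffer_seconds) and arbitrary default values.

```python
# Hypothetical sketch of network-aware pre-buffering: if predictive analytics
# expects a coverage gap ahead, the video player raises its buffer target so
# playback can continue through the outage.
from dataclasses import dataclass
from typing import Optional

@dataclass
class CoverageForecast:
    seconds_until_gap: float   # predicted time until the user hits the tunnel/congested cell
    gap_duration: float        # predicted length of the outage, in seconds

def target_buffer_seconds(forecast: Optional[CoverageForecast],
                          normal_buffer: float = 10.0,
                          safety_margin: float = 5.0,
                          lookahead: float = 120.0) -> float:
    """Return how many seconds of video the player should keep buffered."""
    if forecast is None or forecast.seconds_until_gap > lookahead:
        # No outage predicted soon: keep the steady-state buffer target.
        return normal_buffer
    # An outage is coming: buffer enough to play through it, plus a margin.
    return max(normal_buffer, forecast.gap_duration + safety_margin)

# Example: the network predicts a 40-second tunnel 30 seconds ahead,
# so the player raises its buffer target from 10 s to 45 s.
print(target_buffer_seconds(CoverageForecast(seconds_until_gap=30, gap_duration=40)))
```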
Finally, Olofsson said Bell Labs is looking at new ways of encoding video, and described development of a concept called compressive sensing, where a signal or object – in this case a video – is captured through far fewer measurements than conventional sampling would require, using a mathematical representation that allows it to be reconstructed.
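As a rough illustration of the compressive-sensing idea (not Bell Labs’ own method), the sketch below measures a sparse signal with a random matrix and reconstructs it using orthogonal matching pursuit, a standard recovery algorithm; the signal length, measurement count and sparsity level are arbitrary.

```python
# Minimal compressive-sensing demo: a sparse signal x is observed through far
# fewer random measurements y = A @ x than its length, then reconstructed by
# exploiting its sparsity.
import numpy as np

rng = np.random.default_rng(0)

n, m, k = 256, 80, 5                       # signal length, measurements, sparsity
x = np.zeros(n)
x[rng.choice(n, k, replace=False)] = rng.standard_normal(k)   # k-sparse signal

A = rng.standard_normal((m, n)) / np.sqrt(m)   # random measurement matrix
y = A @ x                                      # compressed measurements (m << n)

# Orthogonal matching pursuit: greedily pick the column of A that best explains
# the residual, then least-squares fit on the selected support.
support, residual = [], y.copy()
for _ in range(k):
    support.append(int(np.argmax(np.abs(A.T @ residual))))
    coeffs, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
    residual = y - A[:, support] @ coeffs

x_hat = np.zeros(n)
x_hat[support] = coeffs
print("reconstruction error:", np.linalg.norm(x - x_hat))
```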
Weldon said the overall driver of Bell Labs’ work in video technology will be to enable a move to massive cloud-based analysis and re-rendering of video, delivered over a variety of network interfaces to different devices.
Devices may have just a decoder with an IP interface to the cloud, where platforms access a wide range of user-generated, sensor-generated and IoT [eg CCTV] content. This could be analysed in real time, with object recognition built in, to provide a video picture of the world around us, he said. (“Show me where are my keys?” Weldon postulated, “Who is where?”)
The vision is about building platforms and networks for the delivery of real-time content, rather than just video per se.
Weldon said he wanted Bell Labs to benefit from a more entrepreneurial culture, with researchers acting on their own initiative to propose research directions, but he said the overall challenge put to the researchers would be: “Solve the problem of being able to source any content from any device, then being able to render and deliver it.”
See also: Alcatel-Lucent Bell Labs Press Release.