Vibe check: are we actually doing this?

Telcos are re-focussing on their core business, but this time as software engineering businesses, partnering where they need to. They're using open architectures to benefit from Composable IT, so that they can operate as platform businesses. Yes, we've heard it before, but at TM Forum's DTW-Ignite, TMN asked: are we actually doing this?

Can they do it? 

Tuesday am: Vibe. Here we all are then. 

The TM Forum/ DTW-Ignite crowd – to this observer’s eye a bit more diverse and also on average younger than their counterparts at previous events – settles into its seats in the L-shaped  “Studio Theatre”. A series of slogans scroll across the big screen. AI native. Autonomous networks. Data insights. Strategic collaboration is key. 

And then in the space in front of the screen is TM Forum CEO Nik Willetts, fronting up once more to explain that telcos need to change and change now. Lack of return on investment is crippling free cash flow. The ingredients are right for us to be crunched by the Innovator's Dilemma and right now telcos are being left behind. AI only accelerates that change. Customers see us as just a carrier.

BUT – a carrier with remarkable assets. We can do it. Be courageous. Reimagine the role of the telco in the new AI era. Find your purpose. Revolutionary mindset. Be humble. It will take more than exposing APIs. Free people from legacy and bureaucracy. 

Here are three things you can do. Composable IT and ecosystems: the Forum's Open Digital Architecture (ODA) is now established as the industry plug-and-play reference architecture for software. But we need to go further, to standardised, re-usable building blocks enabled by AI. We're launching component and Canvas conformance for ODA, and evolving ODA to be an intent-driven architecture.

Second, we need autonomous network operations. 30% of opex is tied to networks but autonomous networks have to go beyond cost reduction. We must break down network silos to deliver on AI-based automation of workloads. 

Finally, we need AI and data innovation. Returns on Gen AI are disappointing so far. The data we train on dictates how good the AI is. We need to surface the right data and apply the right governance models. The emergence of AI-native businesses is a profound shift. We've got a moonshot challenge to our Catalysts to drive 30% improvement in EBITDA just by using Gen AI. We're taking our own medicine with AIVA, an AI assistant for our own knowledge base and documentation.

Following Willetts is a double act between Harmeen Mehta – CDIO (Digital and Innovation) at BT Group and also vice chair of TM Forum – and Marc Allera, CEO of the EE and Consumer Division, BT Group.

Vibe: Turn-around school co-heads present to prospective parents. 

Allera: We've built a new brand flagship that meant moving house from old platforms. Built a place where anybody can buy anything from us and they don't need a subscription. We've got 10 million sign-ups, out of a body of 25 million subscribers. (No, we're not saying how many EE ID holders are not subscribers.) This is an opportunity all telcos have – to play a much more meaningful role for our customers than we do today. We've become one of the biggest gaming retailers in the UK.

Mehta: It's platform thinking come to life. [A confusing analogy to Marriott and Airbnb that should probably have been cut.] We've achieved a huge consolidation of legacy platforms and tools. Shut down 90% of legacy. From 3% on cloud to 80% cloud. We offer only 100 customer plans (from hundreds) but can also hyper-personalise. We're like an ecommerce platform that also sells telco, rather than just a telco platform. That's why we're getting great partnerships from companies.

Then it's a panel of operator tech leaders: Jeremy Legg from AT&T, Scott Petty from Vodafone, Kim Krogh Andersen from Telstra and Abdurazak Mudesir from DT.

Vibe check. Positive with a full task list. Breakfast (fruit, yoghurt) on arrival, but never time for lunch. 

For us as technologists it’s a really tough time.

AT&T: the goal is opening up networks to developer communities across the world, marketplaces that can tap into the customisable networks we offer. 

Deutsche Telekom: Optimistic for three reasons. Our network platform scales and we’re the only operator on both sides of the Atlantic (if you don’t count Telefonica – ed). Second is autonomous networks – we’re killing as many legacy processes as possible. We’re creating business, making the cake bigger for ourselves and the ecosystem. Converged networks, security, network APIs, slicing.

Telstra: Optimistic that we're at an inflection point. Connectivity has never been more important and with AI that will only grow. Why is all the value in the compute layer? You cannot convince me that it's more foundational than connectivity. Our job is to make that a great business that is treated with the respect it deserves.

Vodafone: For us as technologists it's a really tough time. Everyone is talking about the power of AI and our peers are impatient. Accelerate our ability to deliver value to organisations. All about velocity. ODA critical. Need to get from architecture into production, get Canvases to scale and move much faster. Gen AI is the opportunity to be a true platform company, but we need to get platforms and data architectures right. Also remember we have to modernise: huge legacy estates, and business peers do not want five-year programmes to deliver value.

AT&T: Threat intelligence and security is where we are unique and can scale.

Telstra: We must understand our core business. Expose our core – why not use telemetry to inform customer experience?

DT: If you create this [AI and data infrastructure] in a way that it can be consumed by your own network functions, then you start to define whether it brings business value for others to consume. But it starts from your own design of a network that's much simpler, much easier to consume and to manage – and then it's a business-driven decision which APIs you expose to which customer.

Vodafone: The platform is important. We’re great technologists, not good at commercial constructs. 

AT&T: We're individually not big enough to drive the ecosystem. We need common standards and development kits, and then we can be global, not regional. Folks that are not necessarily our friends leverage the fact that we are regional providers.

Telstra: I think telco will look different in 2030. We have protected ourselves by wanting 1-2-1 relationships with customers. But I believe in a layered model. We will be different kinds of leaders in each layer.

The hyperscaler that is good at embedding AI into the cloud is growing the most. There's a lesson there for doing the same in our own core. That's our challenge.

Vodafone: Thinking in horizontal layers. We partner at each layer, make it composable. And every vendor in this world has to adjust their model.

Telstra: Partners need to move away from seeing us as being professional buyers.

AT&T: All companies have to mature to become professional software development shops. We have to take ownership of our own product roadmap to control our destiny, but knowing we have to do so in partnership.

I haven’t met an autonomous network yet. Evolving towards it. So many changes happening. 

DT: Autonomous Networks is a north star. Nothing you can jump on today but there are steps we are taking. We don’t have to be moving at the same speed. In some areas we are behind. 

Immediately after, a conversation with George Glass, TM Forum CTO.

Vibe: controlled, confident. No need for coffee, thanks. 

Key question. What is the TM Forum trying to achieve?

GLASS: 

Composable IT is what you need to build a modern telco. If you’ve got the composable IT plus connectivity and the ability to partner then you can start to build solutions there. We’re already seeing our members do something like build a charging and rating capability for telco and then go into renewable energy and, using exactly the same components, build a charging and rating capability to give customer credits for the generation of renewable energy.
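
For the technically minded, here is a rough sketch of the kind of reuse Glass is describing: one generic, catalogue-driven rating component deployed in two very different contexts. The class, field names and prices are invented for illustration; they are not TM Forum-defined interfaces.

```python
# Illustrative sketch only: a generic, catalogue-driven rating component
# reused across two domains. Names and interfaces are hypothetical.
from dataclasses import dataclass

@dataclass
class UsageRecord:
    subject: str      # MSISDN for telco, meter ID for energy
    quantity: float   # MB of data, or kWh generated
    unit: str

class RatingComponent:
    """Rates usage against a pricing catalogue; domain-agnostic."""
    def __init__(self, catalogue: dict[str, float]):
        self.catalogue = catalogue  # price (or credit) per unit

    def rate(self, record: UsageRecord) -> float:
        return record.quantity * self.catalogue[record.unit]

# Same component, two deployments:
telco_rating = RatingComponent({"MB": 0.002})      # charge for data usage
energy_credits = RatingComponent({"kWh": -0.15})   # credit for renewable generation

print(telco_rating.rate(UsageRecord("447700900000", 512, "MB")))
print(energy_credits.rate(UsageRecord("meter-42", 38.5, "kWh")))
```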

Conformance: The whole idea is that as you build a solution you want to test that you've built it correctly. If I build a reusable component, I want to make sure that it conforms to the industry standard. So we provide, through our APIs, that verification that you've taken the design patterns and followed them correctly. So as they build a component, the developers test it. And then when they're happy that it's ready to deploy, they get certification from us.
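
TM Forum publishes Open API specifications and conformance tooling for exactly this; the snippet below is only a schematic of the kind of check such a suite might run, using the generic jsonschema library against a locally stored schema. The schema path, endpoint and payload are assumptions for illustration, not the Forum's actual test kit.

```python
# Schematic of a conformance-style check: validate a component's API
# response against a stored Open API JSON schema. The schema path and
# endpoint are illustrative; real certification uses TM Forum tooling.
import json
import requests
from jsonschema import validate, ValidationError

with open("schemas/productCatalog.schema.json") as f:   # hypothetical local schema
    schema = json.load(f)

resp = requests.get("http://localhost:8080/productCatalogManagement/v4/catalog")
try:
    validate(instance=resp.json(), schema=schema)
    print("Response conforms to the published data model")
except ValidationError as err:
    print(f"Conformance failure: {err.message}")
```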

Autonomous networks: ODA for the network

What we want to do is standardise the management and operations of those network resources so that we can build and develop the level of automation that’s required for autonomous networks. 

We depict the ODA architecture as layers. The front end is what we call party management – managing your customer interactions. Then the central core layer is commerce management, which manages your product catalogue, product ordering and billing. 

And then the lower layer of the architecture is what we call the production layer and it’s the service and resource manager. And those services can be either IT services or network services, or network resources.
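
Summarised roughly in code form, the layering Glass describes looks like this; the labels are a paraphrase of the interview, not the formal ODA specification.

```python
# Rough summary of the ODA layering as described above; labels paraphrase
# the interview rather than quoting the formal specification.
ODA_LAYERS = {
    "party management":    ["customer interactions"],
    "commerce management": ["product catalogue", "product ordering", "billing"],
    "production":          ["service management", "resource management",
                            "IT services", "network services", "network resources"],
}
```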

But the interesting thing is the network is becoming software, becoming programmable. What we want to do is standardise the management and operations of those network resources so that we can build and develop the level of automation that’s required for autonomous networks. 

The concept of an autonomous network is a self-healing, self-configuring network. But if you've got bespoke interfaces into every resource in the network, you'll never achieve that level of automation. So you need standardisation before you can drive the level of automation that's required. And then you need to embed AI into the architecture to actually put intelligence into the orchestration, management and operations of the network resources on the network domains. And that's where expanding ODA with our service orchestration and then putting AI into it transforms the whole nature of the interactions between IT and network. What we're trying to do is unlock network capabilities more rapidly and give us access to the programmable aspects of the network, which were not available to the IT space two to three years ago.
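
TM Forum does publish an Intent Management API (TMF921) for this intent-driven direction. Purely as an outline, submitting an intent to an ODA-style orchestration endpoint might look something like the sketch below; the URL and JSON fields are assumptions for illustration, not the normative schema.

```python
# Outline only: submitting an intent to an intent-management endpoint.
# URL and payload fields are illustrative assumptions, not the
# normative TMF921 schema.
import requests

intent = {
    "name": "latency-guarantee-enterprise-slice",
    "description": "Keep round-trip latency under 20 ms for slice ENT-7",
    "expression": {
        "target": "networkSlice/ENT-7",
        "objective": {"metric": "rtt", "operator": "lessThan", "value": 20, "unit": "ms"},
    },
}

resp = requests.post(
    "https://ops.example.com/tmf-api/intentManagement/v4/intent",
    json=intent,
    timeout=10,
)
resp.raise_for_status()
print("Intent accepted:", resp.json().get("id"))
```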

What we're looking for is openness from the network equipment manufacturers and from the operators and vendors that support the network ecosystem. Because we've got to standardise the management and operations. Monitoring and reporting needs to be standardised. The way you monitor a network today is to put a probe into it, which is a completely inefficient way to do it. Why does the network not just transmit a set of standard information to me that is harmonised across the entire industry and standardised for all network technologies?
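
Glass's point about probes amounts to asking the network to push a harmonised telemetry shape rather than having operators poll it. A toy illustration of such an event, with entirely hypothetical field names:

```python
# Toy illustration of harmonised, push-based telemetry: every resource
# emits the same event shape regardless of vendor or technology.
# Field names are hypothetical, not a published standard.
from dataclasses import dataclass, asdict
from datetime import datetime, timezone
import json

@dataclass
class TelemetryEvent:
    resource_id: str       # e.g. "ran/cell/12345" or "core/upf/eu-west-2a"
    technology: str        # "5G-NR", "PON", "IP-core", ...
    metric: str            # "prb_utilisation", "packet_loss", ...
    value: float
    unit: str
    timestamp: str

def emit(event: TelemetryEvent) -> None:
    # In practice this would publish to a message bus; here we just print.
    print(json.dumps(asdict(event)))

emit(TelemetryEvent(
    resource_id="ran/cell/12345",
    technology="5G-NR",
    metric="prb_utilisation",
    value=0.73,
    unit="ratio",
    timestamp=datetime.now(timezone.utc).isoformat(),
))
```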

Data standardisation

Data is heavy

It's so important to have the right data in the right format, with the right accessibility and the correct permissions to use the data. If you've just been building your architecture over many years using different vendors, moving from network technology to network technology, everything's structured differently. And therefore AI is constrained in its capabilities because you're having to do effectively a translation of the data to understand what it is before you can process it. What we're looking to do through the work we're doing is standardise the data, standardise the access to the data, or even get the device or whatever it is to actually expose the data automatically.

The other thing which is really important is that data is heavy. If I'm trying to do real-time AI and make decisions I don't want to have to move data around. I don't want to have to pull all of my data into a big data lake to then process it, because even moving the data will take too long. I'll have missed the opportunity. So if I can build AI models that can go out to federated data and make decisions across a set of data lakes or data repositories, and then give that information and feedback in real time, I can suddenly unlock an awful lot more power from my architecture.
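
A compressed sketch of that federated pattern: push the question out to each data store and move only small summaries back, rather than copying raw records into one lake. The store names and query interface are invented for illustration.

```python
# Compressed sketch of federation: evaluate the question where the data
# lives and move back only small aggregates, never the raw records.
# Store names and the query interface are invented for illustration.
from concurrent.futures import ThreadPoolExecutor

class DataStore:
    def __init__(self, name, records):
        self.name = name
        self.records = records  # stand-in for a local lake or warehouse

    def local_aggregate(self, metric):
        values = [r[metric] for r in self.records if metric in r]
        return {"store": self.name, "count": len(values), "sum": sum(values)}

stores = [
    DataStore("ran-telemetry", [{"latency_ms": 18}, {"latency_ms": 25}]),
    DataStore("core-telemetry", [{"latency_ms": 9}, {"latency_ms": 14}]),
]

# Fan the query out in parallel; only tiny summaries cross the network.
with ThreadPoolExecutor() as pool:
    partials = list(pool.map(lambda s: s.local_aggregate("latency_ms"), stores))

total = sum(p["sum"] for p in partials)
count = sum(p["count"] for p in partials)
print(f"Fleet-wide mean latency: {total / count:.1f} ms from {count} samples")
```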

That’s exactly what we did in the IT space. We actually have the data models – and fortunately many of our members have adopted that information model. If they’re using our APIs then they’re following the standard data model which means that the AI in that space is actually easier. I’m not going to say it’s easy. 

Wednesday afternoon (this isn’t chronological). Tech Mahindra. President of Communications, Abhishek Shankar.

Vibe: Corporate gilet. Safety. 

Focus on the core. Key question. How to partner.

I see this significant realisation that what you build is what your core is. If you have something which is a distinct adjacency to the network, that is where you have to partner

Examples of change. The American telcos have removed long-term, distant assets of media and cloud and are focussing on their core. And that's been a big piece of change. If you think about it, that focus to come back and drive your capex into the network – seven out of ten dollars – needs discipline, it needs focus, and the removal of distractions at the American operators is very palpable. All these things are really markers of this mindset of change.

Something which I have been communicating to all our clients is that telcos have to have a very clear build-buy-partner strategy. When I started our conversation, I talked about the fact that partnership was an afterthought. I believe now I see this significant realisation that what you build is what your core is. If you have something which is a distinct adjacency to the network, that is where you have to partner – whether it's on security, cloud, or providing IT services to SMB markets. Now in the partner realm, where we are helping telcos is that we help them focus on their core by simplifying the code. We help modernise the entire stacks which run this.

That's the conversation I'm seeing: telcos are focusing back on the core. Seven out of $10 will go into the core, another $2 will go into securing these networks, and then they partner with others to drive new revenue monetisation.

And those new revenue growth vectors could be sovereign cloud, it could be citizen chatbots. It could be language models as a service. These are the things which we are seeing successfully amongst our Asian partners. 

If you want to do network as a service, you just need to design the partnership well in advance. 

The way we see it is that anything which is capex intensive, the telco will be very good at building and rolling out. But anything which is low capex intensity – which means a higher amount of customisation – that's where you partner. That's something which we are seeing. The LLM example of Indonesia is just one example which is more public, but it's a common pattern.

Tuesday lunchtime. AWS: Chivas Nambiar, GM for the telecom industry.

Vibe: Courteous with a full schedule. Humble guest at telco event.

Key question. Can telcos do this platform thing?

I think what they’re looking for is tools and capabilities that help them move faster on this journey. And that’s really where we spend all of our time with them. We are trying to figure out how to help enable this journey.

There's a lot of rigour in how networks are managed – they're national critical infrastructure. So I would argue that it's not that telcos don't have an understanding of what's necessary. I think it's where do they find the time and the capabilities to go do it, and that's what we're partnering with them on.

Service co-creation is an interesting one. I think if you look at some of the conversations that happened at MWC, and since then, around network APIs, that is a big piece of where that cooperation is going to come in. There's a second level of cooperation: how do you expose that out to a large developer community and a marketplace of developer communities? Plus give those developers access not just to network APIs, but all of the compute and the storage and capabilities like Bedrock, so that they can build differentiated solutions on top of it and take advantage of the network APIs. That's a second kind of challenge that we see.
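
As a thumbnail of the developer experience Nambiar is describing – one application combining a network capability with hosted AI services – the sketch below uses placeholder endpoints. Neither URL is a real operator or AWS API; they only illustrate the shape of the combination.

```python
# Thumbnail of the combined developer experience: request a network
# capability, then hand the result to a hosted AI service. Both
# endpoints are placeholders, not real operator or AWS APIs.
import requests

def request_quality_boost(device_id: str, profile: str) -> str:
    """Ask a (hypothetical) operator network API for a QoS session."""
    resp = requests.post(
        "https://api.operator.example/qod/v1/sessions",
        json={"device": device_id, "qosProfile": profile, "duration": 3600},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()["sessionId"]

def summarise_session(session_id: str) -> str:
    """Hand session telemetry to a (hypothetical) hosted model endpoint."""
    resp = requests.post(
        "https://ai.cloud.example/models/summarise",
        json={"prompt": f"Summarise performance for session {session_id}"},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["text"]

session = request_quality_boost("device-001", "low-latency")
print(summarise_session(session))
```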

Wednesday afternoon. Campbell McClean, Chief Architect, BT

Vibe: Loving it. Belief. Wisdom worn lightly. No gilet. TM Forum’s first distinguished engineer.

Key question. What’s it really like meeting those “simplification numbers” and where does it get you?

The ballgame has changed. In BT I’m watching the balance shift.

It’s a rich and complex picture. We have very large volumes now sitting with ServiceNow, with Salesforce. In parallel we still have a very large lump of things on prem that we’re now starting to do other interesting things with. So we’re working with global hyperscalers to take about 85 of our future strategic entities that are currently on prem, run them through AI to move them to a two tier microservice construct, aligned against ODA, rebuild it, work out what percentage success we get and then use the human to compile the difference.

So we're going to start that journey of feeding large volumes of legacy code through LLMs to build microservices aligned to TM Forum ODA.
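
This is not BT's actual tooling, but a skeletal illustration of the loop McClean describes: feed legacy modules through an LLM, keep high-confidence candidate microservices, and route the rest to a human. The llm_refactor function here is a deliberate stand-in for whatever model and prompt pipeline is actually used.

```python
# Skeletal illustration of the loop described above, not BT's tooling:
# run legacy modules through an LLM, keep high-confidence output, and
# queue the rest for human rework. llm_refactor() is a stand-in for a
# real model call.
from pathlib import Path

def llm_refactor(legacy_source: str) -> tuple[str, float]:
    """Placeholder: return (candidate microservice code, confidence 0-1)."""
    raise NotImplementedError("wire up your model provider here")

def migrate(legacy_dir: str, out_dir: str, threshold: float = 0.8) -> None:
    out = Path(out_dir)
    out.mkdir(exist_ok=True)
    for module in Path(legacy_dir).glob("*.cbl"):        # e.g. COBOL sources
        candidate, confidence = llm_refactor(module.read_text())
        if confidence >= threshold:
            (out / f"{module.stem}_service.py").write_text(candidate)
        else:
            print(f"{module.name}: needs human review ({confidence:.0%})")
```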

It’s an educated, risk managed guess but I think I know where we’re going to land. And that will accelerate getting to the cloud. Getting to the cloud in itself is meaningless. Cloud is effectively real time availability of environments, dump code, stick it in production and do something. 

We only exploit hyperscaler cloud if we have a software construct that exploits, in depth, the non-functional capability of the hyperscaler. So patching, administration, security, scalability: you don't want the human doing any of that, and we have teams of people that do that patch stuff and de-bug all the time. You change the microservice construct and then AWS or Azure, or whoever your partner is, can do that – and you take the engineering skill and move it to delivery of features within the business. That's one of the next things we're going to do.

Go back ten years this [event] was a vendor advertising pitch and you came here to have coffee with your buddies

ODA is effectively a blueprint for constructing your platforms to deliver common capabilities to any channel, to any customer, to any customer segment. And we can map the business capabilities to the software, literally to the software by using ODA as the framework for engineering and architecture. 

ODA is a much more engineering-focused capability [than previous TMF frameworks]. And the bridging point was the Open API. That was what drove what we would call bounded context. So I think it's a really rich tapestry. I think the pivot that Nik Willetts and George Glass drove was extraordinary. This made the TM Forum relevant. Go back ten years, this [event] was a vendor advertising pitch and you came here to have coffee with your buddies. Now I come here and in three days my understanding is probably 10% better because I see everything in context. But ODA was the context around which everything started. Nik Willetts needs to build a gold statue of George Glass. I wouldn't quote that, because I already said that to somebody else. [Sorry Campbell, we quoted it. It's a nice thing to say.]

The ballgame has changed. When I went to Airtel I had to refactor everything I did, because we built our own stuff within an engineering culture. It started in India, but it has permeated everywhere. It's kind of an export of thinking. And it's people looking at it and thinking: they can do it, we can do it.

In BT I’m watching the balance shift. You cannot be an enterprise architect today without understanding how engineering happens at a detailed level. And you cannot be an engineer in isolation – you have to understand that what you build fits into the wider piece: engineering and architecture are becoming much, much closer.

I think the really interesting dimension of all of this is what AI is going to be in this space. What it's really good at is understanding huge volumes of data and providing context and allowing you to see the signal. We had Big Data at BT in 2010; we had no idea how to understand what it meant. We didn't have the machine that could distil it. Today, you throw an LLM at it and it starts pumping out really meaningful insight. Now how does that impact our understanding of data? How do we bring that real time to understand your experience in this second? I think those are fantastic questions. And I think it's really gonna make our business much more interesting. I love telco.

It’s going to bring the network really close to the customer. You think of the innovation, and co-innovation: we can bring that really close to the customer and that’s our asset. I think that’s fantastic. 

Also Wednesday. Hesham Fahmy. CIO. TELUS. 

Vibe: Tech. Builder. Velocity.

Key question: What is the partnership model and opportunity?

I think a lot of people are just putting a label on it, but not living it because the real thing is hard to do.

We already have a 10-year strategic partnership with Google. It's an $800 million partnership, so we are all in on this and we look at them as partners, not competitors in this space. And we're very open and we always say it's not just going to be Google; it's all hyperscalers we will look at. I think our approach will be no different than when we talk about OSS and BSS, right? There's always this fear of vendor lock-in. And what we said is: first, let's make sure that we're only building what is truly our competitive differentiator, or truly something that we cannot build at the scale of someone else. So if it truly is something that is our secret sauce, that's the part we should build.

And then when we're buying, we do worry a lot about whether, if we go too far into a certain vendor, it becomes vendor lock-in. At some point we may regret that decision – what do you do then? And so we've looked across the board and said: where possible, do things on open standards, so you have the optionality.

Most of what hyperscalers are giving you is storage and compute. Anything we use, we'll always use based on open standards – open CNCF standards, say. And then when you go to the stuff that's very bespoke to them – so, for example, Google are our analytics platform because they're very strong in Vertex AI and BigQuery – again, that becomes this thing where there's no way that we can compete and innovate at the same level they're doing it, and maybe that's okay for that piece to be locked in with them.

Because that really isn't the secret sauce. The secret sauce is the actual analytics that you run and the insights you get off it – and so it's better to have our data scientists building the churn models, building the Gen AI and those solutions, rather than trying to build the LLM themselves.

But you do this by design, not so it happens organically. We are being very surgical. Even when you talk about the AI piece, it's running in a combination of Azure and Vertex AI. We've seen certain workloads that are fit for purpose with the different hyperscalers or those different LLMs – whether it's GPT-4 or Gemini Pro.
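
A toy version of that "surgical" routing: the model or cloud is picked per workload class, by design. The routing table and the dispatch logic below are invented for illustration; they are not TELUS's implementation.

```python
# Toy version of workload-to-model routing: the choice is made by
# design, per workload class, not organically. The table and dispatch
# logic are invented for illustration.
ROUTING = {
    "customer-churn-analytics": "vertex-gemini-pro",  # analytics stays with the analytics platform
    "agent-assist-chat":        "azure-gpt-4",
    "network-log-summaries":    "azure-gpt-4",
}

def run_workload(workload: str, payload: dict) -> dict:
    target = ROUTING.get(workload)
    if target is None:
        raise ValueError(f"No routing decision recorded for {workload!r}")
    # Dispatch to whichever (hypothetical) client wraps that provider.
    print(f"Sending {workload} to {target}")
    return {"target": target, "payload": payload}

run_workload("agent-assist-chat", {"utterance": "Why is my bill higher this month?"})
```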

What does ODA give you?

We’ve been running on ODA, we’ve been certified for a year and a half now. We made that conscious decision that any inter-system communication is always going to be TMF based and running on ODA.
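
As a minimal example of what "inter-system communication is always TMF-based" can look like in practice, here is one internal system placing a product order against another's TMF622-style Product Ordering endpoint. The host and payload details are illustrative, not TELUS's systems or the full normative schema.

```python
# Minimal illustration of TMF-shaped inter-system calls: one internal
# system placing a product order against another's TMF622-style
# endpoint. Host and payload details are illustrative.
import requests

order = {
    "description": "Add streaming add-on for existing subscriber",
    "productOrderItem": [
        {"action": "add", "productOffering": {"id": "offer-stream-100"}}
    ],
    "relatedParty": [{"id": "cust-001122", "role": "customer"}],
}

resp = requests.post(
    "https://bss.internal.example/tmf-api/productOrderingManagement/v4/productOrder",
    json=order,
    timeout=15,
)
resp.raise_for_status()
print("Order id:", resp.json().get("id"))
```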

Now it becomes a hard thing of getting all our systems to be ODA compliant. And so, you know, Netcracker does run a lot of our BSS and we're moving our existing version of Netcracker onto the cloud version of it, which is fully TMF-compliant and ODA-compliant. But we're kind of just doing that across the board.

Composable IT? We've lived and breathed it and we've seen it pay off. Because when you start breaking things at the boundary, you start really having, again, that optionality of saying: okay, here's the platform, and here are the pieces that I can assemble. So as a business, here's the value proposition, but you're not stuck in these very tight, vertical stacks. You can build bits and pieces and put together a different value proposition. And we've seen that already in what we've gone to market with in the last 12 months, enabled by this ability to piece together different parts of our ecosystem.

Platforms give you leverage, right. So that’s the mandate I had here and that’s what we have been building.

I think a lot of people are just putting a label on it, but not living it because the real thing is hard to do. The reality is 99% of all telcos have legacy systems and they’re all best of suite.

They aren't broken out, right? You've got your whole suite. So you can say as much as you want that I'm going to be ODA compliant, but at the end of the day what's backing it is still a big monolith. I am sceptical when I talk to my peers: how many people have actually been on that journey? Because unless you're able to break up those highly vertically integrated stacks you're never really going to get the benefits of that Composable IT.

I'm not going to mislead you and say that means I just decommissioned everything. But what we've done is this: we know we have a very vertically integrated stack – but how do I start taking pieces out of it?