Vultr feeds on telco AI Factory ambitions

Does the telco industry need a lower cost GPU platform provider with a focus on AI-native operations? Yes, says hyperscaler competitor Vultr.

Cloud and AI compute company Vultr has launched a platform that it says can help telcos operate their own AI factories, cloud and AI-native applications.

We recently reported Vodafone CTIO Scott Petty saying that the operator has chosen not to invest directly in GPUs, reasoning that it would be left behind by newer generations of the technology by the time it was ready to run its own use cases and AI models. Petty said this meant the operator’s strategy was to work with a variety of cloud and model partners, giving it flexibility and portability.

That’s music to the ears of Kevin Cochrane, CMO of Vultr, which operates a GPU provisioning solution and a composable cloud platform that Cochrane says has a “dramatically lower cost profile” than the major competitors – 50-90% cheaper than the hyperscalers on a core compute basis.

Vultr already works with Airtel in India and Singapore’s Singtel. With Airtel, Vultr is essentially offering a “turnkey” white label service. Singtel, meanwhile, is integrating its Paragon platform with Vultr’s cloud GPU platform, meaning that Vultr users will be able to deploy large-scale clusters of Nvidia H100 GPUs in Vultr’s Singapore region, hosted in Singtel data centres, from Q3 2024.

The cost inflation amongst the hyperscalers for core compute, storage and networking is outrageous

So who or what is Vultr?

The company was founded in 2014 by cloud network engineers and, according to Cochrane, has scaled to over $100 million in revenue without taking any outside investment, funding “an entire multi-billion dollar capex investment from free cashflow”.

Currently the company has 32 data centres around the world, reaching “90% of the world’s population within 2-40 milliseconds.”

That 50-90% cost differential comes from the high level of automation its engineers have achieved, Cochrane says. It is also helped by Vultr’s model, which sees it focus solely on provisioning and automating its cloud platform.

“The other very little secret, which is not such a very little secret in the industry, is that the cost inflation amongst the hyperscalers for core compute, storage and networking is outrageous. We can literally undercut them 50-90% and still be wildly profitable,” Cochrane says.

“And the reason for that is very simple: they [hyperscalers] don’t just do compute, network and storage, right? They do 200 other ancillary services.”

By contrast, as Cochrane tells it, Vultr focuses just on core compute, with a marketplace of pre-built images and containers from third parties creating a composable architecture.

“The hyperscalers, however, they build their own vector database, they build their own search, they build all of their other services, and each of those services, product teams, infrastructure, costs money. So what covers the cost of those 200 other services? It’s the price of compute, network and storage.”
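To make that contrast concrete, here is a minimal sketch (our illustration, not Vultr code) of the composable model Cochrane describes: rather than relying on a hyperscaler’s home-grown services, you pull a pre-built third-party container – in this case Qdrant’s publicly available vector database image, one of the ecosystem partners named later in this piece – onto a plain compute instance using the Docker SDK for Python. The image, port and naming choices are just examples.

```python
# Illustrative sketch only: composing part of an AI stack from a pre-built,
# third-party container image on a generic compute instance.
import docker

client = docker.from_env()  # talks to the local Docker daemon

# Pull and run a pre-built vector database image (Qdrant publishes this on Docker Hub)
vector_db = client.containers.run(
    "qdrant/qdrant",           # third-party image from a public registry
    detach=True,
    ports={"6333/tcp": 6333},  # expose Qdrant's default HTTP port
    name="vector-db",          # example container name
)

print(f"started {vector_db.name} ({vector_db.short_id})")
```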

Simply put, we want to be one more vendor in the mix

Scaling up

The company is currently looking to boost its sales and marketing capability, says Cochrane, who himself joined as CMO in 2022.

“Nowadays, enterprises are truly looking to have a multi-cloud strategy and have portability of their workloads across clouds, so they can get the best cost-performance advantage across all of their global operating regions.

“Simply put, we want to be one more vendor in the mix – so that where a workload can be more cost efficiently delivered with better performance on our platform, we want to give that advantage to the enterprises.”

GPUs for the market

As a GPU specialist, Vultr is a partner of Nvidia and claims to be “the only cloud compute company that’s provisioning top-of-the-line Nvidia GPUs at any scale in any of our 32 locations.”

That’s in contrast to today’s market, which “heavily concentrates” GPUs amongst a few key players in North America, Cochrane says.

It is this element that Cochrane thinks can be especially attractive to telcos.

“Telcos, given the fact that they have the data centres, they have the power, they’re looking to identify how can they be the AI factory of the future. What is the next set of services that they can actually deliver for the purposes of AI?”

Vultr enables that AI factory concept by providing a provisioning and automation layer for GPUs.

“We take it one step further here at Vultr, because we do more than just provision GPUs. We call it AI native engineering. It’s the combination of CPU and GPU resources resident in telco facilities that need to be operated and orchestrated together.”

“Vultr is unique in this. You’ll have a lot of hot, high-profile GPU-as-a-service vendors like CoreWeave, which is essentially just an operating arm for Microsoft; they do the infrastructure for OpenAI, but not much else.”

“So we partner with telcos to help build and manage infrastructure enabling them to build, deploy and host AI-enabled applications.”
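For a rough sense of what that provisioning and automation layer looks like from the outside, the sketch below requests a single GPU instance through Vultr’s public v2 REST API. The endpoint and authorisation header follow Vultr’s published API as we understand it, but the region, plan and OS identifiers are placeholders rather than real catalogue entries, so treat this as illustrative only.

```python
# Minimal sketch of programmatic GPU provisioning against Vultr's v2 REST API.
# The region/plan/os_id values below are placeholders, not real catalogue entries.
import os
import requests

API_URL = "https://api.vultr.com/v2/instances"
API_KEY = os.environ["VULTR_API_KEY"]  # personal API token

payload = {
    "region": "sgp",             # placeholder, e.g. the Singapore region mentioned above
    "plan": "example-gpu-plan",  # placeholder for a cloud GPU plan ID
    "os_id": 123,                # placeholder OS image ID
    "label": "ai-factory-node-01",
}

resp = requests.post(
    API_URL,
    json=payload,
    headers={"Authorization": f"Bearer {API_KEY}"},
    timeout=30,
)
resp.raise_for_status()
print(resp.json())  # inspect the created instance details
```

In practice, an automation layer wraps calls like this in orchestration logic – capacity tracking, storage and network attachment, cluster teardown – which is the part Cochrane argues is “inordinately complex” for a telco to build itself.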

The AI conversation

For Cochrane, there are three big conversations emerging around AI and gaining access to GPU resources. The first is that enterprises’ growing awareness that they need an independent cloud provider, one able to offer the set of services required to deliver AI-native applications, has opened up a C-suite conversation about who is best placed to provision GPUs at scale.

The second is how vendors can help companies get value out of their AI investments by helping them deliver against their targeted use cases.

“This is where you get to industry [verticals],” Cochrane says, “because AI and the deployment of AI is very use case and industry specific.”

“What we do with our partners is identify all of the services that need to be rapidly composed to deliver specific use cases. We give you the recipe and say here’s everything you need to know, here’s all of the collected services that you’re going to want to marshal from our marketplace, to deploy on our infrastructure to be successful.”

Vultr is not doing this alone.

“We’re doing this in concert with Deloitte, with Nvidia, with ecosystem providers like Qdrant from a vector database perspective, Console Connect from a connectivity perspective and more.”

Vultr is also tapping into open source innovation – its way of competing with the R&D budgets and teams of the major hyperscalers, with their packages of LLMs, tools and dedicated silicon development.

“The true innovation that’s happening in the industry is in the open source world – any open source innovation is immediately unlocked and available to operate in the Vultr platform. At the end of the day, open source will win. It will lead to more models that are application-specific, that are smaller, lighter, better performant and better governed. A lot of the big models, like the Anthropics of this world, they got a lot of early money and burned a lot of capital training lots of models. But what’s the long-term viability of some of these smaller AI companies that generate a few tens of millions of dollars in revenue at the cost of spending billions of dollars on cloud infrastructure? It’s not really clear what that economic model is.

“But if you look at the open source ecosystem, the proliferation of smaller and smaller models that are better and better performing – it’s amazing, because every week there’s something new that we get to test and deploy in our infrastructure.”
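For a flavour of how lightweight that test-and-deploy loop can be with small open-weights models, the snippet below (our example, not Vultr tooling) loads a compact open-source chat model from Hugging Face and runs a single prompt on whatever GPU is available; the specific model name is just one instance of the kind of small model Cochrane is pointing to.

```python
# Minimal sketch: pulling and running a small open-source model for evaluation.
# The model name is an example of a compact open-weights model, not a Vultr recommendation.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="TinyLlama/TinyLlama-1.1B-Chat-v1.0",  # small open-weights chat model
    device_map="auto",                           # use a GPU if one is available
)

out = generator(
    "Summarise why smaller, domain-specific models can be cheaper to operate:",
    max_new_tokens=80,
    do_sample=False,
)
print(out[0]["generated_text"])
```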

Enter the telco

The third conversation is around compliance with in-country and regional rules on data privacy and cloud sovereignty.

“Now this is also where connectivity and telcos come in, because from the telco provider perspective, telcos have a ton of infrastructure. They have data centre space and power. We partner with telcos to help them think through their strategy for building and deploying their own bespoke AI factories – so they can provide sovereign cloud and AI services, whether you’re in Germany, Switzerland or France.”

That know-how also helps telcos provision and operate those AI services.

“We provide the software automation layer to help them provision cloud services to their customers; because what a telco doesn’t know how to do is the provisioning of AI cloud services, which are inordinately complex and very unlike traditional cloud computing.”

Cochrane says Vultr is working with systems integrators (SIs) such as Deloitte to gain vertical market expertise, leaving it free to be “experts in automating cloud infrastructure.”

“So we will work with people that will directly partner with the telcos to help them stand up, provision and monetise, but we don’t do the services. We’re not going to hire and build a big telecoms division. We’re just simply the core infrastructure provider with a white label GPU offering, and we’ll look for others to help us take that to market.”