Executive Interview: ‘AI IaaS Will Be Hugely Significant for Businesses of All Sizes’

“Our ultimate goal is to create a service which will give even the smallest businesses access to the AI resources of a tech giant, across various fields such as finance, healthcare, manufacturing, and scientific research,” said Gcore CEO Andre Reitenbach.

According to CDN and cloud infrastructure provider Gcore, also known as G-Core Labs, edge computing is the future of cloud services delivery and consumption. HostingJournalist sat down with Gcore CEO Andre Reitenbach to talk about what it means for organizations and what benefits edge computing may bring.

“Edge infrastructure development has already become the industry standard for leading cloud providers. If 10 years ago everyone was talking about the cloud, now edge computing is in the spotlight. It’s an efficient and cost-effective way to speed up data processing and reduce the load on the network infrastructure. By placing server clusters as close as possible to the user, you reduce both latency and processing time. This is critical for industries such as GameDev, media and entertainment, including video streaming services, fintech, and e-commerce.”

“For all these businesses, it is important to give users an instant, synchronized experience. For example, ensuring that a viewer watching a video stream in another country sees the goal of their favorite team at the same moment as the audience in the stadium, or allowing Dota 2 gamers in the US to play seamlessly with people from Korea, the UAE or Germany.”

“There’s no doubt that edge computing is on its way to becoming the biggest trend in the near future, as the demand for high-speed delivery of heavy content will only grow over time. Among the main drivers now are the popularization of VR headsets and the 8K video format, and the development of cloud gaming and metaverses. It will also be needed for IoT devices, driverless cars and other technologies.”

“As a final note, edge infrastructure also addresses compliance with local legislation on personal data storage, which is highly important for cloud services. So, our goal now is to provide the best network coverage possible worldwide, with minimum delay and maximum reliability.”

Gcore, together with Graphcore, introduced an all-new global IPU-based cloud AI-Infrastructure-as-a-Service in June. What type of clients and use cases is it aimed at?

“Well, first of all, we would like to say that we are really proud that Gcore is the first European cloud service provider to partner with Graphcore to bring exciting innovations to a rapidly changing cloud market. Artificial Intelligence is evolving fast, and users are looking to trusted technology partners for powerful AI cloud services that are highly efficient, easily accessible, and flexible, to suit their changing needs.”

“Our ultimate goal is to create a service which will give even the smallest businesses access to the AI resources of a tech giant, across various fields such as finance, healthcare, manufacturing, and scientific research. Working with Graphcore, we built a brand-new solution which is based on the integration of Graphcore’s groundbreaking IPUs with our AI Cloud. This cloud AI-Infrastructure-as-a-Service will support every stage of a business’s AI adoption journey.”

“To build this service we used the Graphcore IPU, a completely new processor designed specifically for AI computation. It is twice as powerful as regular GPUs and purpose-built for machine learning workloads. By embedding it into our secure edge cloud infrastructure, we allow businesses to easily scale their AI integration while saving on set-up and computing costs. It means that the power and flexibility of the IPU is available to anyone looking to take their AI computing to the next level – whether that’s accelerating their current workloads or exploring the use of next-generation models that require specialist systems designed for artificial intelligence. We support every step of the AI lifecycle, from development to deployment, and facilitate work with machine learning and AI frameworks and tools including TensorFlow, Keras, PyTorch, Hugging Face, Paddle and ONNX.”
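For readers who want a sense of what targeting the IPU looks like in practice, the sketch below wraps a small PyTorch training step with Graphcore’s PopTorch library. The model, data and options are illustrative placeholders, and the exact SDK images and environment available in Gcore’s AI Cloud may differ.

```python
# Minimal sketch: wrapping a PyTorch model with PopTorch so it compiles and
# runs on Graphcore IPUs. Model, data and options are placeholders; the
# environment provided by a given AI cloud may differ.
import torch
import poptorch


class Classifier(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.net = torch.nn.Sequential(
            torch.nn.Linear(128, 64),
            torch.nn.ReLU(),
            torch.nn.Linear(64, 10),
        )
        self.loss_fn = torch.nn.CrossEntropyLoss()

    def forward(self, x, labels=None):
        logits = self.net(x)
        if labels is None:
            return logits
        # PopTorch expects the loss to be computed and returned inside forward().
        return logits, self.loss_fn(logits, labels)


model = Classifier()
opts = poptorch.Options()  # defaults target a single IPU
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-3)

# The wrapped model is compiled for and executed on the IPU.
training_model = poptorch.trainingModel(model, options=opts, optimizer=optimizer)

features = torch.randn(16, 128)
labels = torch.randint(0, 10, (16,))
logits, loss = training_model(features, labels)
print(float(loss))
```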

“We believe that this AI infrastructure-as-a-service will be hugely significant for businesses of all sizes. It has lots of integrated tools and resources to make building AI applications easier for developers and data scientists across finance, fintech, e-commerce and game development. It solves the problem of storing and preparing the data sets needed for AI and ML projects: with our AI infrastructure-as-a-service, teams can start preparing data correctly for machine learning right away. By storing data in cloud storage, they automatically have access to the data required for modelling.”
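As a hedged illustration of the point about data in cloud storage being immediately available for modelling, the snippet below pulls a data set from an S3-compatible object storage bucket before training. The endpoint URL, bucket, key and credentials are placeholders, not actual Gcore values.

```python
# Illustrative sketch: loading a training data set from S3-compatible object
# storage so it is ready for ML preparation. Endpoint, bucket, key and
# credentials are placeholders, not real Gcore values.
import io

import boto3
import pandas as pd

s3 = boto3.client(
    "s3",
    endpoint_url="https://storage.example.com",  # placeholder endpoint
    aws_access_key_id="ACCESS_KEY",              # placeholder credential
    aws_secret_access_key="SECRET_KEY",          # placeholder credential
)

obj = s3.get_object(Bucket="training-data", Key="datasets/transactions.csv")
df = pd.read_csv(io.BytesIO(obj["Body"].read()))
print(df.shape)
```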

“Beyond tools and infrastructure to simply store data, AI infrastructure-as-a-service comes with tools built into the platform to help create and train models. The AI Cloud lets you easily create and run Jupyter notebooks within the cloud, so teams can build out AI solutions quickly – you can even automate the integration of ML code changes from multiple developers into a single project.”
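One way the kind of notebook automation described above can look in practice is headless execution of parameterized notebooks, for example with the open-source papermill tool; the notebook paths and parameters here are hypothetical.

```python
# Hypothetical example: executing a parameterized Jupyter notebook as part of
# an automated ML workflow. Notebook paths and parameters are placeholders.
import papermill as pm

pm.execute_notebook(
    "train_model.ipynb",           # source notebook (placeholder)
    "runs/train_model_out.ipynb",  # executed copy with cell outputs (placeholder)
    parameters={"learning_rate": 1e-3, "epochs": 5},
)
```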

“One advantage of this AI infrastructure-as-a-service that I would highlight is the easy access to Graphcore IPUs, which have proven to be far more powerful processors for AI/ML tasks. This means AI applications running on this hardware will perform tasks faster and at lower cost than on alternative architectures.”

“The launch of the AI cloud will not only prove significant for end users, but also for Luxembourg’s high-performance computing ambitions. Being home to this new platform helps cement Luxembourg’s position as a European AI hub and fits into the digital sovereignty ambitions of the EU as a whole.”

Almost a year ago, Gcore launched its bare metal offering. Can you explain what exactly makes it different from a regular dedicated server on the one hand and a virtual server on the other?

“We offer Bare-Metal-as-a-Service, which combines the high performance of traditional dedicated servers with the operational simplicity of virtual machines under the IaaS model. It takes just a few minutes to deploy a bare metal node in the cloud, after which the client gets access to a flexible, hybrid infrastructure, allowing them to use resources in the most economical manner possible. It can also be set up any way you want – meaning full management of the server and its resources on the client side.”

“Unlike a virtual machine, a bare-metal server provides exclusive access for a single customer, so they have access to all of the hardware resources and all of the bandwidth. Bare-metal servers’ performance is not only higher but also more predictable. They don’t use virtualization, which means that resources are distributed only among the tasks of one single client. This allows you to evaluate the load on the server and know in advance whether it will cope with a new service launch. Another reason these servers are attractive is that they provide fault tolerance and full access to hardware, alongside high performance and added security.”

“Some tasks demand low latency, high performance and full hardware control. For example, in the gaming industry, with a predictable load, it is more convenient to keep the game server on a physical machine. Such servers are also very attractive for retailers and fintech projects. Some databases are also sensitive to disk system latency and CPU performance. For our customers, we provide the ability to use physical servers and virtual machines at the same time, allowing them to get the maximum flexibility and efficiency out of their systems. At the same time, we retain the key advantages of the cloud – ease of configuration, management, scaling and hourly payment.”
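To make the “deploy a bare metal node in minutes” workflow concrete, provisioning under an IaaS model is typically an API call rather than a manual order. The endpoint, payload fields and token below are hypothetical placeholders and do not reflect Gcore’s actual API.

```python
# Hypothetical sketch of provisioning a bare metal node through a cloud API.
# The URL, payload fields and token are placeholders, not Gcore's real API.
import requests

API_URL = "https://cloud.example.com/v1/baremetal"  # placeholder endpoint
TOKEN = "REPLACE_ME"                                # placeholder API token

payload = {
    "name": "game-server-01",
    "flavor": "bm-highcpu-32",     # hypothetical hardware profile
    "image": "ubuntu-22.04",
    "region": "Luxembourg",
    "network": "shared-with-vms",  # attach to the same network as existing VMs
}

response = requests.post(
    API_URL,
    json=payload,
    headers={"Authorization": f"Bearer {TOKEN}"},
    timeout=30,
)
response.raise_for_status()
print(response.json())
```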

Gcore recently launched a Managed Kubernetes service. What’s the importance of this service for your clients?

“To explain the importance of Managed Kubernetes, I need to touch on Kubernetes itself. Its operational benefits, including reducing the workload of the IT team among other things, have become increasingly apparent over the last few years. However, the complexity of orchestrating containers across clouds and on-premise deployments may prove overwhelming for many of the enterprises that would benefit from using it. That’s why platforms and services that make it easy for global enterprises to transition to Kubernetes are of great value in the marketplace today.”

“That’s why our new Managed Kubernetes cloud service, which is already available in 15 regions, is so important. It allows you to use Kubernetes within our cloud infrastructure and makes working with clusters much easier. The service gives customers access to ready-to-use Kubernetes clusters in minutes, with one click, via the control panel, APIs or HashiCorp Terraform, and allows you to automate application scaling and deployment in our secure cloud environment. It makes it possible to create clusters, manage the worker nodes through an all-in-one Gcore panel, and automate processes even more efficiently. So, in short, you get all the capabilities of Kubernetes, including a flexible infrastructure, while we take care of such routine tasks as deploying clusters and managing master nodes.”
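Once a managed cluster has been created via the panel, API or Terraform, teams work with it like any other Kubernetes cluster. The sketch below assumes the cluster’s kubeconfig has already been downloaded and configured locally, and lists the worker nodes with the official Kubernetes Python client; names and paths are placeholders.

```python
# Minimal sketch: inspecting a managed Kubernetes cluster with the official
# Python client, assuming the kubeconfig exported from the provider's panel
# is already in place locally (~/.kube/config).
from kubernetes import client, config

config.load_kube_config()  # reads the local kubeconfig

v1 = client.CoreV1Api()

# List the worker nodes; in a managed service the master (control plane)
# nodes are operated by the provider.
for node in v1.list_node().items:
    print(node.metadata.name, node.status.node_info.kubelet_version)
```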

Gcore recently deployed several additional network PoPs, in Warsaw and Johannesburg. What’s the end goal for Gcore in terms of PoP density, and how are you ensuring future network performance?

“Our goal is the comprehensive development of a global edge infrastructure. Today, Gcore’s public clouds are available in more than 15 locations, including Luxembourg, Amsterdam, Tokyo, Singapore, Manassas, and Santa Clara (USA). In addition to Warsaw and Johannesburg, new PoPs have recently been launched in Sydney, Chicago, and São Paulo (Brazil). We intend to run at least 40 cloud points of presence worldwide as part of our global expansion strategy.”

“Speaking of the CDN, it already has more than 150 PoPs around the world, and since the beginning of the year, we have deployed new CDN points of presence in the USA, Cyprus, Vietnam, Tajikistan and Guatemala. In the next three years, we expect to expand it to 250+ PoPs, as well as achieve minimal latency and better performance by optimizing our peering network – here we have over 12,000 peering partners.”

What new Gcore products and/or services can we expect in the near future?

“First of all, we intend to further expand our global infrastructure. For example, in the next three years, we are going to launch around 100 new CDN PoPs. Likewise, we plan to increase the number of locations for our cloud infrastructure: right now, there are more than 15 of them but our goal is at least 40. We will also have new offices in the US, Singapore and the Philippines in addition to those opened since the beginning of the year in Poland, Serbia and Georgia.”

“As for new products and services, we will first of all devote a lot of effort and attention to the development of our new offering: a global IPU-based cloud AI-Infrastructure-as-a-Service. In the near future, we plan to cover five more regions with it across North and Latin America, Europe and the Middle East.”

“We also plan to increase the number of bare metal servers and further develop platform services, including Managed Kubernetes and the AI platform.”

“As for cybersecurity, new traffic filtering centers will be opened to increase the network’s capacity to repel attacks. In addition, we will deepen the integration of our CDN with our cybersecurity solutions to significantly increase our ability to mitigate attacks and provide an unmetered fault-tolerant low-latency service for our customers. We also plan to further strengthen our DDoS and malicious bot protection capabilities.”

G-Core Labs data center
