In this article, I’ll compare the main public cloud vendors for running Kubernetes. Obviously, this assumes that you’ve already decided that Kubernetes is the way to go. It’s important to understand the main features and capabilities of each provider, so I’ll present what I think are some clear criteria for choosing your target platform.
DIY or managed service? Before I get into the public cloud vendors, it’s important to highlight that Kubernetes is so modular, flexible, and extensible that it can be deployed on-prem, in a third-party data center, in any of the popular cloud providers, and even across multiple cloud providers. With such an array of choices, what should you do for your business and your peace of mind? The answer, of course, is “it depends.” Should you run your Kubernetes systems on-prem or in third-party data centers? You may have already invested a lot of time, money, and training in your bespoke infrastructure. But the challenges of DIY Kubernetes infrastructure become more and more burdensome, as you need to invest time and operational cycles both in standing up the environment and in its ongoing daily management.
Or should you run your Kubernetes system on one of the cloud providers? You may want to benefit from the goodness of Kubernetes without the headache of having to manage it and keep it in tip-top form with upgrades and security patching.
What’s also important to note is that you’ll need to be containerized already. If you’re already there, then great; if not, taking that monolithic application to a brave new world is going to be a challenge, but it does bring its benefits as you drive your business forward.
Choosing to run Kubernetes managed by your cloud provider is probably a no-brainer. You already run workloads in the cloud, right? Managed Kubernetes replaces many layers of management, monitoring, and security that you would otherwise have to build yourself, integrate with your processes, and maintain, all of which requires having the right skill set in-house.
There are actually quite a few cloud providers that support Kubernetes. I’ll focus here on the Big Three: Google’s GKE, Microsoft’s AKS, and Amazon’s EKS, and also provide a view of what IONOS Enterprise Cloud offers.
Google GKE (Google Kubernetes Engine) Kubernetes, if you didn’t know already, came from Google. GKE is Google’s managed Kubernetes offering. Google SREs manage the Kubernetes control plane for you, and you get automatic upgrades. Since Google has so much influence on Kubernetes and has used it as the container orchestration solution of the Google Cloud Platform from day one, it would be really weird if it didn’t have the best integration.
GKE may be the most up to date on releases. On GKE, you don’t have to pay for the Kubernetes control plane, which is important to bear in mind if controlling costs matters to your business, as I assume it does. So with Google, you just pay for the worker nodes. Google can also provide GCR (Google Container Registry) and integrated central logging and monitoring via Stackdriver Logging and Stackdriver Monitoring, albeit at a hefty price, and if you’re interested in even tighter integration with your CI/CD pipeline you can use Google Cloud Build, which adds even more cost. This is all great, but as with most PaaS offerings, once you get locked in, you’re locked in. The main thing to keep in mind is that flexibility is key with Kubernetes: most ancillary services can be bolted on to your hosted servers, so you’re not stove-piped into using the vendor’s tools if you don’t want to be.
GKE takes advantage of general-purpose Kubernetes concepts like Service and Ingress for fine-grained control over load balancing. If your Kubernetes Service is of type LoadBalancer, GKE will expose it to the world via a plain L4 (TCP) load balancer. However, if you create an Ingress object in front of your Service, GKE will create an L7 load balancer capable of doing SSL termination for you, and it will even allow gRPC traffic if you annotate it correctly. Of course, setting up your own Ingress controller is also possible should the need arise.
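To make the distinction concrete, here is a minimal sketch of the two objects. Manifests are normally written in YAML; they are built here as Python dictionaries purely so the shapes are explicit, and the names, ports, and selector are placeholders of my own, not from any particular deployment:

```python
# A Service of type LoadBalancer: GKE provisions a plain L4 (TCP) balancer.
service = {
    "apiVersion": "v1",
    "kind": "Service",
    "metadata": {"name": "web"},
    "spec": {
        "type": "LoadBalancer",
        "selector": {"app": "web"},
        "ports": [{"port": 80, "targetPort": 8080}],
    },
}

# An Ingress in front of that Service: GKE provisions an L7 balancer
# instead, which can terminate SSL for you.
ingress = {
    "apiVersion": "networking.k8s.io/v1",
    "kind": "Ingress",
    "metadata": {"name": "web"},
    "spec": {
        "rules": [{
            "http": {
                "paths": [{
                    "path": "/",
                    "pathType": "Prefix",
                    "backend": {"service": {"name": "web",
                                            "port": {"number": 80}}},
                }]
            }
        }]
    },
}

print(service["spec"]["type"], ingress["kind"])
```

The point is simply that the Service alone gets you TCP load balancing, while adding the Ingress object in front of it is what triggers the smarter L7 behaviour.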
Microsoft Azure AKS (Azure Kubernetes Service) Microsoft Azure originally had a solution called ACS that supported Apache Mesos, Kubernetes, and Docker Swarm. But, in 2017 it introduced AKS as a dedicated Kubernetes hosting service.
AKS is very similar to GKE. It also manages the Kubernetes control plane for you free of charge. Microsoft has invested a lot in Kubernetes in general and AKS in particular. There is strong integration with Active Directory for authentication and authorization, integrated monitoring and logging, and Azure storage. You also get a built-in container registry, networking, and GPU-enabled nodes. One of the most interesting features of AKS is its use of the Virtual Kubelet project to integrate with ACI (Azure Container Instances). ACI takes away the need to provision nodes for your cluster.
Setting up a cluster on AKS takes a long time (20 minutes on average) and the startup time has high volatility (more than an hour on rare occasions). The developer experience is relatively poor. You need some combination of a web UI (Azure Portal Manager), PowerShell, and plain CLI to provision and set everything up.
Amazon AWS EKS (Elastic Kubernetes Service) Amazon was a little late to the Kubernetes scene; it always had its own ECS (Elastic Container Service) container orchestration platform. But customer demand for Kubernetes was overwhelming, and many organizations were running their Kubernetes clusters on EC2 using Kops or similar tools, so eventually AWS decided to provide proper support with official integrations. EKS today integrates with IAM for identity management, AWS load balancers, networking, and various storage options.
AWS has promised integration with Fargate (similar to AKS + ACI). This will eliminate the need to provision worker nodes and potentially let Kubernetes scale automatically up and down for a truly elastic experience. Note that on EKS you have to pay for the managed control plane, which can be a limiting factor if you just want to play around and experiment with Kubernetes, or if you run lots of small clusters.
As far as performance goes, EKS takes 10–15 minutes to start a cluster. EKS is probably not the simplest to set up either: as with AKS, you’re moving between the management console, IAM, and the CLI to get the cluster up and running. It’s probably the most complex setup of the three cloud vendors, so in reality it could take a little under an hour from the initial deployment to having the cluster up and running.
IONOS Enterprise Cloud So what about the other vendors? Well, there are quite a few, from the likes of Oracle, IBM, and DigitalOcean, and there is also IONOS Enterprise Cloud. If I were to compare how IONOS fares against the top three, I would say there is some catching up to do on ancillary PaaS services, but for creating a cluster and providing worker nodes, IONOS does this with ease and simplicity, actually much better than the competition. IONOS has UI integration with the Data Center Designer, which is missing from the top three providers; it’s such a simple process to get up and running that clusters can be ready to use in under 15 minutes.
Having the ability to choose the amount of CPU and RAM is a huge deal: you’re not forced into fixed sizes for your worker nodes. Adding and removing worker nodes is simple too; just remember to drain your nodes before you remove them. IONOS also has full API integration; in fact, a cluster and worker nodes can be up and running with four API calls. With IONOS you get dedicated CPU and RAM resources, so performance is a given. IONOS also brings GDPR-compliant cloud infrastructure without having to worry about the US CLOUD Act, which should be top of your list of cloud service requirements.
There are also services such as persistent volumes in the shape of HDD and SSD storage, and load balancer services just like the other vendors, with more services on their roadmap. Also, as it’s vanilla Kubernetes, it’s easy to add things like Istio, Prometheus, Grafana, and Ingress load balancers too. I’ve not even touched on cost yet, but IONOS comes in under the competition’s reserved instance pricing, making it very attractive. Here are some rough figures, though, to help you determine costs when choosing a Kubernetes platform. This monthly cost comparison assumes that you have 3 master nodes, 15 worker nodes, and each node has 4 vCPU and 16GB of RAM.
[Table: monthly cost comparison across Google Cloud Platform, Azure (D4 v3 instances), AWS, and IONOS (2 dedicated CPU cores per node), noting which providers include a free control plane. The original table did not survive conversion.]
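The way such a comparison comes together is simple: worker-node spend plus any managed control-plane fee. The per-node and control-plane rates below are illustrative placeholders of my own, not the vendors’ actual prices:

```python
HOURS_PER_MONTH = 730  # a common billing approximation

def monthly_cost(worker_nodes, price_per_node_hour, control_plane_per_hour=0.0):
    """Worker-node spend plus any managed control-plane fee.

    On GKE and AKS the managed control plane is free; on EKS you pay
    for it, so control_plane_per_hour would be non-zero there.
    """
    workers = worker_nodes * price_per_node_hour * HOURS_PER_MONTH
    control_plane = control_plane_per_hour * HOURS_PER_MONTH
    return round(workers + control_plane, 2)

# 15 worker nodes at a hypothetical $0.20/hour per 4 vCPU / 16 GB node
free_cp = monthly_cost(15, 0.20)                               # GKE/AKS-style
paid_cp = monthly_cost(15, 0.20, control_plane_per_hour=0.10)  # EKS-style
print(free_cp, paid_cp)  # → 2190.0 2263.0
```

Even at these made-up rates you can see how a per-hour control-plane charge adds a fixed overhead per cluster, which is why it matters most when you run many small clusters.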
Conclusion Kubernetes itself is platform agnostic. In theory, you can easily switch from any cloud platform to another as well as run on your own infrastructure. In practice, when you choose a platform provider you often want to utilize and benefit from their specific services that will require some work to migrate to a different provider or on-prem.
There are a number of container orchestration tools out there, the likes of Rancher, Swarm, etc., but it looks like Kubernetes has won the container orchestration wars. The big question for you is where you should run it. Usually, the answer is simple: if you’re already running on one of the cloud providers, then check that your vendor is the right choice. This is where multi-cloud gives you benefit, allowing you to leverage the best the cloud has to offer so you can run your Kubernetes cluster with confidence.
Backup Exec 16 Feature Pack 2 provides S3-compatible cloud storage functionality. Customers can use the IONOS S3-compatible cloud implementation with Backup Exec. When the configuration process is complete, you can create a storage device within the Backup Exec console that can access most S3-compatible cloud environments. S3-compatible environments that are not specifically listed in the Backup Exec 16 Hardware Compatibility List are considered Alternative Configurations, as defined in that list.
Configuring IONOS S3-Compatible Cloud Storage with Backup Exec
Configuring IONOS S3-compatible cloud storage using the S3 Cloud Connector in Backup Exec 16 FP2 is a two-step process:
Create a cloud instance for your cloud – requires pre-configuration of a user account and buckets in the cloud environment. The cloud location and configuration parameters must be provided to the Backup Exec server by configuring a cloud instance using the Backup Exec Command Line Interface (BEMCLI) (see Creating a Cloud Instance for S3 Compatible Cloud).
Create a cloud storage device – in the Backup Exec console by using the storage device configuration wizard and providing the account credentials that can access the S3-compatible cloud location.
S3 Cloud Pre-Configuration Requirements
In the cloud environment, create an account for Backup Exec read/write access. The cloud account credentials, known as the server access key ID and secret access key, must be provided in the Backup Exec console to create the storage device.
The cloud environment must also have buckets configured for Backup Exec use. Buckets represent a logical unit of storage in a cloud environment. As a best practice, create specific buckets to use exclusively with Backup Exec. Each Backup Exec cloud storage device must use a different bucket. Do not use the same bucket for multiple cloud storage devices, even if these devices are configured on different Backup Exec servers.
Bucket names must meet the following requirements:
contain only lowercase letters, numbers, and dashes (or hyphens)
not begin with a dash (or a hyphen)
Bucket names that do not comply with the bucket naming convention will not be displayed in the Backup Exec console during storage device configuration.
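The naming rules above can be captured in a short validation check. This sketch is mine, for illustration only; Backup Exec itself simply hides non-compliant buckets rather than running anything like this:

```python
import re

# Bucket-name rules from the text: only lowercase letters, numbers,
# and dashes, and the name must not begin with a dash.
_BUCKET_RE = re.compile(r"^[a-z0-9][a-z0-9-]*$")

def is_valid_bucket_name(name: str) -> bool:
    """Return True if the name would be displayed by Backup Exec."""
    return bool(_BUCKET_RE.match(name))

print(is_valid_bucket_name("backupexec-vol1"))    # → True
print(is_valid_bucket_name("-starts-with-dash"))  # → False
print(is_valid_bucket_name("Has-Uppercase"))      # → False
```

Running a check like this against your bucket names before opening the storage device wizard saves a round of head-scratching over buckets that silently fail to appear.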
Creating a Cloud Instance for IONOS S3-Compatible Cloud
To create a custom cloud instance for an S3-compatible cloud storage server use the BEMCLI command “New-BECloudInstance”.
To run BEMCLI on the computer on which Backup Exec is installed, do one of the following:
On the taskbar, click Start > All Programs > Veritas Backup Exec > Backup Exec Management Command Line Interface.
Open PowerShell, and then type Import-Module BEMCLI.
From the BEMCLI command line interface, run the New-BECloudInstance command with the required parameters, for example: New-BECloudInstance -Name "IONOS-Enterprise-Cloud" -Provider "compatible-with-s3" -ServiceHost "s3-de-central.profitbricks.com" -SslMode "Disabled" -HttpPort 80 -HttpsPort 443
-Name: Specifies the name of the new cloud instance. Cloud instance names must follow Backup Exec naming rules: they can contain letters, numbers, and dashes (or hyphens), and cannot begin with a dash (or a hyphen).
-Provider: Specifies the provider name of the cloud instance. For S3-compatible storage the provider name is ‘compatible-with-s3’.
-ServiceHost: Specifies the service host of the cloud instance. ServiceHost should be unique for each cloud instance that is created on the Backup Exec server.
-SslMode: Specifies the SSL mode that Backup Exec will use for communication with the cloud storage server. The valid values are: Disabled (do not use SSL), a value to use SSL for authentication only, and a value to use SSL for authentication and data transfer.
Note: Backup Exec supports only Certificate Authority (CA)-signed certificates while it communicates with cloud storage in SSL mode. Ensure that the cloud server has a CA-signed certificate; if it does not, data transfer between Backup Exec and the cloud provider may fail in SSL mode. Users may choose to opt out of SSL and set SslMode as Disabled.
To confirm the command completed successfully, run the BEMCLI command Get-BECloudInstance. The parameters of the newly configured cloud instance will be displayed. Ensure that the ServiceHost points to the correct S3-compatible cloud implementation, the provider name is accurate, and the SSL mode is set correctly. If any parameters are not correct, rerun the New-BECloudInstance command with the corrected parameters.
Creating a Cloud Storage Device for S3-Compatible Cloud
To configure a storage device for an S3-compatible cloud in Backup Exec:
1. On the Storage tab, in the Configure group,
click Configure Cloud Storage.
2. Click Cloud storage, and then click Next.
3. Enter a name and description for the cloud storage device, and then click Next.
4. From the list of cloud storage providers, select S3, and then click Next.
5. From the Cloud Storage drop-down, select the name of the cloud instance created with BEMCLI, and then click Next.
6. Click Add/Edit next to the Logon account field.
7. On the Logon Account Selection dialog box, click Add.
8. On the Add Logon Credentials dialog box, do the following:
In the User name field, type the cloud account access key ID.
In the Password field, type the cloud account secret access key.
In the Confirm password field, type the cloud account secret access key again.
In the Account name field, type a name for this logon account.
The Backup Exec user interface displays this name as the cloud storage device
name in all storage device options lists.
9. Click OK twice.
10. Select the cloud logon account that you created in step 8, and then click Next.
11. Select a bucket from the list of buckets that are associated with the
server name and the logon account details you provided in earlier screens, and
then click Next.
12. Specify how many write operations can run at the same time on this cloud
storage device, and then click Next.
13. This setting determines the number of jobs that can run at the same time on
this device. The suitable value for this setting may vary depending on your
environment and the bandwidth to the cloud storage. You may choose the default value.
14. Review the configuration summary, and then click Finish.
Backup Exec creates a cloud storage device. You must restart Backup
Exec services to bring the new device online.
15. In the window that prompts you to restart the Backup Exec services,
click Yes. After services restart, Backup Exec displays the new
cloud storage location in the All Storage list. If the S3-compatible cloud
environment is not displayed in the Backup Exec storage device configuration
wizard or console, use BEMCLI to ensure the parameters for the cloud instance are correct.
Once the S3-compatible cloud storage device is configured in Backup Exec, you can target backup, restore, and duplicate jobs to the cloud server. As a best practice, complete test backup and restore operations before running regularly scheduled jobs. Backup Exec data lifecycle management will automatically delete expired sets from the cloud server.
With the Enterprise Cloud, you receive a modern IaaS platform for cloud computing—highly available, secure, reliable, and with fast software defined networking. This means you receive precisely the virtual IT infrastructure that your company actually needs. The drag and drop feature in our Data Centre Designer allows you to put together the resources for your customised virtual data centre, without any rigid, prefab packages.
Our live vertical scaling gives you the option of flexibly adding new capacities and components to your virtual infrastructure – at any time, on short notice, and without rebooting the system! This is what makes the Enterprise Cloud by 1&1 IONOS one of the most attractive corporate cloud solutions available anywhere on the market.
Always ask before entering into any contract, “How do I get my data out in the future if I need or want to?”
Cloud vendor lock-in is typically a situation in which a customer using a product or service cannot easily transition to a competitor. Lock-ins are usually the result of proprietary technologies that are incompatible with those of competitors, but they can also be caused by inefficient processes or contractual constraints, among other things. I’ve seen many customers come up against this in the past with traditional data centers, where their storage vendor or hypervisor solutions locked them into fixed solutions that inhibited their agility in moving to new technologies. The cloud, whether public or private, can be no different when it comes to using lock-in techniques to retain its user base.
Fear of Lock-in
Cloud lock-in is often cited as a major obstacle to cloud service adoption. There are a number of reasons why a company may look to migrate to the cloud. Most often it’s all about reducing the physical infrastructure in their data centers; cloud gives them the agility they’re looking for, while reducing not only the CAPEX but also the OPEX required for the ongoing maintenance of their systems.
There’s also the question of how they should migrate to the cloud. The complexities of the migration process may mean that the customer stays with their current provider, which could also mean a compromise: the current provider doesn’t meet all their needs and limits the agility of their IT and the value it provides to the business.
In some cases, during a migration to another provider, it may be necessary to move the data and services back to the original on-premises location first, which in itself may be an issue: the original architecture may no longer be available, or the data center may now have reduced resource availability that prohibits such an action. Furthermore, the data may have been changed to allow its operation on a particular cloud vendor’s platform and would need to be altered again to run on an alternative cloud platform.
Cloud vendor lock-in
It’s only natural that cloud vendors want to lock you in; after all, they’re there to make money and need you to stay with them. They work at ways to keep you using their services and try to ensure that migrations are not an easy task. Their customers often don’t know the impact until they try to migrate, and it can be devastating when it happens. Due to these challenges, migration services from third-party vendors are becoming a common occurrence and turning into a lucrative business.
Taking the leap
Most companies I’ve talked to recently have similar experiences when looking to migrate from their current cloud vendors. The majority were unhappy with the real costs of using cloud infrastructure; after all, cloud was supposed to be cheap, but the ROI was taking longer than first anticipated. The cloud vendors’ support services were a close second, due to the lack of any personal experience offered by their vendor; I guess there’s only a number of times that “Take a look at this FAQ” is going to help.
One of the other major problems with cloud vendors is that you typically need to over-allocate already inflated resources for the services you are providing, as cloud resources are most of the time shared with other users. It’s a bit like a house share: the last thing you need is someone hogging the bathroom.
PaaS services were another reason. Whilst PaaS is great at reducing the OPEX of the underlying infrastructure and application or database services, it does start to get expensive with large numbers of API gateway calls, which, if unplanned for, can be a bit of a surprise when you get your invoice. Add to that, one cloud’s PaaS may not be interoperable with another’s, so some type of data cleaning is going to be needed.
GDPR (there, I’ve said it) was another reason that raised its head, especially if the vendor is US-based, in which case the CLOUD Act comes into effect.
If you’re using a US-based provider, then your data is no longer private, as it can be handed over to the US government if they deem there is a suspect need. Hosting in a region outside of the US doesn’t help either, so using an Irish region will not allow you to escape the act. The last time I checked, the big three public clouds are all US-owned. If you believe this may not affect you, then you don’t need to look too far to see it in action: I’m sure we all remember Cambridge Analytica and the Facebook debacle; that company had to hand over its data and now no longer exists. Taking up a hybrid cloud approach and using a dedicated European provider with multi-region support will help avoid this.
One company that I spoke to had a concerning case: their cloud vendor had no export facility for the data, and they faced challenges in how to cleanly extract it. This was compounded even more when the tax man called in an audit on their accounts during the migration phase, and they had to take a hit on a penalty because the accounts were not available at the time of the audit. The whole process was painful and time-consuming, and they surely learnt a lot from the experience.
And the moral of the story is …..
Ask the important questions: “How is the data securely stored?”, “Who has access to my data?”, “How is my data protected?”, “Do I need to modify my data so the cloud vendor can store it?” and, most importantly, “How do I get my data out in the future if I need or want to?” In most cases, getting your data out is going to cost you, but knowing that it’s possible is half the battle. If your new provider has tools to make it easier for you, then that’s even better.
Be aware of the existence of the CLOUD Act and its potential implications for your business. Adopt a hybrid cloud strategy, which clearly defines which data can be stored in public cloud services, and what should be stored in data centers operated by European managed service operators. If you have large amounts of customer data, and would like to alert them if you do get a request to hand over personal data under the CLOUD Act, you might want to consider adding a warrant canary clause on your website.
In the first part of this series I looked at Azure VMs and provided a comparison with IONOS Enterprise Cloud; in the second part we looked at AWS. This final post of the series will compare Google Cloud Platform (GCP).
As a bit of background in case you haven’t read the first or second parts yet: I’ve been working with the major cloud vendors for some years now, and for me performance has always been a key factor when choosing the right platform. I’ve always struggled to find the right balance of cost vs performance, and I created this blog to highlight some of the differences.
I’ve just started a new role as Cloud Architect for 1&1 IONOS Enterprise Cloud, and one of the main factors in coming here was the technology and some of the claims it makes, especially around performance and simplicity. This blog will examine those performance claims and the cost benefit of choosing the right cloud provider.
For these tests I’ve kept it simple. I’m using small instances that will host microservices, so cost is one variable but performance is another. I will be creating an instance with 1 vCPU and 2 GB RAM; this system will be a baseline for testing, and I will use Novabench (novabench.com) for some basic CPU and RAM performance modelling. There are so many tools out there, but I find this one really quick and simple for testing some key attributes. I will also use the same tool for all the instances, so this should give unbiased results too.
So, on with the comparison, and next up is GCP. For this I’ve selected a custom VM size, as this is as near as consistent with the other instances on the clouds I have been testing. The CPU used is an Intel Xeon 2.3 GHz, and the price for this, including Windows Server licensing and support costs, comes out at £50.64 per month.
For IONOS Enterprise Cloud I’ve also selected a similar spec to GCP, 1 CPU and 2 GB RAM, and have used the Intel Haswell E5-2660 v3 based chip for the OS, as this will be as close as possible to the custom VM in GCP. Like GCP, I’ve also included the Windows Server license cost in the subscription, along with 24/7 support, which is actually free. The monthly cost for this server is £59.18, so on cost alone GCP has a slight edge: you would save £102.48 over the year by choosing GCP over IONOS Enterprise Cloud. So what about the performance?
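The annual saving falls straight out of the two quoted monthly prices:

```python
# Annual difference between the two quoted monthly prices.
ionos_monthly = 59.18
gcp_monthly = 50.64

annual_saving = round((ionos_monthly - gcp_monthly) * 12, 2)
print(annual_saving)  # → 102.48
```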
First I wanted to see how the external and internal internet connectivity was performing. To no big surprise, IONOS way outperformed GCP by a factor of 2, which is to be expected given the infrastructure back-end design running on InfiniBand and the datacentre interconnects. The download speed, however, was comparable for Google, which you would expect from the internet giant.
Next, the focus turned to CPU, RAM, and disk performance. For this I ran the Novabench performance utility on both servers, and the tests did throw up some major differences between the two. Let’s take a look at GCP first.
GCP custom 1 vCPU & 2GB Ram VM Novabench Results
The GCP results were interesting, to the point that twice as many resources would be required to reach the level of the IONOS instance. The GCP instance scored more or less half of the IONOS figures across its CPU, RAM, and disk benchmarks, though it must be noted that GCP instances run on shared resources. The RAM score showed much lower throughput, with a difference of 11964 MB/s, but what was most noticeable was that the disk read and write performance was half that of IONOS; the write speed was not what you would expect from SSD storage.
The IONOS Enterprise Cloud exhibited nearly twice the values of the GCP results.
IONOS Instance Novabench result
Due to the dedicated resources used by IONOS Enterprise Cloud, it becomes apparent that other public cloud vendors have to double (GCP and AWS) or even quadruple (Azure) their resource configurations to be comparable in performance to IONOS. For GCP to catch up to a similar performance to the IONOS Enterprise Cloud instance, the GCP instance would need to be reconfigured to a custom VM with 2 vCPUs and 4 GB RAM, twice the resources of the IONOS instance. That would increase the monthly cost to £94.57, which equates to £1134.84 for the year: an extra £424.68 per year for an instance of equal performance to the IONOS one.
GCP custom 2 vCPU& 4GB Ram VM Novabench Results
Can you really justify spending an additional £400 per year on just one system for the same performance? IONOS Enterprise Cloud provides dedicated CPU and RAM and is surely the way to go.
Don’t just take my word for it, give it a go yourself, I’m sure you’ll be impressed with the results.
I’ve been working with the major cloud vendors for some years now, and for me performance has always been a key factor when choosing the right platform for Infrastructure-as-a-Service. I’ve always struggled to find the right balance of cost vs configuration, and I created this three-part blog to highlight some of the differences I’ve seen between Azure, AWS, and Google Cloud.
I’ve just started a new role as Cloud Architect for 1&1 IONOS, working in the Enterprise Cloud division. One of the main factors in coming here was the technology stack, the surrounding network design, and some of the claims it makes, especially around performance and simplicity. This blog will examine those performance claims and the cost benefit of choosing the right cloud provider.
For the tests I’ve kept it simple. I will be using small instances that will eventually host microservices with Docker, so cost is one variable but performance is another. I will be creating an instance with 1 vCPU and 2 GB RAM; this system will be a baseline for testing, and I will use Novabench (novabench.com) for some basic CPU and RAM performance modelling. There are so many tools out there, but I find this one really quick and simple for testing some key attributes. I will also use the same tool for all the cloud vendors’ instances, so this should show unbiased results too.
Let’s start by looking at Azure. For this I’ve selected the A1_v2 size, as this is consistent with the other instances on the clouds I will be testing. The CPU used is an Intel Haswell E5-2673 v3, and the price for this, including Windows Server licensing and support costs, comes out at £62.20 per month.
Azure Pricing calculator for A1_v2
For IONOS Enterprise Cloud I’ve also selected a similar spec and have used the Intel Haswell E5-2660 v3 based chip for the OS, as this will be very close to the A1_v2 instance in Azure. Like Azure, I’ve also included the Windows Server license cost in the subscription, along with 24/7 support, which is actually free. The monthly cost for this server is £50.96, so comparing costs, using IONOS Enterprise Cloud would save £134.88 over the year. A saving is a saving, so on paper the costs look good so far.
IONOS Enterprise Cloud Pricing for A1_v2 equivalent
Now, what about performance tests between the two? First I wanted to see how the external and internal internet connectivity was performing. No big surprise: IONOS way outperformed Azure, by a factor of 3, which is to be expected given the infrastructure back-end design running on InfiniBand and the datacentre interconnects.
Next, the focus turned to CPU, RAM, and disk performance. For this I ran the Novabench performance utility on both servers, and the tests did throw up some major differences between the two. Let’s take a look at Azure first.
Azure A1_v2 Instance Novabench Results
The Azure instance had a low score for its CPU benchmark, which makes sense as the CPU is a shared resource with other instances hosted on that Hyper-V cluster node within the Azure cloud. The RAM score was also low, with a throughput of 3929 MB/s. What was noticeable was that the disk read performance was good, with a throughput of 163 MB/s, but write speeds were the complete polar opposite.
The IONOS Enterprise Cloud eclipsed the metrics of the Azure instance and really showed off the advantage of having dedicated CPU and memory resources for the instance.
IONOS Instance Novabench result
The CPU performance was 385% that of the CPU in Azure; for Azure to achieve a similar score, an additional 3 CPUs would have to be added. The RAM speed was also way beyond that of Azure, achieving 19318 MB/s, a factor of 3 faster. The disk read and write performance both outperformed Azure, maintaining equal throughput for reads and writes, with writes outperforming Azure by a factor of 18. Just a note here that I used a standard HDD as the storage medium; I could have used an SSD instead, which would have increased the performance even more.
Finally, I configured another instance in IONOS Enterprise Cloud using an AMD Opteron 62xx 2.8 GHz processor to see if it could match the Intel-based Azure instance. For much of the benchmark it was comparable to the Azure instance, and even better, the cost of the instance was £31.52 a month, giving a saving of £368.16 over the year. It should be mentioned that IONOS Enterprise Cloud lets you configure cores and storage at will, in the most granular way possible: core by core and gigabyte by gigabyte.
IONOS AMD Instance Novabench result
For Azure to catch up to a similar performance to that of IONOS Enterprise Cloud, the Azure instance would need to be reconfigured to the A4_v2 size, four times the resources of the IONOS instance. This would increase the monthly cost to £184.22, which equates to £2210.64 for the year, of which £1599.12 would be the premium for an instance of equal performance to the IONOS one.
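The premium follows from the quoted annual A4_v2 cost and the IONOS monthly price:

```python
# Yearly cost of the resized Azure A4_v2 vs the IONOS baseline instance.
azure_a4v2_yearly = 2210.64  # quoted annual cost of the A4_v2
ionos_monthly = 50.96

ionos_yearly = round(ionos_monthly * 12, 2)
premium = round(azure_a4v2_yearly - ionos_yearly, 2)
print(ionos_yearly, premium)  # → 611.52 1599.12
```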
Azure A4_v2 Instance Novabench Results
Can you really justify spending an additional £1600 per year for the same performance? IONOS Enterprise Cloud employs KVM-based virtualisation, making extensive use of hardware virtualisation; it maps the CPU power of a real core to a vCPU and provides dedicated memory, so it is surely the way to go.