Provider Terms Explained

July 13, 2018

Introduction

When shopping around for a new server/VPS, you often come across terms that talk about peering, RAID arrays, Tier-n datacenters and so on. Whilst you may know some of these terms, a newcomer may be baffled. Let me demystify the most common ones in this post.

Datacenter

Let’s start with the datacenter where the provider’s servers are housed, and with the most common word associated with a datacenter – Tier.

Tier

Tier 1-4 ratings are assigned by the Uptime Institute based on its Tier Classification System. The four tiers are defined by criteria such as power, cooling, fault tolerance and maintenance. Each higher tier includes all the features of the lower tiers, and then some.

Tier 1 & 2 datacenters are usually set up to get a product or application to launch quickly, and are designed with an eye on cost. Companies that use datacenters in the lower tiers usually do not depend on 100% availability for their revenue stream. These lower-tier DCs have backup power, with Tier 2 adding cooling requirements on top of the power backups.

Tier 3 & 4 datacenters, on the other hand, have better fail-safe measures and provide redundancy when individual components fail. Tier 3 datacenters are concurrently maintainable, i.e. portions of the datacenter can be taken offline for maintenance without impacting operations. Tier 3 datacenters offer N+1 redundancy for power. In a nutshell, N+1 means having one more component than the minimum required – for example, a UPS backup attached to the servers in addition to the overall power backups (onsite diesel generator, etc.). Tier 3 datacenters are still susceptible to operational errors or spontaneous failures of infrastructure components.

Tier 4 datacenters are fully fault tolerant against individual equipment failures and distribution path interruptions. Tier 4 is usually very expensive to set up as it involves replicating every component in the datacenter. If cost is a factor, a Tier 3 datacenter will suffice for most business needs.

Peering & Transit

As no single organization’s network can cover the entire world, it is critical for networks to interconnect via peering or transit. Peering is the more informal of the two and does not involve monetary exchange: Network A’s users want to reach servers on Network B and vice versa, so it makes for an even trade for the two networks to peer.

Transit, on the other hand, is a contract between two parties where one purchases the use of the other’s network. The term “transit” is used because the network acts as a transit route to the rest of the internet. Transit involves SLAs, and therefore better uptime guarantees than peering. Buying transit from a network (e.g., Network A) gives the provider access to all of the networks that peer or have a transit agreement with Network A.
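The reach that a single transit agreement buys can be sketched with a toy model in Python (the network names and peering relationships here are purely hypothetical):

```python
# Toy model: buying transit from one network grants access to every
# network it peers with. All names and relationships are hypothetical.

peering = {
    "NetworkA": {"NetworkB", "NetworkC"},  # A peers with B and C
    "NetworkB": {"NetworkA"},
    "NetworkC": {"NetworkA"},
}

def reachable_via_transit(transit_provider):
    """Networks reachable by purchasing transit from one provider:
    the provider itself plus everything it peers with."""
    return {transit_provider} | peering.get(transit_provider, set())

# A transit contract with NetworkA also reaches NetworkB and NetworkC.
print(sorted(reachable_via_transit("NetworkA")))
# ['NetworkA', 'NetworkB', 'NetworkC']
```

Real-world reachability is governed by BGP announcements and routing policy, but the principle is the same: one contract, many destinations.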

When on the lookout for your next server/VPS, look at the networks your provider connects to. Optimized connectivity to an Internet Exchange Point (IXP) in the country where your end users are is a huge plus in terms of latency: your users will see quicker response times compared to hosting in a distant datacenter that takes more hops to reach.

Nodes

Now let’s drill down to actual server and node specifications. The following terms are relevant and deserve your attention.

Virtualization

Virtualization is the process of splitting a single physical server into smaller virtual servers. Each virtual environment is self-contained and accessible only by its account holder. Virtual servers let you host multiple operating systems, or provision each one for a different purpose, eliminating the need to purchase separate physical servers. The two virtualization options you will most often see are KVM and OpenVZ (OVZ). Let’s start with KVM.

KVM, or Kernel-based Virtual Machine, provides full hardware virtualization. Each KVM node runs its own kernel, allowing you to run other OSes such as Windows and BSD. Because of this, each resource has fixed minimum and maximum values. KVM offers more isolation: everything runs within your node, you run your own kernel and OS, and you have guaranteed resources. KVM supports Docker out of the box, and full disk encryption is supported. As of this writing, KVM is as close as you can get to a dedicated server without actually buying a dedicated physical server.

With OpenVZ (OVZ), on the other hand, the base Linux kernel is split into partitions. Each partition, called a container, can be used to host an OS with a pre-determined set of resources. OpenVZ containers are sometimes marketed with “dedicated” and “burst” resources, where “burst” refers to resources that can be borrowed from the overall pool. You may need an extra burst of CPU to compile your code, which the server can provide if it has some to spare.

From a provider standpoint, OVZ uses resources more efficiently, and the shared kernel means higher performance. Typically, OVZ VPSes are cheaper than KVM (other resource considerations being equal). However, OVZ containers can be oversold, leading to performance degradation for end users. OpenVZ can also only run Linux, as the same underlying Linux kernel is shared across all containers.
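If you are unsure which of the two your VPS runs on, one quick heuristic is to look for OpenVZ’s per-container resource accounting file. This is a rough sketch, not an authoritative check – tools like `systemd-detect-virt` are more thorough:

```python
import os

def detect_virt(proc_root="/proc"):
    """Rough heuristic: OpenVZ containers expose /proc/user_beancounters
    (the kernel's per-container resource accounting), while KVM guests
    do not. Absence of the file does not prove KVM."""
    if os.path.exists(os.path.join(proc_root, "user_beancounters")):
        return "OpenVZ container"
    return "KVM, other hypervisor, or bare metal"

print(detect_virt())
```

On an OVZ VPS, the same file also shows your resource limits and failure counters (`failcnt`), which is handy for spotting an oversold node.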

RAID

As we have stressed before, data backup is one of the most critical components of any hosting setup. While RAID is not backup, it is a way to recover from disk failures. RAID stands for Redundant Array of Independent Disks (or Drives). Common RAID levels are:

RAID 0 – Uses a method called striping, where data is split across two or more disks. If one disk fails, all data is lost. However, RAID 0 delivers better read and write speeds.

RAID 1 – Uses mirroring to store the same information on two disk drives. Every write goes to both disks, which means the size of the array is only as big as the smallest disk in it. Read performance is better than a single disk, but writes take longer as they must be performed on both drives.

RAID 5 – An array of disks that provides failure protection by distributing parity information across the drives. If a single disk fails, the lost data can be rebuilt from the remaining drives. A RAID 5 configuration requires at least 3 disks.

RAID 10 – A nested RAID 1 + RAID 0 array requiring at least 4 disks. This is usually the most popular RAID setup and is used in database, email and web servers. Data is striped (RAID 0) and each stripe is written to two disks (RAID 1). Fault tolerance is higher here, as two disks can fail (one from each mirrored pair) and the data can still be recovered.
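The parity trick behind RAID 5’s single-disk fault tolerance is just byte-wise XOR, which a few lines of Python can demonstrate. This is a toy model, not a real controller – actual RAID 5 rotates parity across all disks rather than keeping it on one:

```python
def xor_parity(blocks):
    """Byte-wise XOR of equally sized blocks -- the RAID 5 parity."""
    parity = bytearray(len(blocks[0]))
    for block in blocks:
        for i, byte in enumerate(block):
            parity[i] ^= byte
    return bytes(parity)

# Three data "disks" plus one parity block (equal sizes, as in a stripe).
disk1, disk2, disk3 = b"hello", b"world", b"raid!"
parity = xor_parity([disk1, disk2, disk3])

# Disk 2 fails: XOR the survivors with the parity to rebuild it.
rebuilt = xor_parity([disk1, disk3, parity])
print(rebuilt)  # b'world'
```

Losing any single disk is recoverable this way, but losing two at once is not – which is why RAID 10 tolerates certain double failures that RAID 5 cannot.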

References

Tier Classification System - https://uptimeinstitute.com/tiers

Introduction to RAID levels - http://www.linux-mag.com/id/7928/


Ramesh Vishveshwar

Ramesh Vishveshwar is a tech blogger who is always on the lookout for the next big thing. Having discovered his infatuation for various flavors of Linux, he spends his time tinkering with VPS nodes, installing and trying out new applications. His interest in coding spans multiple languages, from PHP and shell scripting to older generation languages such as COBOL.
