Clarifying what cloud and virtualized data storage really mean

How are you using storage virtualization to lower your storage costs?

Cloud storage doesn’t always mean the public cloud. Virtualization and virtualized data storage aren’t always about virtual servers and desktops. Find out what’s really going on.

Logan G. Harbaugh

IT consultant/freelance reviewer – Independent consultant – SearchStorage

Let’s clear up some misconceptions about storage. First, cloud storage isn’t always hosted on a public service, such as AWS and Microsoft Azure. And second, virtualization and virtualized data storage don’t just refer to virtual servers or desktop systems hosted on VMware ESX or Microsoft Hyper-V. These two misconceptions are related, because one true thing about cloud storage is that it is virtualized.

To a certain extent, all storage is virtualized. Even the most basic block-based hardware system — a single hard disk — is mapped by the storage controller attached to it, which translates the physical blocks, sectors and tracks on the drive’s platters into a virtual set of blocks, sectors and tracks that the motherboard and host controller use to communicate with the disk.
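To make that mapping concrete, here is a minimal sketch of the classic logical block addressing (LBA) translation between a flat block number and a cylinder/head/sector location; the geometry figures are illustrative assumptions, and modern drives hide far more complex internal layouts behind the same flat interface.

    # Sketch of LBA-to-CHS translation, the simplest form of block
    # virtualization. Geometry values are illustrative assumptions.
    HEADS_PER_CYLINDER = 16
    SECTORS_PER_TRACK = 63

    def lba_to_chs(lba):
        cylinder, rest = divmod(lba, HEADS_PER_CYLINDER * SECTORS_PER_TRACK)
        head, sector = divmod(rest, SECTORS_PER_TRACK)
        return cylinder, head, sector + 1   # sectors are 1-based by convention

    def chs_to_lba(cylinder, head, sector):
        return (cylinder * HEADS_PER_CYLINDER + head) * SECTORS_PER_TRACK + (sector - 1)

    assert chs_to_lba(*lba_to_chs(123456)) == 123456   # the mapping round-trips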

Likewise, file-based storage presents an SMB or NFS volume containing files and metadata, even though the underlying file system might differ from the one the storage system presents. Many file servers store data on a more modern file system, such as ZFS, and translate between it and the SMB or NFS view that clients see. Others can present the same volume over both SMB (formerly known as CIFS) and NFS, so an SMB volume can be accessed as an NFS volume and vice versa. This is also a type of virtualized data storage.

The truth about virtualized data storage

Storage virtualization refers to storage that isn’t directly accessible to the storage consumer, which can be a server, server instance, client system or any other system that needs storage. Nearly all storage in the data center and in public and private clouds is virtualized.

One true thing about cloud storage is that it is virtualized.

Even iSCSI volumes and Fibre Channel LUNs that appear to be block devices and theoretically identical to an internal hard disk can be considered virtualized. They’re generally RAID volumes, which means that several physical disks are presented as one or more virtual disks. In addition, software features, such as tiering, snapshots and replication, require a virtualization layer between the physical storage and the consumer. Deduplication, compression and object storage add still more layers of virtualized data storage.

Virtualization can be useful. A volume that appears to an application or end user as a single contiguous directory tree may include files hosted on different storage tiers, some on local hard disks and others on low-cost cloud storage tiers. This results in high-performance storage at the lowest possible cost, because virtualized data storage lets files that haven’t been accessed for a while be moved to inexpensive storage.
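As a rough sketch of how that tiering decision might look in code, the following demotes files that haven’t been read in 90 days; the mount points and threshold are illustrative assumptions, and a real virtualization layer would leave a stub behind so the file still appears in its original directory.

    # Toy cold-data tiering: demote files not accessed in 90 days.
    import os, shutil, time

    HOT_TIER = "/mnt/fast"        # assumed high-performance local volume
    COLD_TIER = "/mnt/cheap"      # assumed low-cost, cloud-backed volume
    MAX_IDLE = 90 * 24 * 3600     # 90 days, expressed in seconds

    def demote_cold_files():
        now = time.time()
        for root, _dirs, files in os.walk(HOT_TIER):
            for name in files:
                path = os.path.join(root, name)
                if now - os.stat(path).st_atime > MAX_IDLE:   # last access time
                    dest = os.path.join(COLD_TIER, os.path.relpath(path, HOT_TIER))
                    os.makedirs(os.path.dirname(dest), exist_ok=True)
                    shutil.move(path, dest)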

Cloud options

Cloud storage is often assumed to be storage in the public cloud, like Amazon S3, Google Cloud Platform and Microsoft Azure. However, many vendors offer some form of cloud storage, ranging from backup vendors, such as Barracuda and Zetta; to Oracle, Salesforce and other cloud application vendors; to alternatives to the big three, such as DigitalOcean and Rackspace.

Data center cloud products also make storage easily available to applications, whether or not they’re running locally. Dell EMC, Hewlett Packard Enterprise, Hitachi Vantara and NetApp all offer these capabilities. Some of these products are proprietary, some are single-purpose and some are based on open source standards, such as Ceph.

This was last published in July 2018

Are hyper-converged infrastructure appliances my only HCI option?

Do you prefer to buy preconfigured hyper-converged appliances, deploy software-only HCI or build your own configuration?

Preconfigured hyper-converged appliances aren’t your only option anymore. Software-only and build-your-own hyper-converged infrastructure have unique pros and cons.

Alastair Cooke

SearchVirtualDesktop

Freelance trainer, consultant and blogger specializing in server and desktop virtualization

There are multiple ways to approach a hyper-converged infrastructure deployment, some of which give IT a little more control.

When we talk about building a hyper-converged infrastructure (HCI), the mental image is usually deploying some physical appliances using high-density servers and spending a few minutes with some wizard-driven software. But buying hyper-converged infrastructure appliances is just one way to do it.

As an IT professional, you can also deploy software-only HCI on your own servers. Or you can start from scratch and engineer your own infrastructure using a selection of hardware and software. The further you move away from the appliance model, however, the more you must take responsibility for the engineering of your deployment and problem resolution.

Let’s look more closely at hyper-converged infrastructure appliances and some do-it-yourself alternatives.

Preconfigured hyper-converged appliances

Hyper-converged infrastructure appliances wrap up all of their components into a single order code, or SKU. The vendor does all of the component selection and engineering to ensure that everything works together and performs optimally.

Usually, the hyper-converged appliance has its own bootstrap mechanism that deploys and configures the hypervisor and software with minimal input from IT. For many customers, this ease of use is a big reason for deploying HCI, making it possible to largely ignore the virtualization infrastructure and focus instead on the VMs it delivers.

Software-only HCI

One of the big reasons for selecting a software-only hyper-converged infrastructure is that it offers hardware choice. You may have a relationship with a preferred server vendor and need to use its hardware. Or you may simply want an unusual combination of server hardware.

Another example is that you may want a lower cost, single-socket server option, particularly if you are deploying to a lot of remote or branch offices. If you are deploying to retail locations, you may need servers that will fit into a shallow communications cabinet rather than a data center depth rack.

Once you select your hardware, you are responsible for the consequences of those choices. If you choose the wrong network interface card or a Serial-Attached SCSI host bus adapter, you may find support is problematic, or performance may not match your expectations.

HCI from scratch

You can also build your own hyper-converged infrastructure using off-the-shelf software and hardware, a hypervisor and some scale-out, software-defined storage (SDS) in VMs.

As with software-only HCI, you are taking responsibility for this decision and its consequences. You can probably buy support for the hypervisor and the SDS, but what about potential interoperability issues between the layers? What is the service level for resolving performance problems?

Building a platform from scratch instead of buying preconfigured hyper-converged infrastructure appliances is only sensible if you have your own full support team providing 24/7 coverage.

This was last published in March 2018

Do I need converged or hyper-converged infrastructure appliances?

What’s the most important factor when choosing between converged and hyper-converged infrastructure?

Scalability, risk tolerance and cost all factor into the decision between converged and hyper-converged infrastructure. The two technologies have very different use cases.

Alastair Cooke

SearchVirtualDesktop

Freelance trainer, consultant and blogger specializing in server and desktop virtualization.

Converged and hyper-converged infrastructures have similar names, but they take very different approaches and solve different types of problems.

Converged infrastructure (CI) helps remove risk from a large virtualization deployment. Hyper-converged infrastructure (HCI) represents a rethinking of VM delivery, and it aims to simplify operation of a virtualization platform. Either converged or hyper-converged infrastructure appliances can deliver a faster time to value than assembling a virtualization platform from disparate components, but their resulting platforms will have different characteristics.

Converged infrastructure appliances

A converged infrastructure appliance is pre-configured to run a certain number of VMs, and it’s ready to be connected to an existing data center network and power supply from the time it’s built. Vendors build these appliances with components that include a storage array, some servers, network switches and all the required cables and connectors. Vendors assemble and test all of these components before delivering them to customers, and they control every aspect of the build, down to the certified firmware and driver levels for each part.

A small converged infrastructure appliance can take up just half a data center rack, and the largest might be five full racks. Usually, deployment involves professional services from the vendor, and every update requires more professional services. The aim of CI is to take the risk out of deploying a virtualization platform by having the vendor design and support the same platform across multiple customers. It is usually not designed to scale in place; for more capacity, organizations must buy additional complete converged infrastructure appliances.

Hyper-converged infrastructure

Hyper-converged infrastructure appliances are built around a single x86 server, and a group of appliances is configured together as a cluster that organizations can expand and contract by adding or removing appliances.

HCI puts an emphasis on simplified VM management. It usually also includes some sort of backup capability and often a disaster recovery (DR) function. (Many hyper-converged products integrate with the public cloud for backup and DR.)

A significant feature of hyper-converged infrastructure appliances is that in-house IT professionals, rather than vendors’ professional services staff, can complete most functions, from initial deployment and adding nodes through the entire update process.

Converged or hyper-converged?

The first consideration when choosing converged or hyper-converged infrastructure is scale. A half rack of CI appliances will run 100 or more VMs, whereas five racks will run thousands of VMs. CI is not for small offices or small businesses. It’s suited for enterprises.

The second aspect is that CI is about reducing risk, even if that increases cost. All of the professional services that surround CI are areas where the vendor is paid to reduce the customer’s risk. Organizations buy CI for guaranteed outcomes, so they tend to be in risk-averse industries, such as banking, insurance, government and healthcare.

Hyper-converged infrastructure appliances are popular with organizations that do not want to think about the hardware or software underneath their VMs. These organizations want to manage a fleet of VMs with minimal effort because the value is in the applications inside those VMs, rather than the servers or hypervisors on which they run. HCI is ideally suited for scale-out workloads, such as VDI, or for nonproduction uses, such as test and development.

Some hyper-converged infrastructure appliances operate with just one or two nodes at a site. This makes them suitable for remote or branch office deployments, particularly where there are a large number of branches, such as in a retail chain. HCI’s built-in data protection is popular in these scenarios because it reduces the risk of data loss at the branch and, in some cases, allows one branch to provide DR capacity for another.

This was last published in June 2018

Evaluate hyper-converged for high-density data centers

How does your organization deal with increasing levels of density in your IT equipment?

Although rising data center densities don’t seem problematic for hyperscale providers, they create challenges for enterprises. Discover how hyper-converged systems can help.

Clive Longbottom

Independent Commentator and ITC Industry Analyst – ComputerWeekly.com

Modern IT equipment can handle more workloads in a smaller footprint, but this benefit also creates challenges for some enterprises.

A hyperscale cloud provider with full knowledge of its average workload can easily architect a dense compute platform. This is especially true when that average workload is actually millions of different workloads across a massive user base — which is the case for cloud providers like Amazon Web Services (AWS) and Microsoft Azure — or is a predictable set of workloads, such as those that run at Netflix, Facebook or Twitter. A single, logical platform that uses a massive amount of compute, storage and network nodes is fairly easy to create for these providers, since it’s a cookie-cutter approach; when AWS needs to add extra resources, there is very little systems-architecting involved.

However, it is more challenging for an organization with its own dedicated IT platform to support high-density data centers. For example, an organization won’t usually run thousands of servers as a single, logical platform that supports all workloads. Instead, there will most likely be a one-application-per-server or cluster model, virtualized servers that carry one or more workloads and private clouds that carry different workloads with dynamic resource sharing.

Fortunately, hyper-converged infrastructure (HCI) provides a way to better support high-density data centers.

Evaluate HCI — but carefully

HCI vendors engineer server, storage and networking components to work together and offer adequate cooling at the lowest cost, which enables organizations to support high-density data centers in a shorter period of time. However, there are still challenges with power, cooling and workload capabilities.

Power and cooling challenges are fairly easy to address. Standard power distribution systems can support most HCI systems in a data center facility. But if you want to build a platform that supports high-performance computing (HPC), where power densities might exceed existing distribution capabilities, you’ll face challenges. Decide whether expanded power and cooling capabilities in the facility are a worthwhile investment or whether a colocation facility can meet these new demands.

It’s more difficult to address complex workload capabilities. If you run applications directly on the platform, hard partition the resources allocated to them and, as in traditional IT models, plan carefully to leave enough headroom for peak workloads.

When you work with applications that run in VMs, remember that each VM is a self-contained entity that carries a full stack of resource-hungry services. Containers, by contrast, share many of those services with the host, so a given platform can run a greater number of containers than VMs carrying out the same function.
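A back-of-the-envelope sketch of that difference, using purely illustrative overhead figures rather than vendor numbers:

    # Density estimate: VMs each carry a full guest OS, while containers
    # share the host's kernel and services. All figures are assumptions.
    NODE_RAM_GB = 512
    APP_RAM_GB = 2.0               # what the workload itself needs
    VM_OVERHEAD_GB = 1.5           # assumed guest OS and hypervisor overhead per VM
    CONTAINER_OVERHEAD_GB = 0.1    # assumed per-container runtime overhead

    vms = int(NODE_RAM_GB / (APP_RAM_GB + VM_OVERHEAD_GB))
    containers = int(NODE_RAM_GB / (APP_RAM_GB + CONTAINER_OVERHEAD_GB))
    print(f"{vms} VMs vs. {containers} containers per node")   # 146 vs. 243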

So, just how many VMs or containers should you run on a given HCI platform? Be wary of figures given out by vendors, as the workloads they use to gather those numbers are often generalized and basic. For example, HCI vendors that sell a system focused on virtual desktop infrastructure might state that upwards of 200 desktops can run on their system. But that might only be true when desktops don’t have more than one OS and when users don’t need to log into them at the same time every day.

Many factors come into play when it’s time to choose an HCI vendor. Here are some important guidelines to keep in mind.

Look for vendors who run HCI systems as a proof of concept, allowing you to put your own workloads onto their platform and apply synthetic loads to gauge how many real-world VMs or containers the platform can take.

If you choose to build your own highly dense platform, employ experienced systems architects who can ensure that the interdependencies among compute, storage and network resources are carefully balanced and work well together. People with such skills are difficult to find, though — another reason why, outside of HPC, HCI is a better bet for high-density data centers than the build-it-yourself approach.


This was last published in September 2017

Security, vendor choices affect server purchases for IT buyers

Outside of cost, what are the biggest factors in your server selection process?

Server selection is a quandary for IT, as security, the use of file servers and the question of whether more servers or more capable multiprocessor systems will meet enterprise demand all plague IT buyers.

Stephen J. Bigelow

Senior Technology Editor

There are many factors to consider in the midst of a server selection. For example, VM and container consolidation, as well as visualization and scientific computing, each affect the decision. In part two of our purchasing guide, we’ll discuss other important factors in server purchases for your enterprise.

Enhanced server security plays a role in server purchases

Although server purchases aren’t based solely on security capabilities, there is a proliferation of protection, detection and recovery features to consider for most enterprise tasks. Modern security features now extend well beyond traditional Trusted Platform Modules.

For example, secure servers can offer protection through a hardware-based root of trust, which uses hardware validation of server management platforms, such as an integrated Dell Remote Access Controller, and server firmware as the system boots. Validation typically includes cryptographic signatures to ensure that only valid firmware and drivers are running on the server. Similarly, firmware and driver updates are usually cryptographically signed to verify their authenticity or source. You can execute validations periodically even though the system might not reboot for months. Native data encryption is increasingly available at the server processor level to protect data in flight and at rest.
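As a simplified illustration of that kind of signature check, the sketch below verifies an Ed25519 signature over a firmware image using the pyca/cryptography library. The key pair is generated on the fly purely to keep the sketch self-contained; a real root of trust validates against a vendor public key fused into the hardware, and does so in silicon before the host OS ever runs.

    # Simplified firmware signature verification. Key and image are
    # stand-ins; hardware roots of trust perform this before boot.
    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    vendor_key = Ed25519PrivateKey.generate()      # stands in for the vendor's signing key
    firmware = b"\x7fELF...image bytes..."         # stands in for the real firmware image
    signature = vendor_key.sign(firmware)

    trusted_pubkey = vendor_key.public_key()       # in practice, fused in at manufacture
    try:
        trusted_pubkey.verify(signature, firmware)           # authentic image: passes
        trusted_pubkey.verify(signature, firmware + b"!")    # tampered image: raises
    except InvalidSignature:
        print("firmware failed validation; refuse to boot and alert admins")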

An increasing number of systems can detect unauthorized or unexpected changes in system firmware images and firmware configurations, enforcing a system lockdown to prevent such changes and alerting administrators when change attempts occur at the firmware level. Servers frequently include persistent event logging, which includes an indelible record of all activity.

And servers benefit from various recovery capabilities. For example, automatic BIOS/firmware recovery can restore firmware to a known good state after the system detects any flaw or compromise in the firmware code base. Some systems can apply similar restoration to the OS by detecting possible malicious activity and restoring the OS to a known good state as well. And system erasure features can be used to wipe all hardware configuration settings of the server, including BIOS data, diagnostic data, management configuration states, nonvolatile cache and internal SD cards. System erasure can be particularly important before redeploying the server or removing it from service.


When choosing a server, evaluate the importance of certain features based on the use cases.

For data servers, focus on network I/O

File servers, or data servers, can take many shapes and sizes depending on the needs of each specific business. The actual compute resources needed in a data server are typically light. For example, file servers rarely process data or make computations that demand extensive processor or memory capacity. Web servers may include more resources if the system will also be running code or back-end applications, such as databases. If the organization plans to employ virtualization to consolidate multiple data servers onto a single physical box, the processor and memory requirements will need a closer look.

However, the emphasis for data servers is more frequently focused on network I/O, which can be critical for accessing shared/centralized storage resources and exchanging files or web content with many simultaneous users — network bottlenecks are commonplace. If the data server will employ internal storage, the choice of disk types and capacity can have a significant influence on storage access performance and resilience. Data servers can deploy a fast 10 Gigabit Ethernet port or multiple 1 GbE ports, which you can trunk together for more speed and resilience.

As just one example, a modestly configured Dell EMC PowerEdge R430 rack server offers two processor sockets, 16 GB of memory, four 1 GbE ports and a 1 TB 7.2K rpm Serial Advance Technology Attachment (SATA) 6 Gbps disk drive by default. However, you can select the R430 chassis to accept varied disk configurations with up to 10 hot-pluggable Serial-Attached SCSI, SATA, nearline SAS or solid-state drives if the business chooses to place storage in the server itself. You can also enhance network performance through a choice of Peripheral Component Interconnect Express network adapters or storage host bus adapters.

Systems versus CPUs

Many data centers are shrinking as virtualization, fast networking and other technologies allow fewer servers to host more workloads. The quandary for server purchases then becomes server count versus CPU count. Is it better to have more servers or more resources within fewer servers? Packing more capability into fewer boxes can reduce overall capital expenses, data center floor space and power and cooling demands. But hosting more workloads on fewer boxes can also increase risk to the business because more workloads are affected if the server fails or requires routine maintenance. Clustering, snapshot restoration and other techniques can help to guard against hardware failures, but a business still needs to establish a comfortable balance between server count and server capability, regardless of how the servers are used.
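One way to reason about that balance is simple blast-radius arithmetic; the workload and server counts below are illustrative assumptions.

    # Blast-radius arithmetic: the same workloads on fewer servers mean
    # more of them are affected when any one server fails.
    WORKLOADS = 240
    for servers in (24, 12, 6):
        affected = WORKLOADS / servers
        print(f"{servers} servers: one failure takes down ~{affected:.0f} workloads")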

What server features do you look for?

This is the second article of a two-part series on server selection.

The first article discusses features such as container consolidation, virtualization and security.


This was last published in November 2017

Pin down these hardware service contract details

What are some tips you can share for a solid hardware support agreement?

A server warranty won’t do much good when every second of downtime counts. Here’s how to hammer out a support agreement that addresses the particular needs of your company.

Brien Posey

Microsoft MVP – SearchDataBackup

The process of purchasing a server is relatively straightforward, but working out the details of a hardware service contract tends to require significantly more effort.

The need for a support contract is often overlooked because many in IT assume the hardware warranty protects the company if any problems occur. Although a warranty offers some assurances, it is often inadequate on its own.

For example, suppose a server’s system board fails, but it is covered under warranty. Each vendor has its own way of handling this type of issue. Typically, the administrator would need to ship the system board to the vendor before it sends a replacement. In contrast, a support contract can provide same-day service for the replacement and professional installation by a certified technician.

Map out the company’s needs

Prior to negotiating a hardware service contract, consider what matters most to the organization. Why obtain a support agreement in the first place? Does the organization require immediate access to hardware parts during a critical outage? Does the IT staff lack the technical skills to handle hardware-level repairs? Make sure that any service-level agreements the organization must adhere to are part of the equation.

Keep these factors in mind during discussions with a support vendor, and make sure you address three key areas in a hardware service contract.

Pin down terms to avoid a lengthy outage

First, negotiate the response time. When a critical issue hits the data center, there should be no doubt about the availability of the support vendor.

Most rapid response support contracts are expensive because they might require the provider to hire extra staff members. One way to reduce this cost is to negotiate a two-tier response time. For example, the contract might require the provider to respond within 48 hours for noncritical outages, but also to have a tech on site within an hour for any outages the organization deems critical.

Second, lock down the availability of replacement hardware. It’s pointless to have a contract that requires the provider to respond to a critical outage within an hour if it takes the needed parts a week to arrive.

At one time, organizations relied almost exclusively on physical servers, and the server’s operating system was tied to its specific hardware configuration. Backups could not restore to dissimilar hardware. To account for this, most service agreements required providers to have exact duplicates of the organization’s hardware so it could swap out an entire server if necessary.

Server virtualization makes this less of an issue, but the provider’s inventory remains an important consideration. During an outage, an organization needs to get back online as quickly as possible. As such, a good contract for hardware service should require the support vendor to maintain an inventory of spare parts that match your server hardware. It is also a good idea to make sure the agreement provides loaner servers if the service vendor does not have the required parts immediately available.

The hardware support agreement should address the quantity of repair parts the vendor needs to keep in stock. Multiple servers can break at the same time. The support contract should eliminate any chance a cascading failure would leave the company vulnerable.

Third, consider adding warranty handling to the hardware service contract. This is less critical than the other items, but it is worth considering. Because some of the hardware is covered under warranty, ideally, the support provider should handle the warranty claims.

If a system board fails, then the service vendor should replace that system board with a spare, file a warranty claim and ship the failed part to the manufacturer. This offers the dual benefit of a quick system recovery and frees the IT department from dealing with warranties.

This was last published in April 2018

Buy server hardware with these key functions in mind

What are the most important capabilities that servers provide for your data center?

Find the right server hardware for your data center by evaluating what leading vendors offer in terms of processors, memory, storage, connectivity, hot swapping and security features.

Robert Sheldon

Contributor – SearchSQLServer

When the time comes to buy server hardware, there are a lot of factors to consider, such as the number of processors, the available memory and the total storage capacity. Buyers should closely evaluate eight important features when comparing the servers available from the leading vendors.

These eight features cover the basic components to look for to buy server hardware, but they don’t represent all the features that buyers should consider. Decision-makers at every organization must determine exactly what they need to support their existing and future workloads, keeping in mind the differences between rack, blade and mainframe computers.

Companies should view these eight features as the starting point to identify their requirements and evaluate the available products and should expand their research as necessary to ensure they’re addressing every concern.

Processors

One of the most important components to consider when buying server hardware is the processor that carries out the data computations. Also referred to as the central processing unit (CPU), the processor does all the heavy lifting when it comes to running programs and sifting through data. Most servers run multiple processors, usually with one per socket, and each processor can also be made up of multiple cores to support multiprocessing capabilities.

Multiple cores usually translate to better performance, but the number of cores is not the only factor to consider. Buyers should also consider the processor speed — CPU clock speed — and available cache, as well as the total number of sockets, as these can differ significantly from one processor to the next.

For example, the NEC Express5800/D120h blade server supports up to two processors from the Intel Xeon Scalable product family. One of the most robust of these processors offers 26 cores, 35.75 MB of cache and a 2.0 GHz clock speed.

Compare that to the Dell PowerEdge M830 blade server, which uses Xeon E5-4600 v4 processors. The most robust of these offers 22 cores, 55 MB of cache and a 2.20 GHz clock speed. The Dell server also supports up to four processors rather than two.

Editor’s note

With extensive research into the server market, TechTarget editors have focused this series of articles on server vendors with considerable market presence and that offer at least one product among blade, rack and mainframe types. Our research included Gartner, Forrester and TechTarget surveys.

Memory

Adequate server memory is essential to a high-performing system, and the more memory that is available, the better the workloads are likely to perform. However, other factors can also contribute to performance, such as the memory’s speed and quality. Most server memory is made up of dual in-line memory module integrated circuit boards with some type of random-access memory.

Server memory might also include fault-tolerant capabilities or other features that enhance reliability. One of the most common capabilities is error-correcting code (ECC), a method to detect and correct common single-bit errors. When evaluating server hardware memory, you should look at the entire offering, keeping in mind the types of workloads and applications you run.
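To see the principle behind ECC, here is a toy Hamming(7,4) sketch that detects and corrects a single flipped bit; real ECC DIMMs apply wider SECDED codes to 64-bit words, so this illustrates the idea rather than the hardware’s actual code.

    # Toy Hamming(7,4) code: detect and correct a single-bit error.
    def encode(d):                        # d: four data bits
        p1 = d[0] ^ d[1] ^ d[3]
        p2 = d[0] ^ d[2] ^ d[3]
        p3 = d[1] ^ d[2] ^ d[3]
        return [p1, p2, d[0], p3, d[1], d[2], d[3]]    # bit positions 1..7

    def decode(c):                        # c: seven code bits
        s1 = c[0] ^ c[2] ^ c[4] ^ c[6]    # parity over positions 1, 3, 5, 7
        s2 = c[1] ^ c[2] ^ c[5] ^ c[6]    # parity over positions 2, 3, 6, 7
        s3 = c[3] ^ c[4] ^ c[5] ^ c[6]    # parity over positions 4, 5, 6, 7
        syndrome = s1 + 2 * s2 + 4 * s3   # nonzero = 1-based error position
        if syndrome:
            c[syndrome - 1] ^= 1          # flip the corrupted bit back
        return [c[2], c[4], c[5], c[6]]   # recover the data bits

    word = [1, 0, 1, 1]
    code = encode(word)
    code[4] ^= 1                          # simulate a single-bit memory fault
    assert decode(code) == word           # error detected and corrected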

For example, Fujitsu’s mainframe computers in the BS2000 SE series support up to 1.5 TB of memory. However, IBM’s ZR1 mainframe, which is part of the z14 family, supports up to 8 TB of memory. The ZR1 also provides up to 8 TB of available redundant array of independent memory to improve transaction response times, a pre-emptive dynamic RAM feature to isolate and recover from failures quickly, and ECC technologies to detect and correct bit errors.

Storage

Servers vary greatly in the amount and types of internal storage that they support, in part because workflows and applications also vary. For example, a server hosting a relational database management system will have different requirements than one hosting a web application. In addition, the use of external storage, such as storage area networks (SANs), can also impact internal storage requirements.

When you buy server hardware, be sure to evaluate each prospective server to ensure it can meet your storage needs. Today, most servers support both solid-state drives (SSDs) and hard disk drives (HDDs). But buyers should certainly verify this support, as well as the server’s supported drive technologies, such as Serial-Attached SCSI (SAS), Serial Advanced Technology Attachment (SATA) or non-volatile memory express (NVMe). Other considerations should include drive speeds, capacities, endurance and support for redundant array of independent disks (RAID).

For example, Oracle’s X7-2 rack server can support up to eight 2.5-inch HDDs or SSDs, either SAS or NVMe, and multiple RAID configurations. Compare that to the Inspur TS860G3 rack server, which can handle up to 16 drives, either SSDs or HDDs, and support both SAS and SATA. However, the Inspur server does not support NVMe, which means the SSDs might not perform as well.

Connectivity

A server’s ability to connect to networks, peripherals, storage and other components is essential to its effectiveness within the data center. The server needs the necessary connectors and drivers to ensure that it can properly communicate with other entities and process various workloads. Buyers need to determine exactly what type of connectivity is necessary and, from there, examine the server’s specs to verify whether it will meet those requirements.

Servers differ widely in this regard, so buyers should look for specifics such as the number and speed of the Ethernet connectors, the number and type of USB ports, the availability of management interfaces, the types of protocols available, support for SANs and other storage systems, as well as whatever other components are necessary to facilitate connectivity.

Acer’s Altos R380 F3 rack server is a good example of what connectivity features to look for when you buy server hardware. It includes two Ethernet ports, at either 1 Gbps or 10 Gbps, an RJ-45 management port, three USB 3.0 ports, one USB 2.0 port and a video port. In addition, the server offers up to seven Peripheral Component Interconnect Express (PCIe) 3.0 slots and one PCIe 1.0 slot.

Hot swapping

Servers offer hot swapping capabilities to varying degrees. Hot swapping refers to the ability to replace or add a component without needing to shut down the system.

The term hot plugging sometimes refers to hot swapping, although, in theory, hot plugging capabilities are limited to being able to add components but not replace them without shutting down the system. Because of the confusion around these terms, it is best to verify how each vendor uses them.

One of the most common hot swappable components is the disk drive. For example, the Cisco UCS B480 M5 blade server supports hot swappable drives, as does the Huawei FusionServer CH242 V5 blade server and the Intel R2224WFQZS rack server.

With blade systems, the hot swapping capabilities are often within the chassis itself. One example is the chassis used for the Lenovo ThinkSystem SN850 blade server, which provides hot swapping capabilities for the fans and power supplies, in addition to the server’s disk drives. However, these types of capabilities are not limited to blade servers. The Acer Altos R380 F3 system also supports hot swappable fans and power supplies even though it is a rack server.

Redundancy

Redundancy is important to ensure a server’s continued operation in the event of a component failure. Most servers provide some level of redundancy, often for the hard drives, power supplies and fans. The Asus RS720-E9-RS12-E rack server, for example, offers redundant power supplies and the HPE ProLiant DL380 Gen10 rack server offers redundant fans.

As with its hot swapping capabilities, the redundancy available to blade servers is often located within the chassis. For instance, the chassis that support the Dell PowerEdge M830 blade server and Supermicro SBI-6129P-T3N blade server both provide redundant power supplies.

However, the Dell chassis also offers redundant cooling components, and the server itself provides redundant embedded hypervisors.

Manageability

Admins must manage a server effectively to ensure its continued operation while delivering optimal performance. Most servers provide at least some management capabilities.

For example, many servers support the Intelligent Platform Management Interface (IPMI), a specification developed by Dell, Hewlett Packard, Intel and NEC to monitor and manage server systems. Not surprisingly, the servers offered by these companies, such as the Dell PowerEdge M830, HPE ProLiant DL380 Gen10, Intel Server System R2224WFQZS and NEC Express5800/B120g-h, are IPMI-compliant.

But servers are certainly not limited to IPMI capabilities. For example, the Acer Altos R380 F3 rack server comes with the Acer Smart Server Manager; the Asus RS720-E9-RS12-E rack server comes with the ASUS Control Center; and the Cisco Unified Computing System (UCS) B480 M5 blade server comes with Cisco Intersight, Cisco UCS Manager, Cisco UCS Central Software, Cisco UCS Director and Cisco UCS Performance Manager.

Blade systems usually provide some type of module to manage the individual blades. For instance, Huawei’s FusionServer CH242 V5 blade system includes the Intelligent Baseboard Management System module to monitor the compute node’s operating status and support remote management.

Not surprisingly, systems such as Fujitsu’s BS2000 mainframes provide a variety of management capabilities. For example, each BS2000 system includes a management unit that works in conjunction with the SE Manager to offer a centralized interface from which to administer the entire server environment. And IBM’s ZR1 mainframe includes the IBM Hardware Management Console (HMC) 2.14, the IBM Dynamic Partition Manager and an optimized z/OS platform for IBM Open Data Analytics.

Security

Another important factor to consider is the server’s security features. As with other features, servers can vary significantly in what they offer, with each vendor taking a different approach to securing their systems.

For example, the Lenovo ThinkSystem SN850 blade server provides an integrated Trusted Platform Module 2.0 chip to store the RSA encryption keys used for hardware authentication. The server also supports Secure Boot, Intel Execute Disable Bit (EDB) functionality and Intel Trusted Execution Technology.

Another example is the Oracle Server X7-2 rack server, which comes with the Oracle Integrated Lights Out Manager 4.x, a cloud-ready service processor for monitoring and managing system and chassis functions. On the other hand, the Huawei FusionServer CH242 V5 blade server supports the Advanced Encryption Standard — New Instructions, as well as Intel’s EDB feature and Trusted Execution Technology.

IBM’s ZR1 mainframe is also strong when it comes to security. The server includes on-chip cryptographic coprocessors and the Central Processor Assist for Cryptographic Function (CPACF), which is standard on every core, along with the new Crypto Express6S feature to enable pervasive encryption and support a secure cloud strategy. The platform also includes IBM Secure Service Containers to securely deploy container-based applications.

This was last published in January 2019

The top 5 data storage questions we answered in 2018

What were your top storage questions from 2018?

As storage continues to evolve, many readers still are curious about the basics. In 2018, common data storage concerns included RAID, capacity and different storage architectures.

Erin Sullivan

Site Editor

The field of data storage is massive and continues to grow as the years go by. Naturally, this leads to a large number of questions that need to be answered. Our Ask the Expert articles enlist top experts and analysts to tackle common data storage questions readers have about different storage technologies and developments.

While storage technology has grown more complex, reader interest often points toward basic concerns. In 2018, expert answers covering RAID levels, storage infrastructure comparisons and units of capacity measurement garnered the most interest. Of course, while these data storage questions lean toward the basics, they have clearly been affected by the changes in the storage market.

Below, we’ve compiled the top five most-read Ask the Expert articles of 2018. Symbolic of the ever-evolving, but consistent, storage market, all five pieces have been updated to reflect modern changes to evergreen technologies.

Memory and storage: What’s the difference?

Memory and storage, while connected, aren’t synonymous. Both refer to internal data storage space on a computer, but the major difference between the two lies in what happens to the data when the power is off: memory is volatile and loses its contents, while storage retains them. However, as storage technology evolves, the line between memory and storage is beginning to blur. It’s no wonder that this was one of the most prominent data storage questions readers were searching for this year.

RAID levels explained

While RAID is a storage standby, it continues to grow and change as storage requirements do. The benefits of RAID include improved performance and higher availability, along with relatively low costs. RAID today is broken into three separate categories: standard, nonstandard and nested. Numbered 0 to 6, standard RAID levels represent the original basic RAID, while nonstandard levels and nested RAID cover RAID levels set for particular open source projects and combinations of RAID levels.

Looking for a refresher on the state of RAID levels? You’re not alone. In this expert answer, we break down RAID levels and benefits and how they’re used today.
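As a taste of what makes the parity-based levels work, this sketch shows the XOR parity RAID 5 uses to rebuild a lost drive; the three-drive stripe is an illustrative assumption.

    # XOR parity, the trick behind RAID 5: parity equals the XOR of all
    # data blocks, so any single lost block can be rebuilt from the rest.
    from functools import reduce

    def xor_blocks(*blocks):              # byte-wise XOR across equal-length blocks
        return bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*blocks))

    d1, d2, d3 = b"AAAA", b"BBBB", b"CCCC"   # one stripe across three data drives
    parity = xor_blocks(d1, d2, d3)          # written to the parity drive

    rebuilt = xor_blocks(d1, d3, parity)     # drive 2 fails: rebuild from survivors
    assert rebuilt == d2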

Terabytes, petabytes, exabytes and more


It only makes sense that, as storage capacities grow, units of measurement must grow as well. Long gone are the days when a 1 TB drive was an unimaginably large amount of storage space. “What is bigger than a terabyte?” is no longer a theoretical storage question, and while the largest of these capacities may not yet have many practical commercial uses, it won’t be long before they are put into wider use.

What is bigger than a terabyte? Well, there’s a petabyte (1,024 TB), an exabyte (1,048,576 TB), a zettabyte (1,073,741,824 TB) and more. You get the gist.
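A quick sketch of those binary multiples, each a factor of 1,024 larger than the last:

    # Capacity units in binary multiples: each step is a factor of 1,024.
    units = ["TB", "PB", "EB", "ZB", "YB"]
    for i, unit in enumerate(units):
        print(f"1 {unit} = {1024 ** i:,} TB")
    # 1 PB = 1,024 TB; 1 EB = 1,048,576 TB; 1 ZB = 1,073,741,824 TB ...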

NFS vs. CIFS/SMB

Both NFS and CIFS/SMB were designed to work with any OS, but in practice NFS reigns supreme in Linux and SMB in Windows. Once the subject of heated debate, NFS and CIFS/SMB have taken on similar characteristics over time and are supported by most enterprise storage systems. Perhaps it’s because of these similarities that “What is the difference between NFS and CIFS/SMB?” was one of the more prominent data storage questions asked in 2018.

Regardless, readers will now find that, despite their lengthy history of facing off, the two protocols are now more similar than they’ve ever been.

Network storage showdown: SAN vs. NAS

SAN and NAS are staples in the field, so what data storage questions could you possibly have about them? Well, while the technologies are established, the nuances and uses continue to change with the world around them. The differences and benefits of SAN and NAS aren’t the same as they used to be and will continue to change. In this expert answer, we explore how the two architectures compare, the advantages and disadvantages of each, and where they’re headed in the future.

This was last published in December 2018

Global server revenue hits record quarterly high as hyperscale demand for datacentre kit soars

Latest worldwide server market tracker from analyst house IDC suggests hyperscale demand for datacentre hardware has fuelled record quarterly revenue growth for the sector

Caroline Donnelly

Datacentre Editor

14 Dec 2018 12:00

The revenue generated by global server shipments hit a record quarterly high during the third quarter of 2018, as the demand for datacentre kit from the cloud giants continues to soar.

According to IDC’s Worldwide Quarterly Server Tracker, server shipment revenue hit $23.4bn during the third quarter, which is the highest quarterly total on record.

Revenue was up 37.7% year-on-year overall, making the third quarter the fifth consecutive quarter of double-digit growth for the worldwide server market. Meanwhile, shipment volumes were up 18.3% on the previous year, totalling 3.2 million units.

Sebastian Lagana, research manager of infrastructure platforms and technologies at IDC, said the figures are indicative of how high the demand is for datacentre hardware from the cloud and internet giants at present.

“The worldwide server market once again generated strong revenue and unit shipment growth due to an ongoing enterprise refresh cycle and continued demand from cloud service providers,” said Lagana.

“Enterprise infrastructure requirements from resource intensive next-generation applications support increasingly rich configurations, ensuring average selling prices [ASPs] remain elevated against the year-ago quarter. At the same time, hyperscalers continue to upgrade and expand their datacentre capabilities.”

The market’s record quarter appears to have been primarily driven by the growth in volume server shipments, as the revenue generated by this category of hardware was up 40.2% on the previous year and hit $20bn.

IDC has directly attributed the surge in demand for volume units to the datacentre refresh and build-out activities of the hyperscale cloud and internet service provider community in previous quarters.

The main beneficiaries of this trend tend to be the original design manufacturers (ODMs), who saw their share of the server market creep up 2.5 percentage points from the third quarter of 2017 to 26.8%. This group of suppliers also grew their collective revenue by 51.9% during the past 12 months.

While the ODM community collectively holds the biggest share of the server market, Dell is name-checked by IDC as the server market leader with 17.5% revenue share, and quarterly revenue of $4.09bn, which is up 33.3% on the previous year.


Nutanix

Nutanix is a hyper-converged infrastructure pioneer that markets its technology as a building block for private clouds.


Contributor(s): Erin Sullivan and Sarah Wilson

The company was founded in 2009 by Dheeraj Pandey, Mohit Aron and Ajeet Singh, and it is based in San Jose, Calif.

Nutanix appliances converge storage, compute and virtualization into one box. Initially targeting VMware customers, Nutanix branched out after VMware released its own Virtual SAN hyper-converged platform. The vendor’s products now support Microsoft Hyper-V and KVM hypervisors, as well as VMware vSphere and Nutanix’s own KVM-based Acropolis hypervisor (AHV).

Nutanix branded appliances consist of the vendor’s software stack packaged on Super Micro servers. Original equipment manufacturer (OEM) partners Dell and Lenovo rebrand Nutanix software on their x86 servers, and Nutanix channel partners package the vendor’s software on Cisco and Hewlett Packard Enterprise (HPE) servers. IBM also has an OEM deal with Nutanix to sell its software on Power servers.

Company history

Nutanix came out of stealth in 2011 with Complete Cluster, one of the first hyper-converged storage products on the market. In June 2013, Complete Cluster was rebranded as the Virtual Computing Platform, and two new configurations with smaller and larger capacities were added to the line.

In its first two years, company revenue surpassed $100 million. In June 2014, Nutanix entered into an OEM deal with Dell that allowed Nutanix software to be sold on Dell PowerEdge servers.

In December 2015, Nutanix filed for an initial public offering, reporting revenue of $241.1 million for the year. Though the IPO took nine months to complete, Nutanix revenue grew 125% to $166.8 million in its first quarter as a public company. However, losses were also high at $162.2 million.


This video from Nutanix explains the company’s approach to the enterprise cloud.

In 2017, Nutanix leadership focused its business model on the company’s software. While the vendor will still sell appliances, it intends to continue selling software on any vendor’s x86 hardware and to count revenue only from its software business.

In June 2017, Nutanix announced a partnership with Google that enables customers to deploy and manage workloads across Google Cloud Platform and its in-house hyper-converged infrastructure (HCI) through a single interface.

Major products and their important features

Despite a push to focus primarily on being a software company, Nutanix products include both software and turnkey appliances. Here are some of the vendor’s major products and services:

  • The Nutanix Virtual Computing Platform ships with VMware ESX or Red Hat KVM and includes hard disk drives and solid-state drives. It also provides storage features, such as tiering, compression and deduplication.
  • Nutanix Enterprise Cloud converges server, storage, virtualization and networking into one software-defined platform. The Enterprise Cloud Platform is scalable, and it’s available as a turnkey appliance or a software-only platform.
  • Nutanix Calm, part of the Nutanix Enterprise Cloud Platform, handles application automation and lifecycle management across public and private clouds.
  • Xi Cloud Services are an extension of the Nutanix Enterprise Cloud Platform. Xi Cloud Services deliver a public cloud environment that can be automatically configured and provisioned. The Xi Disaster Recovery Service enables centralized DR management, one-click failover and nondisruptive disaster recovery (DR) testing.
  • Nutanix X-Ray is the vendor’s automated testing and benchmarking tool. X-Ray can simulate the effects of system failures, software upgrades and other common scenarios to enable better DR planning.
  • Nutanix Acropolis software includes its hypervisor, file and block storage services, data protection, and network and security features. Acropolis is available in Starter, Pro and Ultimate editions, which vary based on the size of the deployment and the number of workloads.
  • Nutanix Prism software uses machine learning technology to help manage, monitor and analyze the vendor’s hyper-converged infrastructure.

Main competitors

Although Nutanix was the first hyper-converged vendor to make a big splash in the market, it is far from alone now.

Dell EMC not only sells Nutanix software in its XC Series on PowerEdge servers, but it competes with Nutanix with a vSAN-based VxRail appliance from VMware — which it owns. In mid-2017, Dell EMC passed Nutanix as the hyper-converged market leader, according to research firm IDC.

HPE and Cisco made acquisitions in 2017 to bolster their HCI products. HPE acquired early Nutanix competitor SimpliVity for $650 million, and it now sells SimpliVity software on ProLiant servers. While Nutanix software supports HPE servers, HPE has no relationship with Nutanix, and it recommends its own SimpliVity software on ProLiant servers.

Storage vendor NetApp entered the HCI market in 2017, using its SolidFire Element OS as the basis of the all-flash NetApp HCI product. NetApp HCI is technically considered a disaggregated, software-defined architecture, but it can be deployed for use cases similar to HCI and uses the same high-density unit appliance as Nutanix and other HCI vendors.

Nutanix’s Google partnership puts it in competition with Amazon Web Services, which has partnered with VMware to target an enterprise market with on-premises workloads. Lenovo, which partners with Nutanix and other HCI software vendors, also sells hyper-converged appliances.

Other HCI competitors include smaller companies, such as Maxta, Pivot3 and Scale Computing.

This resource was last updated in January 2018
