Data center managers avoid cloud migration risks

Many corporate IT users are flocking to the cloud, but a majority surprisingly remain reluctant to migrate their on-premises mission-critical workloads to a public cloud.

Ed Scannell

Senior Executive Editor – TechTarget – SearchWindowsServer

07 Jun 2019

Well into the cloud era, a significant number of enterprises still have trepidations about moving mission-critical applications and services to the public cloud, preferring to forgo cloud migration risks by keeping apps ensconced within their own data centers.

Heading the list of reservations corporate IT shops have is the lack of visibility, transparency and accountability of public cloud services, according to respondents to the 2019 Uptime Institute’s Annual Global Data Center Survey.

Some 52% of the nearly 1,100 respondents, who included IT managers, data center owners and operators, suppliers, designers and consultants, said they do not place their mission-critical workloads in public clouds and have no plans to. Another 14% said they have placed such workloads in the public cloud and are quite happy with their respective cloud services.

Of the remaining 34%, 12% have placed their services in the public cloud but complain about the lack of visibility. The remaining 22% said they will keep their most important workloads on premises but will consider moving to the cloud if they have adequate visibility.

Cloud migration risks tip the balance

Chris Brown, Uptime Institute’s chief technology officer, said he was a bit surprised that 52% of respondents were reluctant to venture into the public cloud, but a closer look at some of the reasons for that reluctance brought a better understanding.

“Among that 52% (of respondents), there are workloads that just aren’t tailored or good fits for the cloud,” Brown said. “Also, there is a fair number of older applications that have technical issues with adapting to the cloud, and there is a lot of rearchitecting associated with it, or they don’t have the budget for it,” he said.

For the 34% who have gone to the public cloud or are considering it, it comes down to a matter of trust, according to Brown. For the most part, respondents in this group realize the benefits cloud can bring, but they have difficulty summoning enough faith that service providers will live up to the uptime promised in their service-level agreements (SLAs).

These concerns over cloud migration risks appear justified. The number of data center outages this year matched last year’s number for the same period, although more managers this year reported that outages rippled across multiple data centers. Just over a third of respondents reported that outages, which typically were traced to an infrastructure problem, had a measurable impact on their business. About 10% said their most recent outages resulted in over $1 million in direct and indirect costs.

Brown added that part of the problem is many users don’t understand enough about how the cloud is structured or how their cloud availability zones are designed.

“If users see the cloud as just a black box in the sky, they can only trust their provider to give them what they need when they need it,” Brown said. “And if they have outages, they have to hope their SLAs will make them whole.”

While there is plenty of data available showing how reliable most cloud service providers are, users read about highly publicized outages that have occurred over the past few years from providers such as AWS, Google and Microsoft. Compounding that issue is the basic conservative nature of data center managers.

“From my experience, the data center industry always ventures into something very gingerly,” Brown said.

Yet another reason that holds some users back is the fear of cloud lock-in and its associated expense when they want to switch service providers.

“Everyone deals with a lot of data because storage is so cheap and every IT strategy seems to be based around data,” Brown said. “But when it comes time to pull your data out of the cloud, it can cost you a fortune.”

Cloud vendors meet hesitant users halfway

Some analysts and consultants aren’t surprised at the number of corporate users still skittish about cloud migration risks. One analyst points to “cloud-down” moves from the likes of AWS, Microsoft and Google over the past year or two that offer users the option to run their applications either in the cloud or on premises.

“AWS announced Outposts last year because they want to get more into larger enterprises,” said Judith Hurwitz, president of Hurwitz and Associates, an analyst firm in Needham, Mass. “These accounts say to AWS, ‘We like your offerings, but we really want to keep them behind the firewall.’ This is how products like Outposts, [Google’s] Anthos and [Microsoft’s] Azure Stack came to be,” she said.


Uptime Institute survey takers are justifiably concerned about cloud migration risks. A report from Enterprise Strategy Group shows that 41% of companies have had to move a workload back out of the cloud, incurring downtime and costs.

While some other analysts understand the reluctance of many data centers to move to the cloud, they also believe it makes sense for them to be bolder and take advantage of the benefits the cloud offers now rather than wait.

“There are some workloads that shouldn’t go to the cloud,” said Dana Gardner, principal analyst with Interarbor Solutions LLC in Gilford, N.H. “But to have these legacy platforms and the associated RDBs (relational databases) sitting around collecting dust just to support a handful of aging apps doesn’t seem to work.”

Capacity demand in the enterprise continues to grow, according to the survey, with workloads running across a range of platforms that increasingly includes cloud and colocation data centers. While enterprise data center capacity is still growing in absolute terms, it is decreasing as a percentage of the total capacity needed.

Rollout of 16 TB HDDs targets hyperscale data centers

The drive to 16 TB HDDs is underway. Seagate kicked it off with three new hard disk drives, and Toshiba and Western Digital are poised to follow in 2019.

Carol Sliwa

Senior News Writer – TechTarget – SearchStorage

07 Jun 2019

The high-capacity point for hard disk drives officially hit 16 TB this week with Seagate Technology’s product launch that targets hyperscale, cloud and NAS customers with rapidly expanding storage requirements.

Seagate brought out a helium-sealed, 7,200 rpm Exos 16 TB HDD for hyperscale data centers and IronWolf and IronWolf Pro 16 TB HDDs for high-capacity NAS use cases in SMBs.

Earlier this year, Toshiba forecasted its 7,200 rpm helium-based MG08 Series 16 TB HDD would become available midyear, although the company has yet to confirm a ship date. Western Digital is expected to ship 16 TB HDDs in 2019 based on conventional magnetic recording (CMR) technology.

Lowering total cost

With SSDs taking over performance use cases, HDDs are largely deployed in systems focused on capacity. Using the highest available capacity is especially important to cloud and enterprise customers with explosively growing volumes of data, as they try to minimize their storage footprint and lower costs. Helium-sealed HDDs help because they enable manufacturers to use thinner platters to pack in more data per HDD and require less power than air-filled drives.

“Time to market is extremely critical given that customers — including hyperscale/cloud customers — have limited resources available to qualify new HDD products,” John Rydning, a research vice president at IDC, noted via email.

Rydning said hyperscale/cloud customers would be first to use the 16 TB HDDs because they have the architecture and software stack to deploy them without diminishing overall system performance. The highest capacity HDDs have lower IOPS per terabyte, he noted.

Sinan Sahin, a principal product manager at Seagate, said the vendor has shipped more than 20,000 test units of its 3.5-inch 16 TB HDDs to hyperscale customers such as Tencent and Google and NAS vendors such as QNAP Systems and Synology.

Toshiba began shipping 16 TB HDDs to customers for qualification slightly after Seagate, and Western Digital has yet to do so, according to Rydning, who tracks the HDD market.

“Cloud customers generally will migrate to the highest available capacity, especially if there is a two- to three-quarter gap before the next capacity is qualified and ramped up in volume,” John Chen, a vice president at Trendfocus, wrote in an email.

Horse race for shift to 16 TB

Chen expects 14 TB CMR HDDs to ramp up in volume in the second half of this year at hyperscale companies. “And it is essentially a horse race between the three suppliers to determine if the transition to 16 TB can be pulled in earlier than the second quarter of 2020,” he added.

Seagate’s Exos schedule shows how the timeline could play out. The 7,200 rpm nearline 12 TB HDD was Seagate’s highest selling enterprise product in the first quarter. Seagate launched its 14 TB Exos HDDs late last year and, this spring, worked with only a limited set of customers because Exos X16 development was running ahead of schedule, according to Sahin.

“We wanted to make sure that we did not have the two products in the channel at the same time,” Sahin said.


16 TB Seagate Exos X16 HDD

Seagate CEO Dave Mosley said during a recent earnings call that he expects Seagate to begin ramping to high volume this year, with the 16 TB HDDs set to become the highest revenue producer by next spring.

List pricing for Seagate’s 6 Gbps SATA-based Exos X16 HDD is $629. The IronWolf 16 TB HDD lists at $609.99, and the IronWolf Pro, which offers a higher sustained data rate, is $664.99.
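
For a rough sense of how those list prices compare, the short Python sketch below computes dollars per terabyte from the figures quoted above; the marketed 16 TB capacity is taken at face value, so treat the results as approximations.

```python
# Rough cost-per-terabyte comparison using the list prices quoted above.
# Marketed capacity (16 TB) is taken at face value; formatted capacity will differ.
drives = {
    "Exos X16": 629.00,
    "IronWolf 16TB": 609.99,
    "IronWolf Pro 16TB": 664.99,
}

CAPACITY_TB = 16

for name, price in drives.items():
    print(f"{name}: ${price / CAPACITY_TB:.2f} per TB")
```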

Seagate’s new Exos X16, IronWolf and IronWolf Pro 16 TB HDDs use a nine-platter design to boost areal density. Chen said other manufacturers will also use a nine-disk design — and potentially even more platters in the future — for enterprise capacity-optimized nearline HDDs.

But CMR HDDs aren’t the only option for hyperscalers seeking high-capacity storage. Seagate, Toshiba and Western Digital are also working on new HDDs that use shingled magnetic recording (SMR) technology, with tracks that overlap like the shingles on a roof to increase areal density.

SMR HDD use is typically restricted to workloads that write data sequentially, such as video surveillance and the internet of things. CMR drives write data randomly across the entire disk. SMR adoption has been low because users generally have to make host-side adjustments to use the HDDs without a performance hit. But industry initiatives could start to make it easier for customers to deploy SMR HDDs in the future.

The highest capacity SMR HDD today is 15 TB. Western Digital began shipping qualification samples of its Ultrastar DC HC620 host-managed SMR HDD last October. Seagate has also sampled an enterprise SMR-based 15 TB HDD, but it hasn’t launched it commercially, according to Sahin. He said Seagate plans to make available a 17 TB SMR HDD, based on the CMR-based Exos X16, later this year. Toshiba did not respond to requests for comment on its SMR HDD plans.

Even higher HDD capacities could hit the market when manufacturers start to ship drives that use heat-assisted magnetic recording (HAMR) and microwave-assisted magnetic recording (MAMR) technologies. Sahin said Seagate expects to make available HAMR-based 20 TB HDDs in late 2020. Toshiba hasn’t specified its roadmap but has outlined plans to use MAMR and explore the use of HAMR technology.

Western Digital plans to introduce “energy-assisted” 16 TB CMR HDDs and 18 TB SMR HDDs later this year, according to Mike Cordano, the company’s president and COO. Cordano claimed during the company’s most recent earnings call that the new energy-assisted HDDs would contain fewer disks and heads than competitors’ options. Western Digital late last year had said that its MAMR-based 16 TB HDD would have eight platters.

IDC’s 2018 market statistics for 2.5-inch and 3.5-inch capacity-optimized HDDs showed Seagate in the lead with 47.8% of the unit shipments. Western Digital was next at 22.4% and Toshiba trailed at 9.8%. IDC’s overall HDD unit shipment statistics for 2018 had Seagate in the lead at 40.0%, Western Digital second at 37.2% and Toshiba at 22.8%.

All three vendors make available a wide range of client and enterprise HDDs, including mission-critical enterprise drives that spin at 10,000 rpm and 15,000 rpm.

Blade servers: An introduction and overview

Blade servers add muscle to demanding workloads and virtual data centers, but they also pose some concerns, including power consumption and management complexity.

Brien Posey

Microsoft MVP – SearchDataBackup

Blade servers have become a staple in almost every data center. The typical “blade” is a stripped-down modular server that saves space by concentrating processing power and memory on each blade, while forgoing much of the traditional storage and I/O functionality typical of rack and standalone server systems. Small size and relatively low cost make blades ideal for situations that require high physical server density, such as distributing a workload across multiple Web servers.

But high density also creates new concerns that prospective adopters should weigh before making a purchase decision. This guide outlines the most important criteria that should be examined when purchasing blade servers, reviews a blade server’s internal and external hardware, and discusses basic blade server management expectations.

Internal blade server characteristics

Form factor. Although blade server size varies from manufacturer to manufacturer, blade servers are characterized as full height or half height. The height aspect refers to how much space a blade server occupies within a chassis.

Unlike a rackmount server, which is entirely self-contained, blade servers lack certain key components, such as cooling fans and power supplies. These missing components, which contribute to a blade server’s small size and lower cost, are instead contained in a dedicated blade server chassis. The chassis is a modular unit that contains blade servers and other modules. In addition to the servers, a blade server chassis might contain modular power supplies, storage modules, cooling modules (i.e., fans) and management modules.
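
To make the division of labor between blade and chassis concrete, here is a minimal illustrative Python model; the class and field names are hypothetical and not tied to any vendor’s product, but they reflect the idea that shared power, cooling and management modules live in the chassis while each blade carries only compute and memory.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Blade:
    # A blade supplies compute and memory only; it relies on the chassis
    # for power, cooling and external connectivity.
    model: str
    sockets: int
    memory_gb: int

@dataclass
class Chassis:
    # Shared infrastructure is modular and lives in the chassis, not the blade.
    model: str
    blades: List[Blade] = field(default_factory=list)
    power_supplies: int = 0
    cooling_modules: int = 0
    management_modules: int = 0

enclosure = Chassis(model="ExampleChassis", power_supplies=4,
                    cooling_modules=6, management_modules=1)
enclosure.blades.append(Blade(model="ExampleBlade", sockets=2, memory_gb=192))
print(f"{enclosure.model}: {len(enclosure.blades)} blade(s) installed")
```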

Blade chassis design is proprietary and often specific to a provider’s modules. As such, you cannot install a Hewlett-Packard (HP) Co. server in a Dell Inc. chassis, or vice versa. Furthermore, blade server chassis won’t necessarily accommodate all blade server models that a manufacturer offers. Dell’s M1000e chassis, for example, accommodates only Dell M series blade servers. But third-party vendors sometimes offer modules that are designed to fit another vendor’s chassis. For example, Cisco Systems Inc. makes networking hardware for HP and Dell blades.

Historically, blades’ high-density design posed overheating concerns, and they could be power hogs. With such high density, a fully used chassis consumes a lot of power and produces a significant amount of heat. While there is little danger of newer blade servers overheating (assuming that sufficient cooling modules are used), proper rack design and arrangement are still necessary to prevent escalating temperatures. Organizations with multiple blade server chassis should design data centers to use hot-row/cold-row architecture, as is typical with rack servers.

Processor support. As organizations ponder a blade server purchase, they need to consider a server’s processing capabilities. Nearly all of today’s blade servers offer multiple processor sockets. Given a blade server’s small form factor, each server can usually accommodate only two to four sockets.

Most blade servers on the market use Intel Xeon processors, although the Super Micro SBA-7142G-T4 uses Advanced Micro Devices (AMD) Inc.’s Opteron 6100 series processors. In either case, blade servers rarely offer less than four cores per socket. Most blade server CPUs have six to eight cores per socket, and some AMD Opteron 6100 series processors, such as those used by Super Micro, have up to 12 cores per socket.

If you require additional processing power, consider blade modules that can work cooperatively, such as the SGI Altix 450. This class of blades can distribute workloads across multiple nodes. By doing so, the SGI Altix 450 offers up to 38 processor sockets and up to 76 cores when two-core processors are installed.

Memory support. As you ponder a blade server purchase, consider how well the server can host virtual machines (VMs). In the past, blade servers were often overlooked as host servers, because they were marketed as commodity hardware rather than high-end hardware capable of sustaining a virtual data center. Today, blade server technology has caught up with data center requirements, and hosting VMs on blade servers is a realistic option. Because server virtualization is so memory-intensive, organizations typically try to purchase servers that support an enormous amount of memory.

Even with its small form factor, it is rare to find a blade server that offers less than 32 GB of memory. Many of the blade servers on the market support hundreds of gigabytes of memory, with servers like the Fujitsu Primergy BX960 S1 and the Dell PowerEdge M910 topping out at 512 GB.

As important as it is for a blade server to have sufficient memory, there are other aspects of the server’s memory that are worth considering. For example, it is a good idea to look for servers that support error-correcting code (ECC) memory. ECC memory is supported on some, but not all, blade servers. The advantage to using this type of memory is that it can correct single-bit memory errors, and it can detect double-bit memory errors. 

Drive support. Given their smaller size, blade servers have limited internal storage. Almost all the blade servers on the market allow for up to two 2.5-inch hard drives. While a server’s operating system (OS) can use these drives, they aren’t intended to store large amounts of data.

If a blade server requires access to additional storage, there are a few different options available. One option is to install storage modules within the server’s chassis. Storage modules, which are sometimes referred to as storage blades or expansion blades, can provide a blade server with additional storage. A storage module can usually accommodate six 2.5-inch SAS drives and typically includes its own storage controller. The disadvantages of using storage modules are that they consume chassis space and the total amount of storage they provide is still limited.

Organizations that need to maximize chassis space for processing (or provide blade servers with more storage than can be achieved through storage modules) typically deploy external storage, such as network-attached storage or storage area network (SAN). Blade servers can accept Fibre Channel mezzanine cards, which can link a blade server to a SAN. In fact, blade servers can even boot from a SAN, rendering internal storage unnecessary.

If you do use internal storage or a storage module, verify that the server supports hot-swappable drives so that you can replace drives without taking the server offline. Even though hot-swappable drives are standard features among rackmount servers, many blade servers do not support hot-swappable drives.

Expansion slots. While traditional rackmount servers support the use of PCI Express (PCIe) and PCI eXtended (PCI-X) expansion cards, most blade servers cannot accommodate these devices. Instead, blade servers offer expansion slots that accommodate mezzanine cards, which are PCI based. Mezzanine card slots, which are sometimes referred to as fabrics, are referred to by letter, where the first slot is A, the second slot is B and so on.

We refer to mezzanine slots this way because blade server design has certain limits and requires consistent slot use. If in one server, you install a Fibre Channel card in slot A, for example, every other server in the chassis is affected by that decision. You could install a Fibre Channel card into slot A on your other servers or leave slot A empty, but you cannot mix and match. You cannot, for example, place a Fibre Channel card in slot A on one server and use slot A to accommodate an Ethernet card on another server. You can, however, put a Fibre Channel card in slot A and an Ethernet card in slot B — as long as you do the same on all other servers in the chassis (or, alternatively, leave all slots empty).
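
A minimal sketch of that consistency rule, assuming a hypothetical chassis inventory (none of the names below come from a vendor tool): every populated slot letter must carry the same card type on every blade in the chassis, while leaving a slot empty is always allowed.

```python
# Validate the mezzanine-slot consistency rule described above:
# for a given slot letter (A, B, ...), every blade must either leave it
# empty or install the same card type as every other blade.
def check_mezzanine_consistency(blades):
    expected = {}  # slot letter -> card type established by the first blade to use it
    for blade_name, slots in blades.items():
        for slot, card in slots.items():
            if card is None:
                continue  # empty slots are always allowed
            if slot in expected and expected[slot] != card:
                raise ValueError(
                    f"{blade_name}: slot {slot} has {card}, "
                    f"but the chassis already uses {expected[slot]} in slot {slot}")
            expected.setdefault(slot, card)
    return expected

blades = {
    "blade1": {"A": "FibreChannel", "B": "Ethernet"},
    "blade2": {"A": "FibreChannel", "B": None},    # leaving a slot empty is fine
    "blade3": {"A": "Ethernet", "B": "Ethernet"},  # violates the rule for slot A
}
try:
    check_mezzanine_consistency(blades)
except ValueError as err:
    print(err)
```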

External blade server characteristics

Power. Blade servers do not contain a power supply. Instead, the power supply is a modular unit that mounts in the chassis. Unlike a traditional power supply, a blade chassis power supply often requires multiple power cords, which connect to multiple 20 ampere utility feeds. This ensures that no single power feed is overloaded, and in some cases provides redundancy.

Another common design provides for multiple power supplies. For example, the HP BladeSystem C3000 enclosure supports the simultaneous use of up to eight different power supplies, which can power eight different blade servers.

Network connectivity. Blade servers almost always include integrated Gigabit Ethernet network interface cards (NICs). However, some servers, such as the Fujitsu Primergy BX960 S1, offer 10 Gigabit Ethernet NICs instead. Unlike a rackmount server, you cannot simply plug a network cable into a blade server’s NIC. The chassis design makes it impossible to do so. Instead, NIC ports are mapped to interface modules, which provide connectivity on the back of the chassis. The interesting thing about this design is that a server’s two NIC ports are almost always routed to different interface modules for the sake of redundancy. Additional NIC ports can be added through the use of mezzanine cards.

User interface ports. The interface ports for managing blade servers are almost always built into the server chassis. Each chassis typically contains a traditional built-in keyboard, video and mouse (KVM) switch, although connecting to blade servers through an IP-based KVM may also be an option. In addition, the chassis almost always contains a DVD drive that can be used for installing software to individual blade servers. Some blade servers, such as the HP ProLiant BL280c G6, contain an internal USB port and an SD card slot, which are intended for use with hardware dongles.

Controls and indicators. Individual blade servers tend to be very limited in terms of controls and indicators. For example, the Fujitsu Primergy BX960 S1 only offers an on-off switch and an ID button. This same server has LED indicators for power, system status, LAN connection, identification and CSS.

Often the blade chassis contains additional controls and indicators. For example, some HP chassis include a built-in LCD panel that allows the administrator to perform various configuration and diagnostic tasks, such as firmware updates. The precise number and purpose of each control or indicator will vary with each manufacturer and its blade chassis design.

Blade server management features

Given that blade servers tend to be used in high-density environments, management capabilities are central. Blade servers should offer diagnostic and management capabilities at both the hardware and the software level.

Hardware-based management features. Hardware-level monitoring capabilities exist so that administrators can monitor server health regardless of the OS that is running on the server. Intelligent Platform Management Interface (IPMI) is one of the most common hardware management standards and is used by the Dell PowerEdge M910 and the Super Micro SBA-7142G-T4.

IPMI uses a dedicated low-bandwidth network port to communicate a server’s status to IPMI-compliant management software. Because IPMI works at the hardware level, the server can communicate its status regardless of the applications that run on the server. In fact, because IPMI works independently of the main processor, it works even if a server isn’t turned on. The IPMI hardware can do its job as long as a server is connected to a power source.

Blade servers that support IPMI 2.0 almost always include a dedicated network port within the server’s chassis that can be used for IPMI-based management. Typically, a single IPMI port services all servers within a chassis. Unlike a rack server, each server doesn’t need its own management port.

Blade servers can get away with sharing an IPMI port because of the types of management that IPMI-compliant management software can perform. Such software (running on a PC) is used to monitor things like temperature, voltage and fan speed. Some server manufacturers even include IPMI sensors that are designed to detect someone opening the server’s case. As previously mentioned, blade servers do not have their own fans or power supplies. Cooling and power units are chassis-level components.
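
As a rough illustration of this kind of hardware-level polling, the sketch below shells out to the open source ipmitool utility to read the chassis sensor table over the network; the host address and credentials are placeholders, and ipmitool must be installed and reachable from the management station.

```python
# Illustrative only: poll a blade chassis BMC over the network with ipmitool.
# Host, user and password below are placeholders; ipmitool must be installed.
import subprocess

IPMI_HOST = "192.0.2.10"   # management port on the chassis (placeholder)
IPMI_USER = "admin"        # placeholder credentials
IPMI_PASS = "changeme"

result = subprocess.run(
    ["ipmitool", "-I", "lanplus", "-H", IPMI_HOST,
     "-U", IPMI_USER, "-P", IPMI_PASS, "sensor"],
    capture_output=True, text=True, check=True)

# Print only temperature and fan readings from the sensor table.
for line in result.stdout.splitlines():
    if "Temp" in line or "Fan" in line:
        print(line)
```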

Software-based management features. Although most servers offer hardware-level management capabilities, each server manufacturer also provides their own management software as well, although sometimes at an extra cost. Dell, for example, has the management application OpenManage, while HP provides a management console known as the HP Systems Insight Manager (SIM). Hardware management tools tend to be diagnostic in nature, while software-based tools also provide configuration capabilities. You might, for example, use a software management tool to configure a server’s storage array. As a general rule, hardware management is fairly standardized.

Multiple vendors support IPMI and baseboard management controller (BMC), which is another hardware management standard.  Some servers, such as the Dell PowerEdge M910, support both standards. Management software, on the other hand, is vendor-specific. You can’t, for example, use HP SIM to manage a Dell server. But you can use a vendor’s management software to manage different server lines from that vendor. For example, Dell OpenManage works with Dell’s M series blade servers, but you can also use it to manage Dell rack servers such as the PowerEdge R715.

Because of the proliferation of management software, server management can get complicated in large data centers. As such, some organizations try to use servers from a single manufacturer to ease the management burden. In other cases, it might be possible to adopt a third-party management tool that can support heterogeneous hardware, though the gain in heterogeneity often comes at a cost of management granularity. It’s important to review each management option carefully and select a tool that provides the desired balance of support and detail.

ABOUT THE AUTHOR: Brien M. Posey has received Microsoft’s Most Valuable Professional award six times for his work with Windows Server, IIS, file systems/storage and Exchange Server. He has served as CIO for a nationwide chain of hospitals and healthcare facilities and was once a network administrator for Fort Knox.

This was last published in April 2011.

The vSAN stretched cluster type spreads HCI love for HA, DR

VMware vSAN stretched clusters enable admins to spread hyper-converged infrastructures across two physical locations. Learn more about them and their benefits.

Robert Sheldon

Contributor – SearchSQLServer

A hyper-converged infrastructure based on VMware virtualization technologies uses VMware’s vSAN to provide software-defined storage to the HCI cluster. VMware supports several types of vSAN clusters, including the stretched cluster.

Stretched clusters let administrators implement an HCI that spans two physical locations. An IT team can use a stretched cluster as part of its disaster recovery strategy or to manage planned downtime to ensure the cluster remains available and no data is lost.

In this article, we dig into the stretched cluster concept to get a better sense of what it is and how it works. But first, let’s delve a little deeper into VMware vSAN and the different types of clusters VMware’s HCI platform supports.

The vSAN cluster

An HCI provides a tightly integrated environment for delivering virtualized compute and storage resources and, to a growing degree, virtualized network resources. It’s typically made up of x86 hardware that’s optimized to support specific workloads. HCIs are known for being easier to implement and administer than traditional systems, while reducing capital and operational expenditures, when used for appropriate workloads. Administrators can centrally manage the infrastructure as a single, unified platform.

Some HCIs, such as the Dell EMC VxRail, are built on VMware virtualization technologies, including vSAN and the vSphere hypervisor. VMware has embedded vSAN directly into the hypervisor, resulting in deep integration with the entire VMware software stack.

An HCI based on vSAN is made up of multiple server nodes that form an integrated cluster, with each node having its own DAS. The vSphere hypervisor is also installed on each node, making it possible for vSAN to aggregate the cluster’s DAS devices to create a single storage pool shared by all hosts in the cluster.

VMware supports three types of clusters. The first is the standard cluster, located in a single physical site with a minimum of three nodes and maximum of 64. VMware also supports a two-node cluster for smaller implementations, but it requires a witness host to serve as a tiebreaker if the connection is lost between the two nodes.

The third type of cluster VMware vSAN supports is the stretched cluster.

The vSAN stretched cluster

A stretched cluster spans two physically separate sites and, like a two-node cluster, requires a witness host to serve as a tiebreaker. The cluster must include at least two hosts, one for each site, but it will support as many as 30 hosts across the two sites.

When VMware first introduced the stretched cluster, vSAN required hosts to be evenly distributed across the two sites. As of version 6.6, vSAN supports asymmetrical configurations that allow one site to contain more hosts than the other. However, the two sites combined are still limited to 30 hosts.
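
Those sizing rules (at least one host per data site, no more than 30 data hosts across both sites, plus a witness at a third location) can be summarized in a short illustrative check; the function below is a hypothetical helper, not part of the vSAN API.

```python
# Illustrative check of the stretched-cluster sizing rules described above
# (hypothetical helper, not a VMware API): at least one host per data site,
# no more than 30 data hosts across both sites, and a witness host at a third site.
def validate_stretched_cluster(preferred_hosts, secondary_hosts, has_witness):
    errors = []
    if preferred_hosts < 1 or secondary_hosts < 1:
        errors.append("each data site needs at least one host")
    if preferred_hosts + secondary_hosts > 30:
        errors.append("the two sites combined are limited to 30 hosts")
    if not has_witness:
        errors.append("a witness host is required as a tiebreaker")
    return errors

# Asymmetrical configurations are allowed as of vSAN 6.6, e.g. 20 + 10 hosts.
print(validate_stretched_cluster(20, 10, has_witness=True))   # -> []
print(validate_stretched_cluster(25, 10, has_witness=True))   # -> 30-host limit error
```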

Because the vSAN cluster is fully integrated into vSphere, it can be deployed and managed just like any other cluster. The cluster provides load balancing across sites and can offer a higher level of availability than a single site. Data is replicated between the sites to avoid a single point of failure. If one site goes offline, the vSphere HA (High Availability) utility launches the virtual machines (VMs) on the other site, with minimum downtime and no data loss.

A stretched cluster is made up of three fault domains: two data sites and one witness host. A fault domain is a term that originated in earlier vSAN versions to describe VM distribution zones that support cross-rack fault tolerance. If the VMs on one rack became unavailable, they could be made available on the other rack (fault domain).

A stretched cluster works much the same way, with each site in its own fault domain. One data site is designated as the preferred site (or preferred fault domain) and the other is designated as the secondary site. The preferred site is the one that remains active if communication is lost between the two sites. Storage on the secondary site is then considered to be down and the components absent.

The witness host is a dedicated ESXi host — physical server or virtual appliance — that resides at a third site. It stores only cluster-specific metadata and doesn’t participate in the HCI storage operations, nor does it store or run any VMs. Its sole purpose is to serve as a witness to the cluster, primarily acting as a tiebreaker when network connectivity between the two sites is lost.

During normal operations, both sites are active in a stretched cluster, with each maintaining a full copy of the VM data and the witness host maintaining VM object metadata specific to the two sites. In this way, if one site fails, the other can take over and continue operations, with little disruption to services. When the cluster is fully operational, the two sites and the witness host are in constant communication to ensure the cluster is fully operational and ready to switch over to a single site should disaster occur.
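
The tiebreaker behavior can be sketched conceptually as follows (this is an illustration of the logic described above, not VMware code): a data site stays active only while it can still reach a majority of fault domains, and when the inter-site link fails while both sites can still see the witness, the witness sides with the preferred site.

```python
# Conceptual illustration (not VMware code) of the stretched-cluster tiebreak:
# a data site stays active only if it can still reach a majority of fault domains,
# i.e. the other data site or the witness. If the two sites lose each other but
# both still reach the witness, the witness sides with the preferred site.
def active_site(preferred_reaches, secondary_reaches):
    """Each argument is the set of fault domains that data site can still reach."""
    sites_linked = "secondary" in preferred_reaches and "preferred" in secondary_reaches
    if sites_linked:
        return "both"              # normal operation, both sites active
    if "witness" in preferred_reaches:
        return "preferred"         # witness breaks the tie for the preferred site
    if "witness" in secondary_reaches:
        return "secondary"         # preferred site is down or fully isolated
    return "none"                  # no quorum anywhere; cluster offline

# Inter-site link down, both sites still see the witness: the preferred site wins.
print(active_site({"witness"}, {"witness"}))   # -> preferred
```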

A VMware vSAN stretched cluster illustrated: a stretched cluster allows admins to spread an HCI across two physical locations for disaster recovery and other high availability purposes.

The HCI-VMware mix

Administrators can use VMware vCenter Server to deploy and manage a vSAN stretched cluster, including the witness host. With vCenter, they can carry out tasks such as changing a site designation from secondary to primary or configuring a different ESXi host as the witness host. Implementing and managing a stretched cluster is much like setting up a basic cluster, except you must have the necessary infrastructure in place to support two locations.

For organizations already committed to HCIs based on VMware technologies, the stretched cluster could prove a useful tool as part of their DR strategies or planned maintenance routines. For those not committed to VMware but considering HCI, the stretched cluster could provide the incentive to go the VMware route.

This was last published in May 2019

Microsoft and Oracle join forces to offer inter-cloud connectivity

Pair will provide direct connections between their clouds, enabling workloads to use services across Oracle and Azure public clouds

Cliff Saran

Managing Editor – TechTarget – ComputerWeekly.com

06 Jun 2019 9:49

Oracle has partnered with Microsoft to offer interoperability across their respective cloud services. The companies say the agreement will enable customers to migrate and run the same enterprise workloads across both Microsoft Azure and Oracle Cloud.

Through the partnership, the pair said enterprises would be able to connect Azure services, such as Analytics and AI, to Oracle Cloud services, including Autonomous Database. By enabling customers to run one part of a workload within Azure and another part of the same workload within the Oracle Cloud, the partnership delivers a highly optimised, best-of-both-clouds experience, say Microsoft and Oracle.

Scott Guthrie, executive vice-president of Microsoft’s cloud and AI division, said: “As the cloud of choice for the enterprise, with over 95% of the Fortune 500 using Azure, we have always been, first and foremost, focused on helping our customers thrive on their digital transformation journeys.”

Don Johnson, executive vice-president, Oracle Cloud Infrastructure (OCI), said: “Oracle and Microsoft have served enterprise customer needs for decades. With this partnership, our joint customers can migrate their entire set of existing applications to the cloud without having to re-architect anything, preserving the large investments they have already made.”

Organisations that run Oracle and Microsoft systems said they would find the partnership beneficial.

Ken Braud, senior vice-president and CIO at Halliburton, said: “This alliance gives us the flexibility and ongoing support to continue leveraging our standard architectures, while allowing us to focus on generating business outcomes that maximise returns for our shareholders.”

The partnership provides multicloud flexibility for organisations to support new business opportunities. Sally Gilligan, chief information officer at Gap, said: “As we look to bring our omnichannel experience closer together and transform the technology platform that powers the Gap brands, the collaboration between Oracle and Microsoft will make it easier for us to scale and deliver capabilities across channels.”
