When a NoOps implementation is — and when it isn’t — the right choice

How does your organization approach automation and NoOps initiatives?

NoOps skills and tools are highly useful regardless of the IT environment, but site reliability engineering brings operations admins into development when an organization can’t afford to lose them.

Emily Mell

Assistant Site Editor – SearchITOperations

For some organizations, NoOps is a no-go; for others, it’s the only way to go.

Some organizations envision that the peak infrastructure and application automation and abstraction NoOps promises will eliminate the need for operations personnel to manage the IT environment.

But not every IT environment is cut out for a NoOps implementation. Site reliability engineering (SRE) is a useful middle ground for organizations that aren’t ready or equipped for NoOps.

Will NoOps replace DevOps?

The truth is that NoOps is a scenario reserved almost exclusively for startup organizations that start out on A-to-Z IT automation tools and software. In cloud deployments, with no hardware or data centers to manage, developers can code or automate operations tasks — and operations administrators might struggle to find a seat at the table.

Instead, organizations that start from day one with automated provisioning, deployment, management and monitoring should hire an operations specialist or an IT infrastructure architect to help the development team set up environments and pipelines, said Maarten Moen, consultant at 25Friday, a consulting agency based in Hilversum, North Holland, which helps startups set up NoOps environments.
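
In practice, letting developers code or automate operations tasks usually means provisioning infrastructure through an SDK or infrastructure-as-code tooling rather than a ticket queue. Below is a minimal sketch of that idea in Python with the AWS SDK (boto3); the region, AMI ID and tag values are placeholders, not details from 25Friday's engagements, and a real pipeline would keep this in version-controlled templates.

```python
# A minimal sketch of "operations as code": provisioning a small instance with
# the AWS SDK for Python (boto3). The AMI ID, region and tag values are
# placeholders, not anything prescribed in the article.
import boto3

ec2 = boto3.resource("ec2", region_name="eu-west-1")

instances = ec2.create_instances(
    ImageId="ami-0123456789abcdef0",   # hypothetical AMI ID
    InstanceType="t3.micro",
    MinCount=1,
    MaxCount=1,
    TagSpecifications=[{
        "ResourceType": "instance",
        "Tags": [{"Key": "team", "Value": "product"}],
    }],
)
print("Launched:", [i.id for i in instances])
```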

“Development or technology engineers move more to a full-stack engineering background,” Moen said. “They can do the front and back end, but also the infrastructure and the cloud structure behind it.”

DevOps is still too siloed and interdependent for the speed of modern business activity, Moen said. This assessment holds weight for the startups 25Friday advises: there is no reason for a five-person company to split into distinct development and operations teams.

Instead, Moen suggests organizations instate a center of excellence, with one to three senior-level operations consultants or advisors to help development teams set up — but not implement — the infrastructure and share best practices. Development teams implement the infrastructure so that they’re familiar with how it works and how to maintain it.

Legacy applications are incompatible with NoOps

A NoOps changeover won’t work for organizations with established legacy applications — which, in this case, means any app five years old or more — and sizeable development and operations teams. The tools, hardware and software necessary to maintain these apps require operational management. Odds are high that a legacy application would need to be completely rebuilt to accommodate NoOps, which is neither cost-effective nor reasonably feasible.

Moreover, most organizations don’t have a loosely coupled system that facilitates a NoOps structure, said Gary Gruver, consultant and author of several DevOps books. Small independent teams can only do so much, and building a wall between them doesn’t make sense. In the end, someone must be accountable to ensure that the application and infrastructure functions in production, both on premises and in cloud environments, he said.

NoOps vs. SRE

When a larger organization adopts infrastructure-as-code tools, senior operations staff often act as advisors, and they then shift into engineering roles.

But downsizing isn’t always the answer. The SRE role, which emphasizes automation to reduce human error and ensure speed, accuracy and reliability, has supplanted the operations administrator role in many IT organizations because it places ops much closer to, or even within, the development team.
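
The automation at the heart of the SRE role often starts with small scripts that encode what an operator would otherwise check by hand. Here is a minimal, generic sketch in Python; the endpoint URL, retry counts and alert output are hypothetical stand-ins for whatever probe and paging system a team actually runs.

```python
# A small sketch of routine reliability automation: probe a service endpoint,
# retry with backoff, and flag it for follow-up instead of paging a human for
# every transient blip. The URL and thresholds are illustrative only.
import time
import urllib.error
import urllib.request

def check_health(url: str, attempts: int = 3, backoff: float = 2.0) -> bool:
    for attempt in range(1, attempts + 1):
        try:
            with urllib.request.urlopen(url, timeout=5) as resp:
                if resp.status == 200:
                    return True
        except (urllib.error.URLError, TimeoutError):
            pass
        time.sleep(backoff * attempt)   # simple linear backoff between retries
    return False

if __name__ == "__main__":
    if not check_health("https://example.internal/healthz"):
        print("ALERT: service failed repeated health checks")
```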

“It’s operations work, and it has been, whether we’re analysts, sys admins, IT operations, SREs, DevOps — it doesn’t matter,” said Jennifer Davis, principal SRE at RealSelf Inc., a cosmetic surgery review company based in Seattle, Wash., and an O’Reilly Media author, in a talk at the Velocity conference last month in New York.

Operations work can be done by anyone, but organizations differ on how they handle that fact. Does an organization eliminate the operations team altogether or simply reorganize them into SREs and retain their operational knowledge and experience?

At RealSelf, Davis’ SRE team educates and mentors developers in all aspects of operations via positions in sprint teams.

“Some of the critical skills that we have as operations folks [are] filling in gaps with relevant information, identifying information that is nonessential, and prioritizing a wide array of work,” she said.*

Is NoOps feasible?

NoOps implementations by 25Friday have been successful, Moen said. RealSelf’s Davis, however, argues that NoOps is, in general, not feasible for non-startups.

“Basically, NoOps is the same thing as no pilots or no doctors,” Davis said. “We need to have pathways to use the systems and software that we create. Those systems and software are created by humans — who are invaluable — but they will make mistakes. We need people to be responsible for gauging what’s happening.”

Human fallibility has driven the move to scripting and automation in IT organizations for decades. Companies should strive to have as little human error as possible, but also recognize that humans are still vital for success.

Comprehensive integration of AI into IT operations tools is still several years away, and even then, AI will rely on human interaction to operate with the precision expected. Davis likens the situation to the ongoing drive for autonomous cars: They only work if you eliminate all the other drivers on the road.

Next steps for operations careers

As IT organizations adopt modern approaches to deployment and management, such as automated tools and applications that live on cloud services, the operations team will undoubtedly shrink. Displaced operations professionals must then retrain in other areas of IT.

Some move into various development roles, but quality assurance and manual testing are popular career shifts in Europe, especially for old-fashioned professionals whose jobs have been automated out of their hands, Moen said. SRE is another path that requires additional training, but not a complete divergence from one’s existing job description.

Admins should brush up on scripting skills, as well as the intricacies of distributed platforms, so they are ready both to hand over newer self-service infrastructure to developers and to train them to use it properly. They should study chaos engineering to help developers create more resilient applications, and they should team up with both the ops and dev teams to create best practice guidelines for managing technical debt in the organization. This becomes paramount as infrastructure layouts grow more complex.
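
Chaos engineering can begin as something as modest as wrapping a dependency call with injected latency or failures in a test environment so developers see how their code copes. A toy Python sketch, with arbitrary failure rates and delays that are purely illustrative:

```python
# A toy fault-injection wrapper in the spirit of chaos engineering: randomly
# add latency or raise an error around a dependency call so developers can see
# how their code copes. Rates and delays here are arbitrary examples.
import functools
import random
import time

def chaos(failure_rate=0.1, max_delay=2.0):
    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            time.sleep(random.uniform(0, max_delay))      # injected latency
            if random.random() < failure_rate:            # injected failure
                raise ConnectionError("chaos: injected dependency failure")
            return func(*args, **kwargs)
        return wrapper
    return decorator

@chaos(failure_rate=0.2)
def fetch_profile(user_id):
    return {"id": user_id, "name": "example"}

if __name__ == "__main__":
    for i in range(5):
        try:
            print(fetch_profile(i))
        except ConnectionError as err:
            print("handled:", err)
```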

NoOps and SRE are both possible future directions on IT organizations’ radars. NoOps mini-environments can live within a fuller DevOps environment, which can be managed by site reliability engineers. Ultimately, the drive for automation must reign, but not with the extermination of an entire profession that ensures its continued success.

Site editor David Carty contributed to this article.

*Information changed after publication.

This was last published in November 2018

AIOps tools supplement — not supplant — DevOps pipelines

How does your DevOps team apply AIOps to the data your CI/CD toolchain churns out?

While the line between DevOps and AIOps often seems blurred, the two disciplines aren’t synonymous. Instead, they present some key differences in terms of required skill sets and tools.

Will Kelly

DevOpsAgenda

Artificial intelligence for IT operations, or AIOps, applies AI to data-intensive and repetitive tasks across the continuous integration and delivery (CI/CD) toolchain.

DevOps professionals cite monitoring, task automation and CI/CD pipelines as prime areas for AIOps tools, but there’s still a lack of clarity around when and how broadly teams should apply AI practices.

Where AI meets IT ops

The terms AIOps and DevOps are both common in product marketing, but they’re not always used accurately. DevOps is driving a cultural shift in how organizations are structured, said Andreas Grabner, DevOps activist at Dynatrace, an application performance management company based in Waltham, Mass.

AIOps tools enable an IT organization’s traditional development, test and operations teams to evolve into internal service providers to meet the current and future digital requirements of their customers — the organization’s employees.

AIOps platforms can also help enterprises monitor data across hybrid architectures that span legacy and cloud platforms, Grabner said. These complex IT environments demand new tools and technologies, which both require and generate more data. Organizations need a new approach to capture and manage that data throughout the toolchain — which, in turn, drives the need for AIOps tools and platforms.

AIOps can also be perceived as a layer that runs on top of DevOps tools and processes, said Darren Chait, COO and co-founder of Hugo, a provider of team collaboration tools based in San Francisco. Organizations that want to streamline data-intensive, manual and repetitive tasks — such as ticketing — are good candidates for an AIOps platform proof-of-concept project.

In addition, AIOps tools offer more sophisticated monitoring capabilities than other software in the modern DevOps stack. AIOps tools, for example, monitor any changes in data that might have a significant effect on the business, such as those related to performance and infrastructure configuration drift. That said, AIOps tools might be unnecessary for simple monitoring requirements that are linear and straightforward.
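
For a simple, linear monitoring need, the underlying math is not exotic; what AIOps platforms add is scale, learned baselines and correlation across many data sources. A deliberately naive Python sketch of drift detection on a single metric, using invented latency numbers:

```python
# A bare-bones sketch of the kind of check an AIOps tool generalizes: flag a
# metric sample that drifts several standard deviations from its recent
# baseline. Real platforms use far richer models; the numbers are made up.
from statistics import mean, stdev

def is_anomalous(history, sample, threshold=3.0):
    if len(history) < 2:
        return False
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return sample != mu
    return abs(sample - mu) / sigma > threshold

latency_ms = [102, 98, 105, 99, 101, 97, 103, 100]   # recent baseline
print(is_anomalous(latency_ms, 104))   # False: within normal variation
print(is_anomalous(latency_ms, 240))   # True: likely worth investigating
```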

The line between AIOps and DevOps

DevOps and AIOps tools are both useful in CI/CD pipelines and for production operations tasks, such as monitoring, systems diagnosis and incident remediation. But while there is some overlap between them, each of these tool sets is unique in its requirements for effective implementation. For example, AIOps automates machine learning model training to complete a software build. AIOps tools must be adaptive to machine-learning-specific workflows that can handle recursion to support continuous machine learning model training.

The AIOps automation design approach is fundamentally different from straightforward repetition: the machine learning training process is recursive and conditional in nature, largely dependent upon the accuracy rating of procured data. The design approach also depends on selective data-extraction algorithms.
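
As a rough, framework-agnostic sketch of that recursive, conditional flow, the Python below keeps retraining until a target accuracy is reached; the train, evaluate and refresh functions are simulated placeholders, not any real toolchain's API.

```python
# A schematic sketch of the recursive, conditional loop described above: keep
# retraining while measured accuracy falls short of a target, then let the
# pipeline promote the build. The train/evaluate stand-ins just simulate a
# model that improves as more data arrives; they are not a real ML framework.
import random

def train(dataset):
    return {"trained_on": len(dataset)}          # placeholder "model"

def evaluate(model):
    # pretend accuracy improves with training data volume, plus noise
    return min(1.0, model["trained_on"] / 1000 + random.uniform(-0.05, 0.05))

def refresh_data(dataset):
    return dataset + [0] * 100                   # pretend new samples arrived

def train_until_good_enough(dataset, target=0.95, max_rounds=10):
    model, score = None, 0.0
    for _ in range(max_rounds):
        model = train(dataset)
        score = evaluate(model)
        if score >= target:                      # conditional gate
            break
        dataset = refresh_data(dataset)          # recurse over fresher data
    return model, score

print(train_until_good_enough([0] * 500))
```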

In terms of tool sets, DevOps engineers see Jenkins, CircleCI, Travis, Spinnaker and Jenkins X as CI/CD industry standards, but they aren’t AIOps-ready like tools such as Argo — at least not yet.

So, while AIOps augments DevOps with machine learning technology, AIOps isn’t the new DevOps — and ops teams should ignore the hype that tells them otherwise.

This was last published in January 2019

What roles do vCPE and uCPE have at the network edge?

Why might your organization avoid implementing universal or virtual CPE?

As service providers look to virtualize the edge and deliver network services faster, they’re turning to vCPE and uCPE that run services as software on generic hardware.

John Burke

CIO and Principal Research Analyst – Nemertes Research – SearchEnterpriseWAN

While software-defined networking is only just starting to gain significant traction within enterprise networks, it has transformed network service providers. They are bringing SDN technology to the enterprise edge now, in the form of virtualized customer premises equipment, or vCPE.

In the past, a provider delivering a set of network services would use specialized hardware to deliver the services. This hardware was most often a branch router, and it sometimes included one or more separate firewalls, a distributed denial-of-service defense device or WAN optimizer — all in the branch.

Now, the goal is to have generic hardware at the branch and have it run some or all of the software needed to provide all of the desired services.

Shifting the burden from hardware development to software development brings providers unprecedented flexibility and agility in addressing market needs. It also simplifies deployments if a single box — a so-called universal CPE, or uCPE — can run the branch end of any service needed.

VNFs: Making services more modular

The traditional software-only delivery of a network function has focused on virtual appliances, which tend to fully replicate the functions of a hardware appliance in a single unit. The network functions virtualization approach separates the appliance into smaller function parcels — virtual network functions (VNFs) that cooperate to deliver network functionality.

A service provider can dynamically push VNFs down to the CPE platform to run on the customer premises, run the VNFs in server resources on the provider side of the edge or even run them in their own core — wherever makes the most sense for that service and that customer. Firewall functionality, for example, can often be best delivered on the provider side of a link — why deliver a packet that will be thrown away as soon as it hits the CPE? But compression services are best delivered on the customer side to maximize their effect.
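
One toy way to express that placement reasoning in code: functions that mostly discard traffic run on the provider edge, while functions that shrink or transform traffic run on the customer CPE. The catalog and names below are invented for illustration, not a provider specification.

```python
# A toy expression of VNF placement logic: functions that mostly discard
# traffic run provider-side; functions that shrink or transform traffic run
# customer-side. The catalog and rules are illustrative only.
PLACEMENT_RULES = {
    "firewall":      "provider-edge",   # drop unwanted packets before the last mile
    "ddos-defense":  "provider-edge",
    "wan-optimizer": "customer-cpe",    # compress before traffic crosses the link
    "compression":   "customer-cpe",
}

def place_vnf(vnf_name, default="provider-core"):
    return PLACEMENT_RULES.get(vnf_name, default)

for vnf in ("firewall", "compression", "routing"):
    print(vnf, "->", place_vnf(vnf))
```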

Changing the WAN

Virtualization, uCPE and vCPE are creating new opportunities for both providers and customers to adopt new services, try new platforms and transform their IT infrastructures. Enterprises are keenly interested in software-defined WAN right now, and many providers use a vCPE model to deliver SD-WAN.

Some providers adopt a fully edge-hosted model, in which a uCPE box hosts a complete SD-WAN package — one that could run on dedicated hardware. Others deploy a hybrid edge or cloud model, where the SD-WAN depends — to some extent — on services delivered from the provider’s cloud. Still, others have a fully cloud-hosted model, like network-as-a-service providers delivering SD-WAN as a feature set service.

Whichever model a service provider uses, the number and breadth of vCPE deployments are exploding in the wake of providers’ internal SDN transitions and with the strength of interest in SD-WAN.

This was last published in January 2019

app virtualization (application virtualization)

What is the biggest benefit of application virtualization?

App virtualization (application virtualization) is the separation of an installation of an application from the client computer accessing it.

Posted by: Margaret Rouse

WhatIs.com

Contributor(s): Jack Madden and Robert Sheldon

From the user’s perspective, the application works just like it would if it lived on the user’s device. The user can move or resize the application window, as well as carry out keyboard and mouse operations. There might be subtle differences at times, but for the most part, the user should have a seamless experience.

How application virtualization works

Although there are multiple ways to virtualize applications, IT teams often take a server-based approach, delivering the applications without having to install them on individual desktops. Instead, administrators implement remote applications on a server in the company’s data center or with a hosting service, and then deliver them to the users’ desktops.

To make this possible, IT must use an application virtualization product. Application virtualization vendors and their products include Microsoft App-V, Citrix XenApp, Parallels Remote Application Server, and VMware ThinApp or App Volumes — both of which are included with VMware Horizon View. VMware also offers Horizon Apps to further support app virtualization.

The virtualization software essentially transmits the application as individual pixels from the hosting server to the desktops using a remote display protocol such as Microsoft RemoteFX, Citrix HDX, or VMware View PCoIP or Blast Extreme. The user can then access and use the app as though it were installed locally. Any user actions are transmitted back to the server, which carries them out.
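
A drastically simplified Python sketch of that loop follows: a "server" pushes screen updates as bytes and a "client" returns input events, with an in-process socket pair standing in for the network and for everything a real protocol such as RemoteFX, HDX, PCoIP or Blast Extreme adds (codecs, compression, encryption).

```python
# A drastically simplified sketch of the remote-display idea: the "server"
# pushes screen updates as bytes, the "client" sends back input events.
# socketpair() stands in for the network connection between them.
import socket

server_sock, client_sock = socket.socketpair()

# Server side: send one mock "frame" of pixel data, length-prefixed.
frame = b"\x00" * 16                      # pretend these bytes are pixels
server_sock.sendall(len(frame).to_bytes(4, "big") + frame)

# Client side: receive the frame, then report a keyboard event back.
size = int.from_bytes(client_sock.recv(4), "big")
pixels = client_sock.recv(size)
print("client drew", len(pixels), "bytes of pixels")
client_sock.sendall(b"KEYPRESS a")

# Server side: apply the input event to the hosted application.
event = server_sock.recv(64)
print("server applied input event:", event.decode())

server_sock.close()
client_sock.close()
```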

Benefits of app virtualization

App virtualization can be an effective way for organizations to implement and maintain their desktop applications. One of the benefits of application virtualization is that administrators only need to install an application once to a centralized server rather than to multiple desktops. This also makes it simpler to update applications and roll out patches.

In addition, administrators have an easier time controlling application access. For example, if a user should no longer be able to access an application, the administrator can deny access permissions to the application without having to uninstall it from the user’s desktop.

App virtualization makes it possible to run applications that might conflict with a user’s desktop applications or with other virtualized applications.

Users can also access virtualized applications from thin clients or non-Windows computers. The applications are immediately available, without having to wait for long install or load operations. If a computer is lost or stolen, sensitive application data stays on the server and does not get compromised.

Drawbacks of app virtualization

Application virtualization does have its challenges, however. Not all applications are suited to virtualization. Graphics-intensive applications, for example, can get bogged down in the rendering process. In addition, users require a steady and reliable connection to the server to use the applications.

The use of peripheral devices can get more complicated with app virtualization, especially when it comes to printing. System monitoring products can also have trouble with virtualized applications, making it difficult to troubleshoot and isolate performance issues.

What about streaming applications?

With streaming applications, the virtualized application runs on the end user’s local computer. When a user requests an application, the local computer downloads its components on demand. Only certain parts of an application are required to launch the app; the remainder download in the background as needed.

Once completely downloaded, a streamed application can function without a network connection. Various models and degrees of isolation ensure that streaming applications do not interfere with other applications, and that they can be cleanly removed when the user closes the application.
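
A small Python sketch of that streaming model: fetch only the components needed to launch, then pull the rest in a background thread. The component names and the fetch function are hypothetical; a real packager also manages isolation and caching.

```python
# A small sketch of the streaming model: fetch only the components needed to
# launch, then pull the rest in a background thread. fetch_component() is a
# hypothetical stand-in for a real download.
import threading
import time

REQUIRED_TO_LAUNCH = ["core", "ui"]
OPTIONAL = ["spellcheck", "templates", "help"]

def fetch_component(name):
    time.sleep(0.1)                 # stand-in for a download
    return f"{name}-bytes"

def launch():
    cache = {name: fetch_component(name) for name in REQUIRED_TO_LAUNCH}
    print("app launched with:", sorted(cache))

    def stream_rest():
        for name in OPTIONAL:       # background download, on demand
            cache[name] = fetch_component(name)
        print("fully cached; app now works offline:", sorted(cache))

    threading.Thread(target=stream_rest, daemon=True).start()
    return cache

if __name__ == "__main__":
    launch()
    time.sleep(1)                   # keep the process alive for the demo
```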

This was last updated in September 2017

Windows Autopilot

How does Windows Autopilot stack up against other vendors’ enrollment and configuration tools?

Windows Autopilot is a desktop provisioning tool native to Windows 10 that allows IT professionals to automate image deployment of new desktops with preset configurations.

Posted by: Margaret Rouse

WhatIs.com

Contributor(s): John Powers

With Windows Autopilot, IT professionals can set new desktops to join pre-existing configuration groups and apply profiles to the desktops so new users can access fully functional desktops from their first logon. Windows Autopilot can simplify the out-of-box experience (OOBE) for new desktop users in an organization.

To use Windows Autopilot, IT must connect the devices to the Microsoft Azure portal and enroll them in Microsoft Azure Active Directory. Once IT enrolls the device or devices, it can assign a desktop image to each user before users register their devices. With the images in place, the final step is for users to log on and input their company credentials for identity verification.

How it works

The Windows Autopilot device registration process begins with IT logging a new device’s hardware ID and device type. Windows Autopilot requires IT professionals to add this information in the form of a comma-separated values (CSV) file to their organization’s Windows Autopilot registry.
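
A minimal Python sketch of assembling that CSV follows. The column names reflect Microsoft's commonly documented Autopilot import layout (device serial number, Windows product ID, hardware hash) rather than anything specified in this article, and the device rows are fabricated; verify the exact header your tenant expects before importing.

```python
# A sketch of assembling the device registration file with Python's csv
# module. Column names follow the commonly documented Autopilot layout; the
# rows below are fake example data, not real hardware hashes.
import csv

devices = [
    {"serial": "SN-0001", "product_id": "", "hash": "T0FLRS1IQVNILTE="},
    {"serial": "SN-0002", "product_id": "", "hash": "T0FLRS1IQVNILTI="},
]

with open("autopilot-devices.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["Device Serial Number", "Windows Product ID", "Hardware Hash"])
    for d in devices:
        writer.writerow([d["serial"], d["product_id"], d["hash"]])

print("wrote autopilot-devices.csv with", len(devices), "devices")
```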

Each device needs a Windows Autopilot profile as well, which defines the terms of the device’s desktop deployment. The profiles can overwrite local desktop administrator privileges, disable Microsoft Cortana and other local applications, apply custom privacy settings, and more.

Autopilot allows IT to create custom deployment profiles.

Once IT professionals define the profiles for each new device they configure with Windows Autopilot, they should wait for the user to first access the device before taking any further action. Once the user accesses the device and loads the new desktop, IT can perform its typical endpoint management practices.

Windows Autopilot includes several zero-touch provisioning (ZTP) features, such as self-deploying mode, which offers the simplest experience for the end user. All the users have to do is input their information and watch as the desktop provides updates on their enrollment status.

Benefits/drawbacks

Windows Autopilot lets IT professionals quickly deploy profiles to new devices with basic profile settings. The simplistic CSV file format for inputting new devices allows IT to deploy preset profiles to a large number of devices all at once.

Windows Autopilot doesn’t offer as many desktop profile options as some other configuration tools. Windows Configuration Designer, for example, is a tool designed for provisioning in bring your own device (BYOD) use cases that provides a high level of control over each profile by allowing IT to alter more specific configurations. Microsoft Intune, an enterprise mobility management platform in Microsoft’s cloud-based Enterprise Mobility and Security offering, provides additional mobile device management options.

IT professionals can use Windows Autopilot with these additional utilities to perform different provisioning functions. Windows Autopilot offers quick configurations and a simple OOBE for end users, and other tools target more complicated desktop image profiles or profiles for mobile devices.

This was last updated in December 2018

How to address endpoint security issues caused by users

Which end-user security mistakes have you come across most often?

Certain behaviors, such as ignoring patches, create security issues on the endpoints users work with. IT should enforce policies that prevent users from taking these damaging actions.

Kevin Beaver

Principle Logic, LLC – SearchSecurity

A crucial function of endpoint security is protecting users from their own mistakes and missteps.

From human error to technical oversights and weaknesses in business processes, there are many ways that users can cause endpoint security issues. Users can make mistakes even if they understand the risks to the business because their desire for expediency and instant gratification is too strong. Some of the problems are the same behaviors IT professionals have been fighting for decades, but others aren’t as obvious.

There’s no amount of security awareness and training that will make this go away completely, but IT professionals must understand each of the endpoint security issues users might cause and the best practices for handling them.

Endpoint security issues caused by users

Choosing weak passwords. Password policies for Windows domains, websites, applications and mobile devices are often lax. Users follow whatever guidance they are given, even if it’s not good advice. This leads them to create passwords that hackers can easily guess or crack. Users sometimes reuse passwords across systems — mixing personal and business passwords — and might write them down on sticky notes.

Ignoring patch notifications. Because most users don’t see the value in running patches and rebooting their desktops and apps, they likely ignore patch notifications, whether the patches are for desktop operating systems, such as Microsoft Windows or Apple macOS, or for third-party software, such as Java and Adobe Acrobat Reader. Doing so creates security vulnerabilities in the endpoints.

Clicking links and opening attachments without question. It’s so simple for hackers to get into a network by phishing users. Users might click malicious links, open unknown attachments or even provide their login credentials when prompted. If phishing security is not up to snuff, no other security controls matter, because once an attacker has a user’s login information, he has full access to the endpoint.

Bypassing security controls. Most of the time, endpoints automatically give users local administrator rights. With these rights, users can perform tasks that are ultimately harmful to their endpoint’s security, such as disabling antimalware software and installing their own questionable software.

Unfortunately, it can be difficult to detect the harmful changes a user might make on his device if he has local admin rights. As a result, IT might not realize that a user has done something dangerous, which could leave business assets exposed.

Connecting to unsecured Wi-Fi. Users might connect to practically any open wireless network without question if it means they can access the internet. Even if IT instructs users to verify their connections and to only use trusted Wi-Fi networks, all those teachings go out the window the second a user needs to get online for just a few minutes to check email or social media.

Buying and selling personal computers without resetting them. It’s amazing how many people don’t reset their computers by reinstalling the OS when they sell them. Users who do not reinstall the OS expose personal information and place business assets, such as virtual private network connections, at risk. It is dangerous to recycle old computers without taking precautions.

How can IT address these endpoint security issues?

Users can be careless and often take the path of least resistance simply because it’s most convenient. In reality, a small number of people and choices cause the majority of endpoint security issues.

IT can’t control user behavior, but it can control users’ desktop permissions. IT professionals must enforce security policies that prevent users from taking harmful actions rather than only telling users how to avoid those actions.

To effectively prevent these endpoint security issues, IT must determine what specific user actions are undermining the security program. IT pros should create processes and controls to prevent user mistakes, evaluate how effective they are and make alterations when necessary to ensure that the policies can handle the latest security threats.
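
As one narrow, hedged example of a control that prevents a mistake rather than merely warning about it, the Python sketch below rejects weak passwords at the point of entry. The length rule, character-class rule and tiny deny list are placeholders for a real policy and breach-list check, not a recommendation for any specific standard.

```python
# One narrow example of a control that prevents a mistake instead of merely
# warning about it: reject weak passwords at the point of entry. The rules and
# the tiny deny list are placeholders for a real policy and breach-list check.
import re

COMMON_PASSWORDS = {"password", "password1", "letmein", "qwerty123", "companyname2019"}

def password_is_acceptable(candidate: str) -> bool:
    if len(candidate) < 12:
        return False
    if candidate.lower() in COMMON_PASSWORDS:
        return False
    classes = [r"[a-z]", r"[A-Z]", r"[0-9]", r"[^A-Za-z0-9]"]
    return sum(bool(re.search(c, candidate)) for c in classes) >= 3

for pw in ("Password1", "password1password1", "S3cure&Longer!"):
    print(pw, "->", "accepted" if password_is_acceptable(pw) else "rejected")
```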

This was last published in December 2018

Are hyper-converged infrastructure appliances my only HCI option?

Do you prefer to buy preconfigured hyper-converged appliances, deploy software-only HCI or build your own configuration?

Preconfigured hyper-converged appliances aren’t your only option anymore. Software-only and build-your-own hyper-converged infrastructure have unique pros and cons.

Alastair Cooke

SearchVirtualDesktop

Freelance trainer, consultant and blogger specializing in server and desktop virtualization

There are multiple ways to approach a hyper-converged infrastructure deployment, some of which give IT a little more control.

When we talk about building a hyper-converged infrastructure (HCI), the mental image is usually one of deploying physical appliances built on high-density servers and spending a few minutes with wizard-driven software. But buying hyper-converged infrastructure appliances is just one way to do it.

As an IT professional, you can also deploy software-only HCI on your own servers. Or you can start from scratch and engineer your own infrastructure using a selection of hardware and software. The further you move away from the appliance model, however, the more you must take responsibility for the engineering of your deployment and problem resolution.

Let’s look more closely at hyper-converged infrastructure appliances and some do-it-yourself alternatives.

Preconfigured hyper-converged appliances

Hyper-converged infrastructure appliances wrap up all of their components into a single orderable product. The vendor does all of the component selection and the engineering to ensure that everything works together and performs optimally.

Usually, the hyper-converged appliance has its own bootstrap mechanism that deploys and configures the hypervisor and software with minimal input from IT. For many customers, this ease of use is a big reason for deploying HCI, making it possible to largely ignore the virtualization infrastructure and focus instead on the VMs it delivers.

Software-only HCI

One of the big reasons for selecting a software-only hyper-converged infrastructure is that it offers hardware choice. You may have a relationship with a preferred server vendor and need to use its hardware. Or you may simply want an unusual combination of server hardware.

Another example is that you may want a lower cost, single-socket server option, particularly if you are deploying to a lot of remote or branch offices. If you are deploying to retail locations, you may need servers that will fit into a shallow communications cabinet rather than a data center depth rack.

Once you select your hardware, you are responsible for the consequences of those choices. If you choose the wrong network interface card or a Serial-Attached SCSI host bus adapter, you may find support is problematic, or performance may not match your expectations.

HCI from scratch

You can also build your own hyper-converged infrastructure using off-the-shelf software and hardware, a hypervisor and some scale-out, software-defined storage (SDS) in VMs.

As with software-only HCI, you are taking responsibility for this decision and its consequences. You can probably buy support for the hypervisor and the SDS, but what about potential interoperability issues between the layers? What is the service level for resolving performance problems?

Building a platform from scratch instead of buying preconfigured hyper-converged infrastructure appliances is only sensible if you have your own full support team providing 24/7 coverage.

This was last published in March 2018

Do I need converged or hyper-converged infrastructure appliances?

What’s the most important factor when choosing between converged and hyper-converged infrastructure?

Scalability, risk tolerance and cost all factor into the decision between converged and hyper-converged infrastructure. The two technologies have very different use cases.

Alastair Cooke

SearchVirtualDesktop

Freelance trainer, consultant and blogger specializing in server and desktop virtualization.

Converged and hyper-converged infrastructures have similar names, but they take very different approaches and solve different types of problems.

Converged infrastructure (CI) helps remove risk from a large virtualization deployment. Hyper-converged infrastructure (HCI) represents a rethinking of VM delivery, and it aims to simplify operation of a virtualization platform. Either converged or hyper-converged infrastructure appliances can deliver a faster time to value than assembling a virtualization platform from disparate components, but their resulting platforms will have different characteristics.

Converged infrastructure appliances

A converged infrastructure appliance is pre-configured to run a certain number of VMs, and it’s ready to be connected to an existing data center network and power supply from the time it’s built. Vendors build these appliances with components that include a storage array, some servers, network switches and all the required cables and connectors. Vendors assemble and test all of these components before delivering them to customers, and they control every aspect of the build, down to the certified firmware and driver levels for each part.

A small converged infrastructure appliance can take up just half a data center rack, and the largest might be five full racks. Usually, deployment involves professional services from the vendor, and every update requires more professional services. The aim of CI is to take the risk out of deploying a virtualization platform by having the vendor design and support the same platform across multiple customers. It is usually not designed to scale in place; for more capacity, organizations must buy additional complete converged infrastructure appliances.

Hyper-converged infrastructure

Hyper-converged infrastructure appliances are built around a single x86 server, and a group of appliances is configured together as a cluster that organizations can expand and contract by adding or removing appliances.

HCI puts an emphasis on simplified VM management. It usually also includes some sort of backup capability and often a disaster recovery (DR) function. (Many hyper-converged products integrate with the public cloud for backup and DR.)

A significant feature of hyper-converged infrastructure appliances is that in-house IT professionals, rather than vendors’ professional services staff, can complete most functions, from initial deployment to adding nodes to the entire update process.

Converged or hyper-converged?

The first consideration when choosing converged or hyper-converged infrastructure is scale. A half rack of CI appliances will run 100 or more VMs, whereas five racks will run thousands of VMs. CI is not for small offices or small businesses. It’s suited for enterprises.

The second aspect is that CI is about reducing risk, even if that increases cost. All of the professional services that surround CI are areas where the vendor is paid to reduce the customer’s risk. Organizations buy CI for guaranteed outcomes, so they tend to be in risk-averse industries, such as banking, insurance, government and healthcare.

Hyper-converged infrastructure appliances are popular with organizations that do not want to think about the hardware or software underneath their VMs. These organizations want to manage a fleet of VMs with minimal effort because the value is in the applications inside those VMs, rather than the servers or hypervisors on which they run. HCI is ideally suited for scale-out workloads, such as VDI, or for nonproduction uses, such as test and development.

Some hyper-converged infrastructure appliances operate with just one or two nodes at a site. This makes them suitable for remote or branch office deployments, particularly where there are a large number of branches, such as in a retail chain. HCI’s built-in data protection is popular in these scenarios because it reduces the risk of data loss at the branch and, in some cases, allows one branch to provide DR capacity for another.

This was last published in June 2018

Evaluate hyper-converged for high-density data centers

How does your organization deal with increasing levels of density in your IT equipment?

Although rising data center densities don’t seem problematic for hyperscale providers, they create challenges for enterprises. Discover how hyper-converged systems can help.

Clive Longbottom

Independent Commentator and ITC Industry Analyst –ComputerWeekly.com

Modern IT equipment can handle more workloads in a smaller footprint, but this benefit also creates challenges for some enterprises.

A hyperscale cloud provider with full knowledge of its average workload can easily architect a dense compute platform. This is especially true when that average workload is actually millions of different workloads across a massive user base — which is the case for cloud providers like Amazon Web Services (AWS) and Microsoft Azure — or is a predictable set of workloads, such as those that run at Netflix, Facebook or Twitter. A single, logical platform that uses a massive amount of compute, storage and network nodes is fairly easy to create for these providers, since it’s a cookie-cutter approach; when AWS needs to add extra resources, there is very little systems-architecting involved.

However, it is more challenging for an organization with its own dedicated IT platform to support high-density data centers. For example, an organization won’t usually run thousands of servers as a single, logical platform that supports all workloads. Instead, there will most likely be a one-application-per-server or cluster model, virtualized servers that carry one or more workloads and private clouds that carry different workloads with dynamic resource sharing.

Fortunately, hyper-converged infrastructure (HCI) provides a way to better support high-density data centers.

Evaluate HCI — but carefully

HCI vendors engineer server, storage and networking components to work together and offer adequate cooling at the lowest cost, which enables organizations to support high-density data centers in a shorter period of time. However, there are still challenges with power, cooling and workload capabilities.

Power and cooling challenges are fairly easy to address. Standard power distribution systems can support most HCI systems in a data center facility. But if you want to build a platform that supports high-performance computing (HPC), where power densities might exceed existing distribution capabilities, you’ll face concerns. Decide whether expanded power and cooling capabilities in the facility are a worthwhile investment or if a colocation facility can meet these new demands.

It’s more difficult to address complex workload capabilities. If you have applications that run directly on the platform, hard-partition the resources allocated to them and, as in traditional IT models, plan carefully to allow enough headroom for peak workloads.

When you work with applications that are applied to VMs, remember that each VM is a self-contained entity that carries a full stack of resource-hungry services. Containers share many of those services with the host, so a given platform can run a greater number of containers than VMs that carry out the same function.

So, just how many VMs or containers should you run on a given HCI platform? Be wary of figures given out by vendors, as the workloads they use to gather those numbers are often generalized and basic. For example, HCI vendors that sell a system focused on virtual desktop infrastructure might state that upwards of 200 desktops can run on their system. But that might only be true when desktops don’t have more than one OS and when users don’t need to log into them at the same time every day.

Many factors come into play when it’s time to choose an HCI vendor. Here are some important guidelines to keep in mind.

Look for vendors who run HCI systems as a proof of concept, allowing you to put your own workloads onto their platform and apply synthetic loads to gauge how many real-world VMs or containers the platform can take; a crude example of such a load follows these guidelines.

If you choose to build your own highly dense platform, employ experienced systems architects who can ensure that the interdependencies among compute, storage and network resources are carefully balanced and work well together. People with such skills are difficult to find, though — another reason why, outside of HPC, HCI is a better bet for high-density data centers than the build-it-yourself approach.
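
To make the synthetic-load suggestion in the proof-of-concept guideline concrete, here is one crude Python sketch. It stresses only CPU, and the duration and process count are arbitrary; a realistic proof of concept also exercises memory, disk and network.

```python
# A crude synthetic CPU load for proof-of-concept testing: burn all cores for
# a fixed period while you watch how the platform's other tenants behave.
# Duration and worker count are arbitrary examples.
import multiprocessing
import time

def burn(seconds: float) -> None:
    end = time.time() + seconds
    x = 0
    while time.time() < end:
        x += 1          # pointless arithmetic to keep the core busy

if __name__ == "__main__":
    workers = [multiprocessing.Process(target=burn, args=(30.0,))
               for _ in range(multiprocessing.cpu_count())]
    for w in workers:
        w.start()
    for w in workers:
        w.join()
    print("synthetic CPU load finished")
```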

This was last published in September 2017

Pin down these hardware service contract details

What are some tips you can share for a solid hardware support agreement?

A server warranty won’t do much good when every second of downtime counts. Here’s how to hammer out a support agreement that addresses the particular needs of your company.

Brien Posey

Microsoft MVP – SearchDataBackup

The process of purchasing a server is relatively straightforward, but working out the details of a hardware service contract tends to require significantly more effort.

The need for a support contract is often overlooked because many in IT assume the hardware warranty protects the company if any problems occur. Although a warranty offers some assurances, it is often inadequate on its own.

For example, suppose a server’s system board fails, but it is covered under warranty. Each vendor has its own way of handling this type of issue. Typically, the administrator would need to ship the system board to the vendor before it sends a replacement. In contrast, a support contract can provide same-day service for the replacement and professional installation by a certified technician.

Map out the company’s needs

Prior to negotiating a hardware service contract, consider what matters most to the organization. Why obtain a support agreement in the first place? Does the organization require immediate access to hardware parts during a critical outage? Does the IT staff lack the technical skills to handle hardware-level repairs? Make sure that any service-level agreements the organization must adhere to are part of the equation.

Keep these factors in mind during discussions with a support vendor, and make sure you address three key areas in a hardware service contract.

Pin down terms to avoid a lengthy outage

First, negotiate the response time. When a critical issue hits the data center, there should be no doubt about the availability of the support vendor.

Most rapid response support contracts are expensive because they might require the provider to hire extra staff members. One way to reduce this cost is to negotiate a two-tier response time. For example, the contract might require the provider to respond within 48 hours for noncritical outages, but also to have a tech on site within an hour for any outages the organization deems critical.
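
If the organization tracks these commitments in its own tooling, the two tiers can be captured as data. A trivial Python sketch, reusing the example figures above (they are illustrative, not a recommendation):

```python
# A tiny sketch of encoding a two-tier response commitment so on-call tooling
# can surface it automatically; the hour figures mirror the example above.
RESPONSE_SLA_HOURS = {"critical": 1, "noncritical": 48}

def required_response_hours(severity: str) -> int:
    return RESPONSE_SLA_HOURS.get(severity.lower(), RESPONSE_SLA_HOURS["noncritical"])

print(required_response_hours("critical"))      # 1
print(required_response_hours("noncritical"))   # 48
```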

Second, lock down the availability of replacement hardware. It’s pointless to have a contract that requires the provider to respond to a critical outage within an hour if it takes the needed parts a week to arrive.

At one time, organizations relied almost exclusively on physical servers, and the server’s operating system was tied to its specific hardware configuration. Backups could not restore to dissimilar hardware. To account for this, most service agreements required providers to have exact duplicates of the organization’s hardware so it could swap out an entire server if necessary.

Server virtualization makes this less of an issue, but the provider’s inventory remains an important consideration. During an outage, an organization needs to get back online as quickly as possible. As such, a good contract for hardware service should require the support vendor to maintain an inventory of spare parts that match your server hardware. It is also a good idea to make sure the agreement provides loaner servers if the service vendor does not have the required parts immediately available.

The hardware support agreement should address the quantity of repair parts the vendor needs to keep in stock. Multiple servers can break at the same time. The support contract should eliminate any chance a cascading failure would leave the company vulnerable.

Third, consider adding warranty handling to the hardware service contract. This is less critical than the other items, but it is worth considering. Because some of the hardware is covered under warranty, ideally, the support provider should handle the warranty claims.

If a system board fails, then the service vendor should replace that system board with a spare, file a warranty claim and ship the failed part to the manufacturer. This offers the dual benefit of a quick system recovery and frees the IT department from dealing with warranties.

This was last published in April 2018