IT mulls multi-user Windows 10 as Server 2019 drops RDSH

Multi-user Windows 10 could replace RDSH, which isn’t in the first Windows Server 2019 preview. The move has ramifications for app compatibility, XenApp and Windows licensing.

Would you use multi-session Windows 10 in place of RDSH on Windows Server? Why or why not?

Alyssa Provazza

Executive Editor

27 Mar 2018

If RDSH will not be available in Windows Server 2019, IT pros are left wondering where that leaves their remote application delivery strategies.

The first preview of Windows Server 2019, which Microsoft released last week, does not include the Remote Desktop Session Host (RDSH) role. Several IT consultants and analysts said they expect Microsoft to allow for multi-user Windows 10 sessions as a replacement for server-based RDSH — a possibility that drew mixed reactions.

“If they indeed take the session-hosted capabilities out of 2019 … it has to still work somewhere, and the only way to do that is with a multi-user thing — that has to be from Microsoft,” said Cláudio Rodrigues, CEO of WTSLabs consultancy in Nepean, Ont. “At the end of the day, it might be just re-skinning the cat. People will buy into that, I can bet.”

Where’s RDSH?

Microsoft declined to comment on the future of RDSH and the possibility of multi-user Windows 10. The company will disclose more information soon, and Remote Desktop Services (RDS) is “not gone,” said Jeff Woolsey, principal program manager for Windows Server, on Twitter. RDS refers to the group of technologies that provide access to remote desktops and apps. RDSH is the component of RDS that allows multiple remote users to connect to session-based desktops and published apps.

It’s still possible that Microsoft might make the RDSH role available when Windows Server 2019 becomes generally available in the second half of this year, but it’s not likely, experts said. Microsoft had already removed the option for using a GUI to manage RDSH in Windows Server 2016 for the Semi-Annual Channel, leaving that capability available only to the Long-Term Servicing Channel (LTSC). Still, Microsoft could keep RDSH in Windows Server 2019 on the LTSC just for enterprise customers, said Jeff Wilhelm, CTO at Envision Technology Advisors, a solutions provider in Pawtucket, R.I.

“There’s going to have to be some point in time where Microsoft provides some clarity, not just from a technical perspective, but also from a roadmap perspective,” Wilhelm said. “I find it extremely hard to believe they would just unceremoniously cut that feature. I believe there will be a version that will allow RDS.”

Even if Windows Server 2019 does not support RDSH, this issue isn’t likely to affect a lot of organizations right away. Thirty-two percent of respondents in the TechTarget 2018 IT Priorities survey said they’re just moving to Windows Server 2016 this year, and 15% said they plan to move to Windows Server 2012.

“Probably 90% of the market will not even touch [Windows Server] 2019 from an application hosting perspective for minimum a year or two,” Rodrigues said.

How RDSH works

Potential for multi-user Windows 10

VDI takes a single-user Windows approach: one VM running a desktop operating system per user. Multi-user Windows 10 would instead allow multiple user sessions to run on a single VM directly on the client OS.

If multi-user Windows 10 indeed sees the light of day, it would be a similar approach to RDSH on Windows Server — which enables multiple user sessions to run on one server operating system — so customers wouldn’t experience a major change, experts said.

“It’s semantics,” Wilhelm said. “A multi-user OS is a server OS. There’s a lot of similarities between the Windows Server kernel and the Windows [desktop] kernel.”

Desktops delivered from RDSH on shared Windows servers can help IT optimize resources and support more workers. Application compatibility can be an issue, however, if an app update prevents another app on the server from functioning properly, or if a legacy app isn’t supported. Multi-user Windows 10 could address some of those compatibility issues, because applications would run directly on a Windows client VM. IT would also see similar benefits as far as resource optimization. But legacy apps could remain a problem if they’re not supported on the latest version of Windows 10.

“A lot of those issues will still exist,” Rodrigues said. “If people are expecting Windows 10 multi-user to magically solve application issues, they are going to be disappointed.”

Plus, a lot of apps are built to detect when they’re connected to RDSH and Windows Server, to help ensure they can work with the server OS. It would be critical for Microsoft to allow applications to do something similar with Windows 10, Rodrigues said.
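One documented way for an application to detect that it is running in a remote session is the Win32 `GetSystemMetrics` call with the `SM_REMOTESESSION` index. A minimal sketch in Python (the helper name is ours, and the check is only meaningful on Windows; elsewhere it simply reports False):

```python
import ctypes
import sys

SM_REMOTESESSION = 0x1000  # GetSystemMetrics index: "running in a remote session"

def is_remote_session() -> bool:
    """Return True when the current process runs inside a remote (RDP) session.

    Non-Windows platforms have no GetSystemMetrics, so report False there.
    """
    if sys.platform != "win32":
        return False
    return bool(ctypes.windll.user32.GetSystemMetrics(SM_REMOTESESSION))
```

An app that needs to know whether it is session-hosted could branch on this check at startup; whether Microsoft would expose an equivalent signal for a multi-user Windows 10 is exactly the open question Rodrigues raises.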

In the past, Microsoft licensing restrictions prevented Windows shops from running remote multi-user desktop and application sessions directly from the client OS. In July 2017, Microsoft changed its rules and allowed virtualization rights for Windows 10 VMs on Azure. Some observers pointed to that shift as a sign that Microsoft is trying to make multi-user Windows 10 remote sessions possible.


Another option for Microsoft is instead moving customers to Remote Desktop modern infrastructure, which offers RDS running as a service on Azure.

“Most companies now with virtualized apps, they are cloud-based,” said Jack Gold, founder of J.Gold Associates LLC, a mobile analyst firm in Northborough, Mass. “Microsoft is probably seeing less demand for RDSH on local servers, and they are also trying to push their customers to use Azure.”

Multi-user Windows 10 questions remain

Citrix XenApp runs on RDSH, so it’s possible Citrix shops could see some changes with multi-user Windows 10 as well.

“Does this mean XenApp will die and XenDesktop will take over?” said James Rankin, solutions architect at Howell Technology Group, an IT consultancy in the U.K. “From a Citrix perspective, it might be a boon to them.”

Citrix could offer a multi-user XenDesktop capability that works the same way XenApp did, for example, but organizations would need to test that a multi-user desktop behaves with its applications the same way it did with the server-based capability, Rodrigues said.


How to license multi-user Windows 10 also remains a question.

“Would you pay more if you activated the multi-user version?” Rankin said. “Or is it simply part of Enterprise [licensing]?”

The replacement of RDSH with multi-user Windows 10 would have licensing implications for IT at Northern Arizona University, which relies on Windows Server 2012 R2 and 2016 for app delivery.

“We are obviously concerned,” said Tobias Kreidl, desktop computing team lead at the university. “If that support changes, we would have to contemplate a lot of retooling and … figure out how the new licensing model would need to be applied. Right now, I think most people will adopt a ‘wait and see’ approach.”

Microsoft could use the existing RDS licensing server infrastructure and change the pricing and naming to create a model for multi-user client OS sessions, Rodrigues said.

“It’s a chance for them to simplify and in a way unify the licensing message,” he said.

That could help address IT’s calls for improved Microsoft licensing over the years, Gold said.

“Customers are saying, ‘Look, we just can’t deal with this anymore,'” he said.

Clear the confusion around Microsoft RDS

What is the ideal use case for Remote Desktop Services?

Microsoft Remote Desktop Services is often pigeonholed as a session-based virtualization tool. IT can, however, use it for VDI and app virtualization as well.

Robert Sheldon

Microsoft Remote Desktop Services, a platform for implementing virtualization on Windows Server computers, can cause confusion for some.

People often assume that VDI and RDS are mutually exclusive, and that RDS offers only session-based virtualization. Microsoft RDS, however, provides both session-based and VDI capabilities, as well as application virtualization.

Introducing Microsoft RDS

Windows Server includes all the components necessary to implement a scalable RDS deployment that can accommodate distributed and fluctuating workflows. IT teams can deploy session-based virtualization, VDI or both and implement application virtualization in either deployment.

With session-based virtualization, multiple users can connect remotely to a Windows Server computer that’s set up to host a multisession deployment. The users connect to a common server desktop, but each user works within an individual session that provides the resources for everyday tasks.

Session-based virtualization is simpler to implement and maintain than VDI, but it also means users share server resources, which can lead to contention and performance issues. In addition, some applications are not designed for session-based access by multiple users.

With VDI, each user connects to a dedicated virtual desktop running a Windows client operating system, such as Windows 10. The Windows Server computer hosting the virtual desktops uses the Hyper-V hypervisor to abstract the physical compute and storage resources and make them available to the individual VMs that support the desktops.

In this way, IT can provision each VM with the compute and storage resources to support the desktop and its applications, thus avoiding the contention and application issues with session-based deployments.

Organizations can deploy RDS on premises, in the Microsoft Azure cloud or both. Regardless of how IT uses it, Microsoft RDS allows end users to access their virtual desktops and applications from their own computers or mobile devices, whether they work behind the corporate firewall or connect from a public network.

Microsoft RDS desktops and applications

IT can provide users with either a full virtual desktop or virtual applications. A full desktop delivers an experience similar to a local desktop, except users connect to the desktop remotely. Users can configure settings, work with files, install applications and more, the same way they would on their local devices. The goal is to deliver an experience that is as close as possible to working on a local desktop.

For application virtualization with RDS, organizations can use Microsoft RemoteApp to deliver virtual applications to users’ devices rather than providing them with full remote desktops. RemoteApp makes it possible to run an application on a Windows Server but deliver it so it appears to run on the user’s device.

Virtual applications can run within the context of a server session or within a VM running on Hyper-V. The application image is delivered over the network to the user’s computer, where agent software renders the image so the user can interact with the application directly. From the user’s perspective, the application operates as if it’s installed locally.

When people think about Microsoft’s virtual application capabilities, however, they usually think about App-V, a client-based virtualization product that works much differently from RemoteApp. Unlike RemoteApp, App-V runs an application in a sandbox on the endpoint separate from locally installed applications.

Microsoft RDS components

Many Microsoft RDS components are the same whether IT deploys session-based virtualization or VDI. IT implements the components as server roles on the Windows Server computers that make up the RDS platform.

  • Remote Desktop Session Host (RDSH) makes it possible to host session-based desktops and applications organized into collections that can span multiple RDSH servers. Because multiple users share the same RDSH server, session-based deployments are better suited to smaller environments.
  • Remote Desktop Virtualization Host implements VDI deployments made up of VMs running on Hyper-V. Each VM is configured with one of the supported Windows guest OSes, which include the Enterprise editions of Windows 10, Windows 8.1, Windows 8 and Windows 7 SP1. All VMs within a collection must run the same guest OS. IT can define multiple collections to accommodate heterogeneous virtual desktops.
  • Remote Desktop Connection Broker is responsible for managing all the incoming remote connections from the client devices to the RDSH servers. The Connection Broker can handle up to 10,000 concurrent login requests, balancing the load across multiple RDSH servers. It requires SQL Server or Azure SQL Database to store deployment information such as connection states and user-host mappings.
  • Remote Desktop Gateway enables users to connect securely to their virtual desktops and applications on public networks. The gateway establishes an encrypted Secure Sockets Layer tunnel between the user’s device and the gateway server. It also authenticates the user into the deployment and passes traffic between the user’s device and the virtual resources.
  • Remote Desktop Web Access enables users to access their virtual desktops and applications through a web portal, which provides the structure to publish desktops and applications to client devices. The role uses Hypertext Transfer Protocol Secure to provide encrypted connectivity between the client devices and the virtualized resources.
  • Remote Desktop Licensing provides the structure to manage the client licenses users need to access virtualized resources.
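The Connection Broker’s two core jobs, reconnecting users to their existing sessions and spreading new connections across RDSH hosts, can be sketched in a few lines of Python (a conceptual toy under invented names, not Microsoft’s implementation):

```python
def broker_pick(user, existing_sessions, session_counts):
    """Pick an RDSH host for an incoming connection, broker-style.

    existing_sessions maps user -> host holding a disconnected session;
    session_counts maps host -> number of active sessions.
    """
    # Reconnect users to an existing session where one exists.
    if user in existing_sessions:
        return existing_sessions[user]
    # Otherwise send the new connection to the least-loaded host
    # (ties broken by sorted host name, so the choice is deterministic).
    return min(sorted(session_counts), key=session_counts.get)

hosts = {"rdsh-01": 42, "rdsh-02": 17, "rdsh-03": 25}
print(broker_pick("alice", {"alice": "rdsh-03"}, hosts))  # rdsh-03: reconnect
print(broker_pick("bob", {}, hosts))                      # rdsh-02: least loaded
```

The real broker also persists connection states and user-host mappings in SQL Server or Azure SQL Database, as noted above, so that any broker instance can make the same decision.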

Smaller organizations can combine roles onto one server depending on the workloads. Larger organizations can deploy each role to multiple dedicated servers to support scale-out scenarios. The RDS servers can run on either bare metal or within VMs.

A Microsoft RDS deployment also usually includes file storage for persisting configuration settings, personalization data and other resources. In addition, RDS requires Active Directory (AD) or Azure AD to control access to the virtualization deployment.

The final piece of the puzzle is the Remote Desktop client software that runs on the users’ devices. Microsoft provides clients for Windows, Apple macOS, Apple iOS and Google Android devices. The RDS platform utilizes the Remote Desktop Protocol to communicate between the servers and clients. Microsoft also provides the Remote Desktop web client, which lets users access their RDS desktops and applications with a compatible browser.
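These clients typically read their connection settings from a plain-text .rdp file; a minimal hand-written example might look like this (the host name is a placeholder, and only a few of the many supported settings are shown):

```
screen mode id:i:2
full address:s:rds-gateway.example.com
gatewayhostname:s:rds-gateway.example.com
remoteapplicationmode:i:0
```

Setting `remoteapplicationmode:i:1` along with a RemoteApp program name switches the connection from a full desktop to a published application.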

Make sense of Microsoft RDS

Despite its versatility, Microsoft RDS is primarily used for session-based deployments, with IT turning to other services for their VDI and virtual application needs. IT pros might do well to consider Microsoft RDS for their VDI and virtual application deployments, especially if they’ve already committed to Windows Server. Of course, it also depends on their virtualization requirements and what tools they already have in place.

This was last published in December 2018

Sharpen your DDoS detection skills with the right tool

Which type of tools do you employ for DDoS defense and why?

DDoS detection and prevention tools are more sophisticated than ever. But finding the right one for your company takes studying and asking vendors the right questions.

Kevin Beaver

Principle Logic, LLC

Distributed denial-of-service attacks are some of the most serious security attacks in modern computing, yet we tend to know very little about them. DDoS attacks are, in essence, launched by multiple systems — often compromised by malware — that target victim systems like servers and network infrastructure devices, as well as specific services such as web applications and the Domain Name System. Designed to prevent legitimate access to computing resources, DDoS attacks can start with something as seemingly benign as a handful of malformed packets and end up flooding their target systems with several hundred gigabits — or more — of traffic per second that the system simply cannot handle.

As with all facets of IT and security, there is a variety of DDoS detection tools and technology available to minimize the impact of DDoS attacks on your organization, regardless of its size. But selecting the correct product requires an in-depth understanding of the various offerings, including knowledge of what each can and cannot do for your particular system and situation.

DDoS detection, protection tools explained

Professionals in the market for anti-DDoS tools can surf the web or walk expo floors of shows, such as the RSA Conference, and quickly see that there are myriad security product and service vendors promising to protect your organization from DDoS attacks. Some DDoS prevention technologies are as simple as traditional load balancers and network firewalls; others are more modern, like next-gen firewalls that have a greater focus on the application layer. DDoS features built into these network security controls have been around for years and are very solid. They might be beneficial for warding off small to medium-sized DDoS attacks, but if the going gets really rough with a full-fledged attack that clogs the target with several hundred megabits — or more — of traffic per second, you’re likely going to need a dedicated DDoS protection system. As with most security controls, the more granular you go with purpose-built tools, the better.

How DDoS tools work

DDoS detection and prevention products come in two main flavors: on-premises and cloud-based. Many DDoS vendors have the ability to provide hybrid failover features involving both the cloud and on-premises equipment. These tools work by detecting and rejecting, or simply absorbing, DDoS attacks, all in real time. The impact of the various DDoS exploits such as ping floods and fraggle attacks, slow HTTP attacks and the recently popular Mirai botnet can all be minimized by these tools.

On-premises DDoS detection and prevention tools are in-line appliances that monitor and respond to denial-of-service attacks. These products are great for internet service providers and managed security service providers. They also scale nicely for large data centers. Even small businesses with high visibility that are being targeted can benefit from this approach. On the other hand, cloud-based DDoS services are application and content delivery networks that use cloud technologies to spread access and resources to protected systems across the globe, rather than leaving the system available in only one location, where it is more vulnerable.

Where cloud-based services really shine is their ability to scale to accept the impact of extremely large DDoS attacks, which can have a quick and tangible impact on the systems under attack. All that’s typically required to set up cloud-based DDoS services are some simple domain name system record changes — i.e., A, CNAME and nameservers. You could have this type of service up and running in mere minutes after detecting a DDoS attack. Purchasing and installing an in-house appliance takes a bit longer.
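Those record changes often amount to repointing a hostname at the provider’s scrubbing network. In zone-file notation, the before and after might look like this (names and addresses are invented, with 203.0.113.0/24 being a reserved documentation range):

```
; before: traffic goes straight to the origin server
www    300  IN  A      203.0.113.10

; after: traffic is routed through the DDoS provider's network
www    300  IN  CNAME  example.scrubbing-provider.net.
```

The short 300-second TTL is deliberate: it lets the cutover (or a rollback) propagate within minutes.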

Features to look for

Cloud-based services tend to be very popular given their scalability and ease of setup. Products designed for on-premises DDoS detection and protection can work just as well, but they might be costlier up front. In order to implement the best features, you need to step back and think about how your organization is at risk. Simply going with a service or product because of a nice website or sales presentation is not the best approach. You have to understand your environment and the threats it faces, along with the business impact of DDoS attacks and how you might minimize the risks. You need to be able to answer the following questions about your own environment:

  • What are our current denial-of-service risks? Are they tangible or just theoretical?
  • Is there anything we can do with our current setup and tools to minimize those risks?
  • How will DDoS protection tools integrate with our business continuity needs or with our incident response program?
  • Will we have to give up doing something we’re currently doing in order to take on yet another tool? What will that be? Will it require hiring new staff?

Once you have the necessary background information, you need to consider what, in an ideal world, your DDoS protection measures would consist of — cloud-based, on-premises or maybe a hybrid of both?

In addition, you need to ask the following questions of prospective vendors:

  • How do I know that your product or service will meet our needs? (This question ensures that they’re asking you the proper questions and fully understand your priorities.)
  • If we’re ever caught up in the middle of a DDoS attack, will your support personnel, developers and consultants be available to help us work through it all?

Just be sure to vet these companies and choose a solution in advance. Even though cloud-based DDoS services are simple and quick to set up, you don’t want to have to scramble and do that in the middle of an attack. If you think an on-premises product is better, then get a demo unit and try it out. Another thing to consider is contacting your internet and cloud service providers — again, in advance — and see how they can help with DDoS attacks as well.

How to approach DDoS tool selection

When weighing the merits of an anti-DDoS tool, be sure to ask these questions:

  • Does a cloud-based or on-premises tool make the most sense? Or should you consider a hybrid option?
  • Will the new tool or service affect those you already have in place?
  • Will you need more staff to manage this new tool or service?
  • What does the vendor promise in terms of support in case of a DDoS attack?
  • How will the tool or service fit into your existing incident response plan? (And while you’re at it, make sure your incident response plan is up to date.)

Bottom line

There are many moving parts associated with DDoS detection and protection. Most people don’t know their current level of resilience. Why? Lack of information and feedback. It just doesn’t make good business sense to launch such an attack against yourself, nor would it be a simple task. You may never know just how things will go down. Still, by using DDoS protection tools and services, you put yourself in the catbird seat for when the going gets rough.

Just be careful. You cannot take a “buy, implement and forget it” approach to DDoS protection tools. Nor can you simply absolve yourself of this threat when your systems are hosted in the cloud or elsewhere outside of your environment. The most important aspect of DDoS protection goes back to what’s stated above: Make the decision in advance. This means selecting a technology and vendor to call on once the attacks begin or to have in place so that your DDoS response will engage automatically as soon as you need it to, which will truly minimize the impact and risk.

DDoS attacks are no different than other security incidents. Make sure that you fully address DDoS in your incident response plan as well as any applicable security policies and standards. Prevention is key. There’s no good excuse for having a vulnerability that facilitates a DDoS exploit in a web application, server or — heaven forbid — an internet of things device. This low-hanging fruit can be largely eliminated by proper and thorough vulnerability and penetration testing.

Beyond DDoS detection and prevention, you need to know your network and have good visibility into your environment — two things that are missing in way too many organizations. Once your security program reaches a level of maturity where all of this is in place, you can rest better knowing you’re prepared and that you’ll just have to tweak things as needed moving forward. Anything less and, well, who knows what will happen?


This was last published in March 2017

IT shops find their reasons to upgrade to Windows Server 2016

What drove your decision to upgrade, or not, to Windows Server 2016?

Selected technical features and the twilight of technical support inspire more Windows Server 2008 and 2012 IT shops to upgrade to Windows Server 2016.

Ed Scannell

Senior Executive Editor

08 Mar 2018

A growing number of IT shops have gained enough confidence to upgrade to Windows Server 2016, including some Windows Server 2008 R2 users who will skip Windows Server 2012 to access specific server OS features or preserve their technical support.

Some 32% of respondents to the TechTarget 2018 IT Priorities survey said they plan to roll out Windows Server 2016 over the course of this year, compared to only 15% who plan to implement Windows Server 2012. This compares to 20% who said they would roll out Windows Server 2012 in 2017 and 29% who planned to deploy Windows Server 2016.

IT Priorities survey results

In the 2016 IT Priorities survey, some 38.6% of respondents — more than double the share in the 2018 survey — said they would implement Windows Server 2012 that year. Only a negligible number said they were interested in deploying Windows Server 2016 that year, because the finished product was still in beta testing and wasn’t officially released until September.

Upgrade Windows Server for features, tech support

Respondents gave several different reasons for their decision to upgrade to Windows Server 2016. Some cited improvements to specific features such as Server Message Block 3.1.1, or to Storage Replica, which now supports asynchronous stretch clusters and RoCE V2 RDMA networks.

Others had more general long-term reasons to move up. Some Windows Server 2008 R2 users, for example, don’t want to pay for technical support when Microsoft officially ends support for it in mid-January 2020.

“[Microsoft is] counting down to Windows Server 2008 and R2 support ending,” said one Boston-based respondent with a large telecommunications company, referring to a recent bulletin issued by the company. “Two years may seem like it will give people enough time, but it comes at you pretty quickly.”

Some analysts agree, and advise their IT clients that use both Windows Server 2008 R2 and Windows Server 2012 R2 to formulate migration plans to Windows Server 2016 and associated budget adjustments before the end of this year.

“With the current security landscape, you don’t want to be on an unsupported server OS,” said Jim Gaynor, research vice president at Directions on Microsoft. “The specter of being unpatched works in Microsoft’s favor these days.”

Windows Server 2012 R2 users are on the clock as well, as the product exits mainstream support in October 2018. While Server 2012 has another five years of extended support after that, Microsoft generally adds no new features to a product once its mainstream support ends.

“Look at what’s happening in 2020 with connectivity to Office 365 services,” Gaynor said. “We have no indication, official or otherwise, that similar policies will come to [Windows] Server, but it’s fairly obvious that Microsoft doesn’t want customers sitting on the same version of Server for years.”

It’s unclear how many Windows Server 2008 R2 users will leapfrog Windows Server 2012 to upgrade to Windows Server 2016, but it’s likely a small fraction. For those who do, however, costs associated with additional server hardware and IT training would prove to be a worthwhile investment, according to one survey respondent.

Next Steps

Prepare for the end of support for Windows Server 2008 and R2

Using Diskpart to create, extend or delete a disk partition

How have you used Diskpart in your Windows environment?

Diskpart is a disk management tool designed to create, delete and resize hard drive partitions, and assign or reassign drive letters in Windows client and server operating systems.

Posted by: Tim Fenner


For basic disk operations in Windows Server, administrators can use the Disk Partition Utility, or Diskpart, a command-line interpreter designed as a disk management tool.

Administrators can use Diskpart to scan for newly added disks, but it can also create, delete and resize hard drive partitions, and assign or reassign drive letters.

Note: Any text in parentheses is a comment only; do not type it along with the commands given.

Creating a partition using Diskpart

Using Diskpart to partition your disk is very beneficial for increasing the I/O performance of hard disks newly added to a RAID array. The documentation for many server applications, such as Microsoft Exchange Server, recommends using Diskpart to create primary or extended partitions. A primary partition can be used as the system partition; an extended partition can only be used for additional logical drive assignments.

To create a partition:

  1. At a command prompt, type: Diskpart.exe
  2. At the DISKPART prompt, type: LIST DISK (Lists disks found. Make note of the drive number you wish to manipulate.)
  3. At the DISKPART prompt, type: Select Disk 1 (This selects the disk; make sure to type in the disk number from step two.)
  4. At the DISKPART prompt, type: CREATE PARTITION PRIMARY SIZE=10000
    (Change the word PRIMARY to EXTENDED to create an extended partition. If you do not set a size — in megabytes — such as the above example for 10 GB, then all available space on the disk will be used for the partition. Seriously consider adding the following option to the end of the above command if you are using RAID — especially RAID 5 — to improve disk I/O performance: ALIGN=64.)
  5. At the DISKPART prompt, type: ASSIGN LETTER=D (Choose a drive letter not already being used.)
  6. At the DISKPART prompt, type: Exit
  7. Use the Command Prompt format command, Disk Administrator or any disk format utility to format the drive — typically using NTFS, of course.
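The interactive steps above can also be captured in a script file and replayed with `diskpart /s`. A small Python helper that emits such a script (the disk number, size, drive letter and function name are our own example values):

```python
def diskpart_create_script(disk, size_mb, letter, align_kb=None):
    """Emit the diskpart commands from steps 2-6 above as a single script."""
    create = f"CREATE PARTITION PRIMARY SIZE={size_mb}"
    if align_kb:
        # ALIGN is the option suggested above for RAID arrays.
        create += f" ALIGN={align_kb}"
    return "\n".join(
        [f"SELECT DISK {disk}", create, f"ASSIGN LETTER={letter}", "EXIT"]
    )

# Save the output to a file, then replay it non-interactively from an
# elevated Windows prompt:  diskpart /s create.txt
print(diskpart_create_script(1, 10000, "D", align_kb=64))
```

Scripting the commands this way makes the partition layout repeatable across servers and keeps the ALIGN option from being forgotten.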

Extending a partition using Diskpart 

When it comes to adding space to a partition or volume, this method is superior to configuring dynamic disks. Dynamic disk extensions only concatenate the newly added space, meaning they merely add the disk space to the end of the original partition without restriping the data.

Concatenation isolates performance within each partition and does not offer fault tolerance when the partition is configured in a RAID array. Diskpart allows you to restripe your existing data. This is truly beneficial when the partition is set up in a RAID array, because the existing partition data is spread out across all the drives in the array, rather than just adding new space to the end, like Disk Administrator.

Extend a volume using Diskpart.

Microsoft’s official position is you cannot use Diskpart to extend your system or boot partition. However, this tip on increasing the capacity of your system volume suggests otherwise.

Note: If you try it or any other method, make sure you have a full backup.

To extend a partition:

  1. Verify that contiguous free space is available on the same drive and that the free space is adjacent to the partition you intend to extend, with no partitions in between.
  2. At a command prompt, type: Diskpart.exe
  3. At the DISKPART prompt, type: Select Disk 1 (Selects the disk.)
  4. At the DISKPART prompt, type: Select Volume 1 (Selects the volume.)
  5. At the DISKPART prompt, type: Extend Size=10000 (If you do not set a size, such as the above example for 10 GB, then all available space on the disk will be used.)
  6. At the DISKPART prompt, type: Exit
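As with partition creation, the extend steps can be captured in a diskpart script for repeatable use. This is a minimal sketch under the same assumptions as before — build_extend_script is a hypothetical helper, and the disk, volume and size values are examples:

```python
# Sketch: the extend steps above as a non-interactive diskpart script.
# Omitting SIZE makes diskpart use all contiguous free space.
def build_extend_script(disk=1, volume=1, size_mb=None):
    lines = [f"SELECT DISK {disk}", f"SELECT VOLUME {volume}"]
    lines.append(f"EXTEND SIZE={size_mb}" if size_mb else "EXTEND")
    lines.append("EXIT")
    return "\n".join(lines)

# Save this output to a file, then run:  diskpart /s extend.txt
print(build_extend_script(disk=1, volume=1, size_mb=10000))
```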

Note: It is not necessary, but I normally reboot the server to make sure all is well from a startup standpoint.

Deleting a partition using Diskpart

Note: You cannot delete an active system or boot partition, or a partition with an active page file.

  1. At a command prompt, type: Diskpart.exe
  2. At the DISKPART prompt, type: Select Disk 1
  3. At the DISKPART prompt, type: Select Partition 1
  4. At the DISKPART prompt, type: DELETE partition
  5. At the DISKPART prompt, type: Exit

Wiping a disk using Diskpart

This operation deletes all data on the disk.

  1. At a command prompt, type: Diskpart.exe
  2. At the DISKPART prompt, type: Select Disk 1
  3. At the DISKPART prompt, type: CLEAN ALL (The CLEAN ALL command removes all partition and volume information from the selected disk and zeroes every sector, so it can take a long time on large drives.)
  4. At the DISKPART prompt, type: Exit

Final note: Here are four important things to keep in mind regarding Diskpart.

  • Do not use DISKPART until you have fully backed up the hard disk you are manipulating.
  • Exercise extreme caution when using DISKPART on dynamic disks.
  • Check with your disk vendor before using Diskpart.
  • The Diskpart utility is built into Windows Server 2003 and later; on Windows 2000, install the Windows Resource Kit to get it.


This was last published in July 2016

How to approach a disk fault under Windows Server 2012 R2

What are some of the procedures you follow when repairing a disk fault?

Windows administrators have several options to identify a faulty disk in the array, but should heed best practices when replacing the drive.

Stephen J. Bigelow

Senior Technology Editor

Although the actual disk fault management process will vary between organizations, depending on the policies, tools and personnel expertise available, there are some common elements of the disk replacement process that Windows administrators can follow.

First, you need to identify the faulty disk.

Windows Server 2012 R2 provides several resources for disk fault identification: Event Viewer logs, the Physical Disks report in Server Manager, the alerts dialog in System Center Operations Manager (SCOM) and Windows PowerShell queries. Tools such as SCOM can report the specific location of a disk fault — slot, tray and position — while other tools report a disk failure only as a physical disk number or globally unique identifier (GUID). GUIDs can be translated into physical disk numbers with the PowerShell Get-PhysicalDisk cmdlet.
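That GUID-to-disk-number translation is easy to script. The sketch below assumes you capture Get-PhysicalDisk output as CSV (for example, Get-PhysicalDisk | Select-Object DeviceId, UniqueId | ConvertTo-Csv -NoTypeInformation) and then match the GUID from an alert against it; disk_number_for_guid is a hypothetical helper, and the sample rows are invented:

```python
# Sketch: map a disk GUID/UniqueId from an alert to its physical disk
# number, using CSV output captured from Get-PhysicalDisk.
import csv
import io

def disk_number_for_guid(csv_text, guid):
    """Return the DeviceId whose UniqueId matches guid (brace- and
    case-insensitive), or None if no row matches."""
    target = guid.strip("{}").lower()
    for row in csv.DictReader(io.StringIO(csv_text)):
        if row["UniqueId"].strip("{}").lower() == target:
            return int(row["DeviceId"])
    return None

# Invented sample data in Get-PhysicalDisk's CSV shape.
sample = ('"DeviceId","UniqueId"\n'
          '"0","{11111111-0000-0000-0000-AAAAAAAAAAAA}"\n'
          '"2","{22222222-0000-0000-0000-BBBBBBBBBBBB}"\n')
print(disk_number_for_guid(sample, "22222222-0000-0000-0000-BBBBBBBBBBBB"))  # 2
```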

After determining which disk has failed, find it in the storage array enclosure. Many storage arrays provide LEDs that blink when a corresponding disk fails. If not, technicians will need extra time to find the correct physical disk or serial number.

Next, many technicians will first check the disk connections by reseating the troubled disk in its slot and checking its cable connections. If this works, clear the blinking LED by resetting the physical disk or removing it from the storage pool with a PowerShell PhysicalDisk cmdlet, such as Reset-PhysicalDisk. If disk problems persist, replace the disk using the instructions for the particular storage array. Typical best practice states the new disk’s characteristics should match the failed disk’s to prevent performance mismatches that might cause storage problems later. Replace the physical disk before removing the failed disk from any storage pool configuration, and give the new disk a chance to rebuild; otherwise, there may be data loss.

Make sure that each identical disk in the group or array is using the same firmware version. Once the new disk is in place, update its firmware to the latest accepted version used on the other disks in the group or array. Remember that each new firmware version can introduce changes in timing and access. While updates should improve the disk itself, firmware version differences can also introduce performance differences that might trigger unexpected or intermittent storage errors. Tools such as Server Manager or Windows PowerShell can report on disk firmware versions, and updates should follow the disk manufacturer’s instructions.

At this point, use Server Manager or Windows PowerShell to add the new physical disk to the storage pool, and then retire and remove the old disk from the storage pool. In the event of a complete disk failure, the failed disk should have been retired automatically. If the disk is being replaced pre-emptively — such as in response to intermittent problems — retire the disk first through PowerShell.

As a final step in disk fault management, technicians can run a storage health test to verify the storage pool or cluster, and then dismiss any alerts.


This was last published in November 2015

Edge computing architecture helps IT support augmented reality

Augmented reality benefits greatly from reduced latency, which makes edge computing a perfect partner in the data center.

Erica Mixon

Site Editor

LAS VEGAS — Organizations are finding real use cases for augmented reality, but IT infrastructure admins must first implement technologies that can help support it.

Edge computing, which processes data close to its source to reduce latency, is one way to support augmented reality (AR). IT can implement edge computing architecture by deploying edge servers, adopting micro data centers or both. Here at Gartner’s IT Infrastructure, Operations and Cloud Strategies Conference, admins discussed how AR and edge computing work together.

“A lot of [AR] has to do with visual data processing,” said Christopher Hadley, a compute architect at Georgia-Pacific, a paper manufacturer based in Atlanta. “You don’t want to have a lag on that sort of stuff, so putting the compute closer to where your [devices] are, the better.”

AR use cases

Manufacturing and retail are two major verticals that could benefit from AR. Georgia-Pacific is working on edge and AR initiatives within its manufacturing facilities, many of which do not allow humans in them, Hadley said.

With AR, Georgia-Pacific can use lesser-skilled workers to perform maintenance tasks in the facilities that do allow humans, rather than having an engineer on site at all times. Instead, an engineer works remotely using an augmented reality platform to instruct that worker how to perform more complicated tasks.

One consumer goods manufacturer also considered AR capabilities within manufacturing plants, according to a director of computing services at the company, who requested anonymity because they were not authorized to speak to the media. The organization has 50 manufacturing sites, many of which aren’t located in areas where they can get a seasoned engineer on site.

“If we’re running an [assembly] line and that line isn’t running as we expect, it’s a lot easier to have somebody with AR go through and be able to fix that in real time,” the director said.

There are also use cases for AR within the data center itself, but that is more of a concept than a reality today, said Jeffrey Hewitt, a research vice president at Gartner. Data center infrastructure management (DCIM) tools already provide a visualization component to help admins with facilities management capabilities, such as cable management, server identification and airflow management. DCIM vendors will likely tap into AR capabilities in the future, Hewitt said.

Edge computing architecture and AR options


There are a variety of challenges around implementing edge computing architecture and AR, because the market is still in its incubation phase. That means IT needs to find creative methods for deployment.

“If you’re going to have an AR capability, the factor to make that more challenging — or even keep it from working — would be latency,” Hewitt said. “AR has to happen rapidly. It can’t have delays.”

Georgia-Pacific deploys micro data centers and is looking at hardened, thick edge devices — essentially virtual machines in a micro data center’s virtual server farms — to achieve an edge computing infrastructure, Hadley said.

The process for choosing edge devices is based on an organization’s particular use case. Georgia-Pacific, for example, wants to put cameras and sensors in its outdoor woodyards to gauge information such as water levels and the volume of raw material. Outdoor woodyards, as well as the company’s manufacturing plants that face high temperatures, humidity and vibration, are considered harsh environments.

“We’re looking at some … very hardened edge servers,” Hadley said. “They’re completely self-contained, waterproof and don’t require fans or a lot of power. We want to get a bunch of those and spread them out to where the loads are.”

Vendors such as Dell EMC, Logic Supply and Hewlett Packard Enterprise offer these types of products in the form of ruggedized servers powered by Intel Xeon.

The consumer goods manufacturer deploys micro data centers in a DIY configuration at its manufacturing plants, and it is considering using Microsoft HoloLens for AR, the director of computing services said.

The organization uses Infosys, a managed services provider, and wanted the plants to have the same look and feel as the main data center. Like the primary data center, the micro data centers are VMware environments that run Cisco and NetApp’s FlexPod converged infrastructure.

“We’ve taken what we run in our data centers … and made a miniature size of that for our manufacturing sites,” he said. “It’s technologies that [our managed services provider] already knows. We needed to ensure that they would support it.”

This was last published in December 2018

Are hypervisor tools products or features?

Did your hypervisor come packaged with other software or did you buy it as a stand-alone product?

As converged and hyper-converged infrastructures grow in popularity, hypervisors function more like features of overall IT architectures than they do as stand-alone products.

Scott D. Lowe

ActualTech Media

You still have a choice about which company provides your hypervisor software, but there’s no longer a question as to whether you need one; you do. In almost all ways, the hypervisor shifted from being a stand-alone product that revolutionized IT to a feature on which other products depend.

In the early 2000s, a brand new product hit the market. Borrowing concepts from the eras of computing that came before, VMware was the first company that really cracked the x86-based virtualization code and created the market for hypervisor tools. That innovation led to a massive restructuring of the data center and reshaped entire markets as legacy tools struggled to maintain currency in a swiftly changing enterprise IT landscape.

On the surface, today’s data center is still recognizable to a 1990s-era admin, but the mechanics of modern servers, storage and networking are completely different. This dramatic shift is due primarily to virtualization taking root as a core capability across the resource spectrum.

This shift, from the early 2000s to now, has also changed the perception of the role of hypervisor tools, the layer provided by products such as VMware vSphere, Microsoft’s Hyper-V and open source KVM. As transformational as the hypervisor has been, it’s hardly a surprise that other companies would also provide such a product. For years, companies released new hypervisor product versions, adding new capabilities and enabling the virtual machines that run atop the hypervisors to scale to new heights with each new release.

In an era when the hypervisor was actively changing, constantly evolving and still coming into its own, the fact that it was a stand-alone product made sense. In fact, an entire virtualization ecosystem sprung up around hypervisor tools, eventually leading to IT innovations such as hyper-converged infrastructure.

The thrill is gone

Eventually, the hypervisor tools market became less exciting. Although vendors that created hypervisors continued to bolster those products with new capabilities, they could only take such a product so far. In fact, the leading hypervisors on the market aren’t that different under the hood. They all provide the same kinds of capabilities. The differentiation among the offerings is now driven by the products that surround the hypervisor.

VMware realized that vSphere was quickly becoming a commodity, and products from Microsoft — and even the open source community — could ultimately eclipse the hypervisor giant with good-enough, cheaper products. So VMware continued its efforts to expand into other areas of the data center. Today, we see the fruits of that labor: vSAN, NSX and the various cloud management and orchestration tools the company produces.

Meanwhile, companies such as Nutanix, Scale Computing, Cloudistics and Stratoscale build their products around the open source KVM hypervisor tools, and modify them to meet their platforms’ needs.

Product vs. feature

As the hypervisor matured and penetrated more deeply into organizations, it became an expected feature in infrastructure products, particularly HCI systems. Hypervisors are often sold as a product, but they are absolutely a feature on which other products rely. The hypervisor is a tightly integrated component that is a part of vendors’ larger platform visions.

Although the overall NSX vision is to add support for hypervisors other than vSphere, products such as vSAN and NSX are geared toward VMware shops. They may be positioned as stand-alone offerings, but in practice they function as add-ons for vSphere: If you run Hyper-V today and want vSAN, you must shift to vSphere for that to happen. In many ways, VMware’s complementary offerings have commoditized and feature-ized vSphere.

This is not a negative. The more new products that companies such as VMware, Nutanix, Scale Computing and others can tie to their distinct hypervisor software choice, the more revenue they can generate and the more closely they can link customers to an ongoing relationship with their own hypervisors.

This was last published in December 2017

Test your Storage Spaces Direct hyper-converged system smarts

How would your organization use the hyper-converged Storage Spaces Direct features on Windows Server 2016?

Windows Server 2016 took Microsoft into the HCI game, with hyper-convergence features based on Storage Spaces Direct, the upgrade from Windows Server 2012’s Storage Spaces feature.

Rodney Brown

Senior Site Editor

When Microsoft released Windows Server 2016, it upgraded Storage Spaces, resulting in Storage Spaces Direct hyper-converged infrastructure features that brought the server software giant into the HCI market.

Because Microsoft doesn’t sell HCI appliances, using Storage Spaces Direct and its new features to implement an HCI system is more like a DIY project for an organization already running a Windows Server-based data center. That means there are plenty of options and features to choose to implement or not, depending on what you are asking your HCI system to do.

Some options are more suited to a converged infrastructure than HCI, and Windows Server 2016 can do that as well. And while Microsoft is working with partners to make sure their hardware is compatible with Storage Spaces Direct and its features, the HCI software is still all Microsoft’s — specifically, Windows Server 2016.

To prepare for a planned Storage Spaces Direct hyper-converged infrastructure deployment, take our quiz, and test your knowledge.


In the converged implementation of Storage Spaces Direct, where do the virtual machines (VMs) reside?

  • Network-attached servers
  • Hyper-V servers
  • Scale-out servers
  • Azure cloud service


Which edition of Windows Server 2016 offers the Storage Spaces Direct hyper-converged features?

  • Windows Server 2016 Cloud
  • Windows Server 2016 Datacenter
  • Windows Server 2016 Home
  • Windows Server 2016 Standard


What does Microsoft call its program for validating products and designs with technology partners in support of software-defined data centers?

  • Windows Server Software Defined
  • Microsoft Virtual Datacenter
  • Windows Server Software Defined Datacenter
  • Windows Server Virtual Datacenter Reference Architecture


What is the name of Microsoft’s in-development, web-based remote server management tool that can be used to manage a Storage Spaces Direct hyper-converged system?

  • Project Redmond
  • Project Anchorage
  • Project Cambridge
  • Project Honolulu


A Windows Server 2016 Storage Spaces Direct hyper-converged system can support nonvolatile memory express (NVMe) SSD drives.

  • True
  • False


Using remote direct memory access (RDMA)-capable adapters, Storage Spaces Direct creates what with idle Intel Xeon processor cores?

  • A compute pool
  • A software-defined networking switch
  • A VM host
  • A virtual GPU


What is required to separate an HCI from a converged infrastructure in a Storage Spaces Direct implementation?

  • Every Windows Server node must run Hyper-V
  • Uses converged Ethernet based on Server Message Block 3.0 (SMB3)
  • Includes data reduction and native replication functions
  • Uses the Cluster Shared Volumes file system

This was last published in May 2018

Do I need an information systems management degree?

Has your degree or your on-the-job experience been more valuable in your job seeking endeavors, and why?

IS management degrees are table stakes for many IT job seekers. Hands-on experience and specific certifications make applicants more desirable.

Stephen J. Bigelow

Senior Technology Editor

The importance and value of a formal degree, such as an information systems management degree, has become a matter of debate for modern businesses and IT staff. The short answer is still yes; a degree is a good launching point for a career in IT.

Job candidates who possess a formal degree in information systems management can demonstrate the successful completion of a prescribed course of study that forms the foundation of a professional career that spans a wide range of potential task areas.

An information systems management degree program includes coursework such as the organizational use of information systems (IS), basics of IS, ethics in IT, software and hardware infrastructure concepts, database concepts, enterprise IT architecture, IS project management, systems analysis, business continuity planning, trends and applications in IS, and more. This offers a broad set of basic knowledge that can lead to opportunities in varied IT roles.

For example, IT job seekers with an IS management degree can become involved in systems administration, network administration, systems and network security, developing and deploying data center infrastructure, and working with data, including big data analytics. Information systems management degrees are often related to job titles such as information systems manager, systems analyst, application analyst, data analyst/scientist, database administrator, IT technical support officer and systems developer.

But a formal degree — especially at the associate and bachelor’s degree levels — is just a launching pad. For employers and job seekers alike, knowing the basics isn’t enough to be successful. It is virtually impossible for a Bachelor of Science degree alone to adequately prepare a candidate to step into a role that involves the many complex technologies and services that modern businesses use in day-to-day operations. That takes practical work experience.

In practice, an information systems management degree is table stakes for any candidate seeking an entry-level IT position. But many of the routine tasks expected of that job candidate are learned on the job, such as using Microsoft System Center Operations Manager to administer a set of Windows Server 2016 systems according to the organization’s established policies and practices.

As tools, frameworks, systems, and even policies and practices are upgraded and replaced over time, IT professionals must continue to advance their knowledge through practical work experience and continuing education.

It is this persistent demand for continuing IT education that has gradually led businesses to de-emphasize the role of traditional college degrees in favor of more industry-focused and vendor-specific certifications.

An information systems management degree will get a candidate in the door, but it is unlikely that an IT professional will make advancing that degree a job priority. Instead, the IT employee will pursue one or more relevant industry certifications, such as a Cisco Certified Network Associate (CCNA) variant, such as CCNA Data Center; a Microsoft Certified Solutions Expert (MCSE) variant, such as MCSE: Cloud Platform and Infrastructure; or countless other potential certifications.

Industry certifications enable IT professionals to tailor their expertise to meet the requirements of specific employers. And unlike a degree — which is a lifetime credential — IT certifications are typically renewed every few years to ensure current competence.

This was last published in April 2018