Cloud backup vs. NAS: A comparison on critical factors

When it comes to cloud backup vs. NAS, which considerations are most important to your organization’s needs?

Undecided about which type of backup technology is the best choice for your organization? Here’s a look at the pros and cons of today’s two most popular approaches.

John Edwards

Freelancer – SearchStorage

Published: 10 Jul 2019

One critical choice a growing number of IT and storage managers face is whether to keep their backups local with network-attached storage or to send them into the cloud. But before getting into the finer points of cloud backup vs. NAS, it’s helpful to establish the meaning of each.

Quick review of cloud backup and NAS

Storage is closely connected to both cloud backup and NAS. As to the first, cloud backup and cloud storage both involve copying data to the cloud, but storage has a wider purpose. It involves maintaining, managing and backing up data, and making it available to users. Cloud backup, on the other hand, is typically meant to serve as a data protection strategy.

In contrast, NAS is a local storage technology that connects to a network rather than to a single PC or server. The approach allows multiple computers and mobile devices on the network, whether wired or wireless, to share and access files. NAS can also be used for backup.

Benefits and disadvantages of NAS vs. cloud backup

Each technology presents both benefits and drawbacks, so it’s important to research both approaches thoroughly before making a final choice. “The informed decision would be comparing costs, needs and ease of use,” said Kate Donofrio, technical lead and manager at compliance and certification firm Schellman & Company.

Key factors to consider include recovery point and time objectives, retention periods, data visibility, durability requirements and available budgets and resources. “These requirements set the foundation for any solution, whether it be an on-premises, hybrid or entirely cloud-based backup storage approach,” explained Eric Brooks, a principal architect in the cloud practice of IT infrastructure provider Logicalis. “They also enable an organization to analyze what solutions they have in place, pinpoint the areas where there are critical gaps and prioritize where best to make investments in different technology areas.”

The choice between NAS and cloud-based backup should never be viewed as an either-or decision. “Both have a place in a comprehensive backup approach, and the business requirements around recovery time objective [RTO], recovery point objective [RPO] and long-term retention should drive the decision on when to use which,” said Scott Morley, principal application architect at IT services firm OneNeck IT Solutions.

Performance and speed

Both NAS and cloud-based backup can offer solid data protection. If a faster backup is desired, with a primary goal of protecting incremental changes, then a NAS-based approach might be better, advised Krishna Subramanian, COO and co-founder of Komprise, a data storage management software provider. “If a full off-site backup is needed, and restoration performance is not particularly critical, then a cloud-based solution might be a better fit,” Subramanian said.

NAS systems that are collocated on the same LAN as the devices being backed up have a clear speed advantage, said Dan Tucker, vice president and leader of the digital platform capability team at IT consulting firm Booz Allen Hamilton.

Cloud is limited by internet speed. While it’s possible to purchase bandwidth that surpasses LAN speeds, the cost is beyond what most organizations are willing to pay for backup storage, said George Mateaki, a security analyst at IT security service provider SecurityMetrics. Ultimately, NAS should support faster speeds due to the widespread use of high-speed LANs. Therefore, a key deciding factor between NAS and the cloud for many organizations is how quickly they will be able to back up and retrieve their data via a LAN versus over the internet.
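
To make that tradeoff concrete, a back-of-the-envelope calculation shows how backup and restore windows diverge between a LAN and a typical internet uplink. The link speeds and the 80% sustained-utilization figure below are illustrative assumptions, not benchmarks:

```python
def transfer_hours(data_gb: float, link_mbps: float, efficiency: float = 0.8) -> float:
    """Estimate hours to move data_gb over a link, assuming the link
    sustains `efficiency` of its rated speed (a hypothetical figure)."""
    megabits = data_gb * 8 * 1000          # GB -> megabits (decimal units)
    seconds = megabits / (link_mbps * efficiency)
    return seconds / 3600

# 10 TB over a 10 Gbps LAN vs. a 500 Mbps internet uplink
print(round(transfer_hours(10_000, 10_000), 1))  # ~2.8 hours on the LAN
print(round(transfer_hours(10_000, 500), 1))     # ~55.6 hours over the internet
```

At these assumed speeds, the same 10 TB backup that finishes overnight on a LAN would take more than two days over the internet link, which is why restore time often decides the question.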

However, Tucker pointed out another consideration. “As enterprises move into the public cloud, a cloud-based backup provided by the cloud service provider is going to have better performance … than travelling over the WAN back to the corporate NAS.”

When choosing between cloud and NAS, it’s important to consider factors such as cost, security and accessibility.

When all factors are considered, on-premises NAS devices are less sensitive to network latency than cloud services and are, therefore, generally faster, especially for full backups. “However, both approaches can be sped up significantly by archiving first,” Subramanian said.

For geographically distributed organizations, cloud-based backups can offer superior performance for snapshot-based backups. “If a catastrophic data loss occurred, the restore would be limited to the user’s internet bandwidth,” Tucker noted.

An important concern with NAS backups is that the file count can grow very large very quickly. “A petabyte of data is typically several hundred million files, and this can cause long backup windows, large backup sizes and inconsistencies,” Subramanian warned. On the other hand, she believes that more than 75% of NAS data is cold and has not been accessed or modified in over a year, and archiving such data before starting a backup can significantly speed up backup windows and reduce storage space.
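
Subramanian's suggestion can be approximated with a simple scan: treat files whose modification time is older than a cutoff as archive candidates before the backup window opens. This is a standard-library sketch only; production archiving tools would also weigh access times, ownership and retention policy:

```python
import os
import time

def cold_files(root: str, days: int = 365):
    """Yield paths under `root` not modified in `days` days --
    candidates for archiving ahead of the backup run (sketch)."""
    cutoff = time.time() - days * 86400
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            path = os.path.join(dirpath, name)
            try:
                if os.path.getmtime(path) < cutoff:
                    yield path
            except OSError:
                pass  # file vanished or is unreadable; skip it
```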


Security

When installed and configured properly, NAS and cloud backups offer generally equal levels of data security, both at rest and in transit. When evaluating NAS technologies, only storage products with self-encrypting disk storage should be considered, cloud architect Brooks advised. “The ability to provide transparent encryption at the disk level ensures that no data can be retrieved from the physical disk media should it be removed,” he said. Cloud backup providers should also guarantee their use of encryption technology.

Encryption is always necessary, regardless of the approach, said Jesse Antosiewicz, senior director of IT market applications at Liberty Mutual Insurance. “It’s the implementation and ability to sustain operations over time that are the differentiators.”


Cost

The ultimate cost of any backup approach depends on numerous factors, including equipment, service levels and backup size and frequency. Upfront NAS costs can be staggering for many organizations. “The initial purchase of the storage array as well as three or more years of support can be costly and will likely expand as additional capacity is purchased,” Brooks said.

Cloud backup entails sending a copy of data over the internet to a remote cloud-based server.

Countering NAS’s stiff upfront costs, cloud backup services offer a more easily digestible pay-for-use model. Cloud-based backup is typically cheaper overall, particularly when labor is factored in. “However, if you have to perform large scale restorations, the outbound bandwidth from most cloud providers is much more expensive than the inbound, which is often free,” Tucker said. “When you start adding up AWS network costs and restore costs, things can get pricy fairly quickly.”

Typically, cloud storage that’s used to back up enterprise data center archives of public cloud workloads is more cost-effective than buying an on-premises NAS system, Antosiewicz noted. “In the case of private data center workloads, this is subject to how often you need to recover data on premises, as egress charges can quickly evaporate any gained savings,” he said.


Reliability

Reliability is influenced by a number of factors, and both NAS and cloud have their benefits.

“The NAS device is the most familiar to backup and storage administrators and operates as a self-contained unit to manage,” Brooks stated. “It has a lower dependency profile as compared to the hybrid and storage gateway architectures because it eliminates many of the upstream dependencies on connectivity and storage service availability.”

Meanwhile, overall data reliability tends to be much greater for cloud backup, since it can be configured for geo-redundancy. “NAS solutions keep backups on premises and would require an off-site service to ensure survival in the case of disaster,” application architect Morley said. “Most major cloud providers offer a five-nines-plus SLA [service-level agreement] on storage and will keep multiple copies of data, even without geo-redundancy, making the chance of storage corruption almost nonexistent.”
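
Morley's point about multiple copies can be quantified with a simplified model: if each copy independently has some annual probability of loss, keeping n copies drives the combined probability down exponentially. Real failures are partially correlated, so treat this as an upper bound on the benefit:

```python
def annual_loss_probability(p_single: float, copies: int) -> float:
    """Probability of losing every copy in a year, assuming each
    copy fails independently (a simplifying assumption)."""
    return p_single ** copies

# e.g. an assumed 1% annual loss chance per copy
print(annual_loss_probability(0.01, 1))  # one copy: 1 in 100
print(annual_loss_probability(0.01, 3))  # three copies: ~1 in a million
```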


Accessibility

Easy, secure access is important to any successful backup system. “Backup needs to be done with the premise that the data can be recovered,” said Adrian Moir, lead technology evangelist for IT management software provider Quest Software. “Access needs to be restricted, but in such a way that a successful recovery of data can be made.”
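
One common way to honor that premise, sketched here with the Python standard library and not tied to any particular backup product, is to record a SHA-256 digest of every file at backup time and compare digests after a trial restore:

```python
import hashlib
import os

def sha256_of(path: str, bufsize: int = 1 << 20) -> str:
    """Stream a file through SHA-256 without loading it all in memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        while chunk := f.read(bufsize):
            h.update(chunk)
    return h.hexdigest()

def verify_restore(manifest: dict, restored_root: str) -> list:
    """Return the relative paths whose restored contents do not match
    the checksums recorded at backup time (sketch)."""
    bad = []
    for rel, digest in manifest.items():
        path = os.path.join(restored_root, rel)
        if not os.path.exists(path) or sha256_of(path) != digest:
            bad.append(rel)
    return bad
```

An empty result from `verify_restore` is the evidence, rather than the hope, that the data can be recovered.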

NAS systems are generally more accessible than their cloud counterparts under routine operating conditions simply because they are positioned on site. Yet NAS backups may be unreachable — perhaps permanently — in the aftermath of a fire or other calamity. “In the case of a local disaster, a cloud solution will have the benefit of being almost immediately available at any location with an internet connection,” Morley said.

NAS is a local storage technology that connects to a network rather than a PC or server, making it easy for distributed work environments to access files and folders from any device connected to the network.

Control and ownership

Subramanian noted that NAS-based approaches generally provide the greatest amount of control, since all of the data is stored on premises. Cloud-based backups, on the other hand, usually provide acceptable access flexibility as well as good control, as long as the exported data is fully secured ahead of time.

A NAS system will generally come with a guarantee of control and ownership, although version upgrades can sometimes render existing backups unusable or in need of conversion. “Cloud backup solutions are generally provided by companies that guarantee access and ownership of data, but any contract should be reviewed fully to ensure there is no contention of ownership,” Morley said.

Making the cloud backup vs. NAS decision

When evaluating whether to follow a NAS or cloud-based backup approach, it’s important to fully understand current backup and recovery business requirements. “Things seemingly as simple as RPOs and RTOs can play a huge part in the decision,” said David Byte, a senior technology specialist at open source software provider SUSE.
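
Those requirements translate into simple acceptance checks. The sketch below is illustrative only: the backup interval bounds worst-case data loss against the RPO, and the expected restore time must fit inside the RTO:

```python
def meets_rpo(backup_interval_hours: float, rpo_hours: float) -> bool:
    """Worst-case data loss is the time since the last good backup,
    so the backup interval must not exceed the RPO."""
    return backup_interval_hours <= rpo_hours

def meets_rto(expected_restore_hours: float, rto_hours: float) -> bool:
    """The full restore (download, rehydrate, verify) must fit in the RTO."""
    return expected_restore_hours <= rto_hours

# Nightly backups cannot satisfy a 4-hour RPO; hourly snapshots can.
print(meets_rpo(24, 4), meets_rpo(1, 4))  # False True
```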

When considering cloud backup, thoroughly examine several providers and have agreements in place for how to regain data possession when it becomes necessary to change providers or if the current cloud provider goes out of business. “Backup data can be sizable, and since there are many compliance and legal requirements for the retention of data, one must be comfortable with all terms,” Schellman & Company’s Donofrio observed. “The loss of backups due to cloud providers going out of business can also be a compliance nightmare,” she noted.

There are pros and cons for each option, based on an organization’s overall data protection plans and business needs. “Look for a partner who can help you understand those requirements and provide the optimal solution for your environment,” Logicalis’ Brooks suggested.

5 multi-cloud use cases for better storage

How is your organization using multi-cloud technology to enhance its storage strategy?

Use cases for multi-cloud storage proliferate as the technology becomes mainstream. Consider the technology for backup, resiliency, compliance, AI and software development.

John Edwards

Freelancer – SearchStorage

Multi-cloud storage is gaining momentum, with a growing number of adopters coming to appreciate the technology’s cost, flexibility, adaptability and security attributes.

The number of multi-cloud use cases aimed at storage is expanding rapidly. Take a look at the following five ways organizations can use a multi-cloud environment to enhance their storage infrastructures.

Data backup and archiving

Among the most common multi-cloud use cases is data backup and archiving. Multi-cloud storage makes backup and archiving cheaper, easier and more reliable. It enables the replication of data off site, reducing Capex and Opex related to tape storage, expanding retention periods and improving recovery point objectives, noted Robert Illing, enterprise architect for IT transformation at hardware and software reseller SHI International, based in Somerset, N.J.

“Thanks to user-friendly cloud interfaces, moving data across clouds and making multiple copies of data sets in one click results in reduced investments in development resources,” said Andrei Lipnitski, information and communications technology department manager at ScienceSoft, an IT consulting and software development company, based in McKinney, Texas.


Risk mitigation and resiliency

Another multi-cloud use case for storage is risk mitigation.

“Having data in different locations, especially in terms of primary and secondary data, means that the customer doesn’t have all his eggs in one basket,” explained Charles Foley, senior vice president of Talon Storage, a firm specializing in enterprise-class file sharing, based in Mount Laurel, N.J.

Resiliency is a natural benefit of multi-cloud storage, said Geoff Tudor, vice president and general manager of cloud data services at Panzura, a firm specializing in unstructured data management in the cloud, based in Campbell, Calif. Human error is at the heart of many cloud storage outages, he said.

“Spreading data across two cloud storage providers greatly reduces exposure from these types of outages,” he added.

A multi-cloud storage strategy is essential to unify data storage for applications running in different clouds, whether they’re public or private clouds.

“It allows data to be abstracted away from specific cloud applications and stored in a common shared pool,” said Sazzala Reddy, CTO and co-founder of unified hybrid cloud computing and data management company Datrium, based in Sunnyvale, Calif.

A multi-cloud approach can also prevent data fragmentation, reduce data duplication per app and improve data governance, Reddy said. Such benefits can be handy when there’s an application running across clouds that needs to access shared data, particularly when the data is sensitive and stored in a private cloud.

“This allows companies to get the best of both public and private cloud offerings, choosing what best fits their needs and being able to adjust as those needs and their data changes,” Reddy explained.

Multi-cloud storage also keeps organizations from relying on a single cloud storage provider. Multi-cloud adopters can ensure data resiliency and eliminate application downtime from the failure of an object storage service by having consistent replicas of objects in another cloud or location, said Joel Horwitz, senior vice president of WANdisco, a software company focused on distributed computing, based in San Ramon, Calif.
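
The replication idea can be sketched with a toy object-store interface; here local directories stand in for two providers' buckets, where real code would call each provider's SDK instead:

```python
import os

class Bucket:
    """Toy stand-in for one cloud provider's object store, backed by
    a local directory (sketch only, not a real provider API)."""
    def __init__(self, root: str):
        self.root = root
        os.makedirs(root, exist_ok=True)

    def put(self, key: str, data: bytes):
        with open(os.path.join(self.root, key), "wb") as f:
            f.write(data)

    def get(self, key: str) -> bytes:
        with open(os.path.join(self.root, key), "rb") as f:
            return f.read()

def put_replicated(key: str, data: bytes, *buckets: Bucket):
    """Write the object to every cloud, so the failure of one
    object-store service still leaves consistent replicas elsewhere."""
    for bucket in buckets:
        bucket.put(key, data)
```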


Compliance

For many enterprises, compliance is becoming an effective multi-cloud use case. Increasingly, data must reside in specific geographic areas to meet data governance and compliance regulations. For some companies, a majority of data may reside with cloud A in one country, while a subset of data that is pertinent to a division or subsidiary in a GDPR-governed country may be stored with cloud B in another country, Talon Storage’s Foley said.

For large or publicly traded companies, the requirement to use risk mitigation best practices will drive the need to take a multi-cloud storage approach.

“For global companies, the need to keep certain data assets in certain geographic domains will factor into the multi-cloud adoption equation,” Foley noted.
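
A residency policy like the one Foley describes often reduces to a lookup table that routes each data set to a compliant provider and region. The provider and region names below are hypothetical:

```python
# Hypothetical residency policy: map each data set's residency tag to
# the (provider, region) pair that satisfies its governance rules.
POLICY = {
    "eu": ("cloud-b", "eu-central"),    # GDPR-governed subsidiary data
    "default": ("cloud-a", "us-east"),  # everything else
}

def placement(residency_tag: str):
    """Return the (provider, region) a data set must be stored in."""
    return POLICY.get(residency_tag, POLICY["default"])

print(placement("eu"))    # ('cloud-b', 'eu-central')
print(placement("apac"))  # falls through to the default placement
```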

AI and machine learning

Because each hyperscale cloud provider is developing its own analytical platform, AI and machine learning data must be easily transportable across clouds.

“By having your data portable across multiple clouds, you can take advantage of all of these new tools to extract value from your data,” Panzura’s Tudor said.

Software development

DevOps and continuous integration and continuous delivery are among the most powerful multi-cloud use cases for storage. This approach allows the most important components of an application — the data it produces and consumes — to travel between environments based on the needs of the developers and end users, said Irshad Raihan, a director of product marketing on Red Hat’s storage team.

“When multi-cloud storage is driven by policies that assign data to specific locations, it enables more automation and scalability, ultimately enabling more efficient development,” he said.

This was last published in June 2019

8 pros and cons of colocation

What colocation issues has your organization encountered, and how were they handled?

Colocation is not a silver bullet solution for everyone. Discover the advantages and disadvantages that come with allowing a third-party to manage your IT equipment.

Clive Longbottom

Independent Commentator and ITC Industry Analyst

Colocation is the act of placing and running your IT equipment within a facility operated and managed by a third party.

Organizations can use colocation to eliminate facility maintenance, expand computing power or set up an environment for disaster recovery. The service model provides IT departments with the benefits of continuous uptime, third-party support and options for scalability. However, concerns can exist around service contracts, direct accessibility and the ability to update hardware.

Before entering into any agreement, it is worth considering the following pros and cons of colocation.

Four pros of colocation

Depending on how much space an organization needs, admins can rent colocation space by the rack, cabinet, cage or room. This model is beneficial if managers want to test colocation services, but they can also consider the following pros of colocation.

1. The facility itself, power distribution and redundancy, cooling, security and physical connectivity are all covered. Your basic payments to the operator cover the costs of operating and maintaining the facility, along with providing sufficient clean power to operate all your equipment both from the grid and from auxiliary systems (battery and generators) if the grid power supply fails. The provision and maintenance of sufficient cooling for your equipment is included. The provider also takes responsibility for all physical security of the facility.

Most large colocation facilities have existing managed high-speed connections to the majority of connectivity providers, with some being points of presence for connectivity to public cloud providers via AWS Direct Connect and Microsoft Azure ExpressRoute, for example. However, bear in mind that usage charges for connectivity still require an extra contract and payment.

2. Third-party services may well be available at data center speeds. Large providers will have thousands of customers, many of which will be cloud service providers. Many of these will provide SaaS or functional microservices. By being in a facility that will be connected using low-latency, high-speed links to all other facilities under the facility provider’s control, you can subscribe to these services and use them as if they are on your own network. For example, Equinix Marketplace offers services from more than 9,500 other customers.

3. Scalability needs can be negotiated on an as-needed basis. A fully owned facility with just one user is difficult to size to cover needs over a period of time. Will the organization’s needs grow or shrink? If they grow, will changes in equipment density mean that the actual space required shrinks? Colocation offers a means of escaping such problems. You can rent sufficient space for your immediate needs, negotiate new contracts if you need extra space or use clauses agreed to in the contract to lower space requirements at defined times.

4. Colocation can offer much higher levels of platform availability. The facility, power, cooling, connectivity and so on are all managed by a company whose business depends on its public track record for customer availability. If you run a badly configured hardware environment with poorly written software on it, you will still have poor overall availability, but that is a different problem.

Four cons of colocation

Despite the benefit of constant uptime, there are tradeoffs to having a data center environment off site. Here are four cons of colocation that managers must be aware of.

1. Colocation facilities might not provide direct accessibility. This is both a pro and a con of colocation. The majority of colocation providers are very careful as to who they allow into their facilities. Therefore, there will be policies in place around only registered people with a valid trouble ticket gaining access. Equipment suppliers may often only be allowed in under special circumstances, and they may need to unpack and leave the equipment in a separate area for registered people to install. However, most providers will have areas outside of the main hall where people can use terminals to access equipment for management and maintenance purposes.

2. Cost mechanisms change depending on the provider. This can be a difficult one to deal with. The upfront costs of implementing colocation can seem expensive. But when you consider what you get, and that the price is a predictable per-month or per-year charge, colocation’s cost-effectiveness becomes apparent.

However, costing mechanisms vary by provider. Some charge by the amount of energy used by your equipment, some by the amount of space rented, while others may have a more complex mix of variables. The most common charge mechanism is by the amount of floor space used.
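
The difference between charge mechanisms matters most for dense deployments. A small model, with per-rack and per-kW rates invented purely for illustration, shows how the same footprint prices out under the two most common schemes:

```python
def monthly_colo_cost(racks: int, kw_drawn: float,
                      per_rack=800.0, per_kw=150.0, model="space"):
    """Compare two common charge mechanisms under assumed rates:
    by floor space (per rack) or by metered power (per kW)."""
    if model == "space":
        return racks * per_rack
    if model == "power":
        return kw_drawn * per_kw
    raise ValueError(f"unknown charge model: {model}")

# A dense deployment: 2 racks drawing 16 kW in total
print(monthly_colo_cost(2, 16, model="space"))  # 1600.0
print(monthly_colo_cost(2, 16, model="power"))  # 2400.0
```

At these assumed rates, the dense deployment costs 50% more under power-based billing, so it pays to ask which mechanism applies before comparing quotes.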

3. Negotiating solid service-level agreements (SLAs) can be challenging. The keys to a successful colocation agreement are in the SLA. Ensure that your monitoring and reporting requirements are met by the provider and that reports are available in real time. Ask what tools the provider uses and verify that your own systems work with them. The provider should also test its own systems on a regular basis, particularly those involved in power outage situations. Finally, ensure that time-to-remediation targets are defined, and measure against them.
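
When reviewing an availability SLA, it helps to convert the advertised percentage into a concrete downtime budget:

```python
def downtime_budget_minutes(sla_percent: float, days: int = 365) -> float:
    """Minutes of outage a given availability SLA permits over `days` days."""
    return days * 24 * 60 * (1 - sla_percent / 100)

print(round(downtime_budget_minutes(99.9), 1))    # "three nines": ~525.6 min/year
print(round(downtime_budget_minutes(99.999), 1))  # "five nines": ~5.3 min/year
```

The gap between three and five nines is roughly a full workday of outage per year versus a coffee break, which is why the SLA percentage deserves close scrutiny.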

4. Vendor facilities might not age well. What feels like a state-of-the-art facility now may well be exceedingly inefficient and not fit-for-purpose in a few years’ time. Investigate how maintenance and upgrades to older facilities have been managed to gauge how the provider will sweat its assets.

Colocation can make sense for a number of organizations. However, it is not a silver bullet solution, so you need to be aware of the pros and cons of colocation to avoid surprises.

This was last published in April 2019

Drill down to basics with these server hardware terms

What server hardware would you want to learn more about?

Even with software-based data center options, it’s still important to know the physical components of a server. Check out these four terms to refresh your memory.

Jessica Lulka

Associate Site Editor – SearchDataCenter

Servers are the powerhouse behind every data center. These modular, boxy components contain all the processing power required to route and store data for every possible use case.

Depending on the size of the data center, organizations use blade, rack or tower servers so that admins can scale the number of servers depending on need, effectively maintain the hardware and easily keep them cool.

Whether a data center uses rack, blade or tower servers, the central server hardware components stay the same and help support simultaneous data processing at any scale. Here’s a quick refresher on the basic components of a server and how they help get data from point A to point B. 


Motherboard

This piece of server hardware is the main printed circuit board in a computing system. It functions as the central connection for all externally connected devices, and the standard design includes 6 to 14 fiberglass layers, copper connecting traces and copper planes. These components support power distribution and signal isolation for smooth operation.

The two main motherboard types are Advanced Technology Extended (ATX) and Low Profile Extension (LPX).  ATX includes more space than older designs for I/O arrangements, expansion slots and local area network connections. The LPX motherboard has ports at the back of the system.

For smaller form factors, there are the Balance Technology Extended, Pico BTX and Mini Information Technology Extended motherboards.


Processor

This circuitry translates and executes the basic functions that occur within a computing system: fetch, decode, execute and write back. The four main elements included on the processor are the arithmetic logic unit (ALU), floating point unit (FPU), registers and cache memory.

On a more granular level, the ALU executes all logic and arithmetic commands on the operands. The FPU is a coprocessor designed to handle floating-point numbers faster than the general-purpose microprocessor circuitry can.

The terms processor and central processing unit are often interchanged, even though the use of graphics processing units means there can sometimes be more than one processor in a server.
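
The fetch-decode-execute-write-back cycle can be illustrated with a deliberately tiny interpreter for a one-register machine. This is purely pedagogical; real processors implement these stages in pipelined hardware:

```python
def run(program, x=0):
    """Toy illustration of the fetch-decode-execute-write-back cycle
    for a machine with a single register `x`."""
    pc = 0                              # program counter
    while pc < len(program):
        op, arg = program[pc]           # fetch the next instruction
        if op == "ADD":                 # decode ...
            x = x + arg                 # ... execute, then write back to x
        elif op == "MUL":
            x = x * arg
        else:
            raise ValueError(f"unknown opcode: {op}")
        pc += 1                         # advance to the next instruction
    return x

print(run([("ADD", 2), ("MUL", 10)]))  # (0 + 2) * 10 = 20
```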

Random access memory

RAM is the main type of memory in a computing system because it is much faster for read/write performance than some other data storage types, and because it serves as a path between the OS, applications and hardware.

It cannot store data permanently, which is why computing systems also use hard drive or cloud-based storage options. RAM is volatile: it makes data available while the system is on but loses its contents once admins shut off the server.

RAM is built on microchips and has a comparatively limited capacity; an average laptop uses 8 GB of RAM, whereas a single enterprise hard disk can store up to 10 TB. These microchips plug into the motherboard and connect to the rest of the server hardware via a bus.

Hard disk drive

This hardware is responsible for reading, writing and positioning of the hard disk, which is one technology for data storage on server hardware. Developed at IBM in 1953, the hard disk drive has evolved over time from the size of a refrigerator to the standard 2.5-inch and 3.5-inch form factors.

A hard disk drive has a collection of disk platters around a spindle within a sealed chamber. These platters can spin at up to 15,000 rotations per minute, and an actuator positions the read/write heads as they record and retrieve information to and from each platter.
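
The quoted spindle speed translates directly into latency: on average, the desired sector is half a revolution away from the head, so:

```python
def avg_rotational_latency_ms(rpm: float) -> float:
    """Average rotational latency is the time for half a revolution:
    0.5 * (60,000 ms per minute) / RPM."""
    return 0.5 * 60_000 / rpm

print(avg_rotational_latency_ms(15_000))          # 2.0 ms for a 15K drive
print(round(avg_rotational_latency_ms(7_200), 1)) # ~4.2 ms for a 7,200 RPM drive
```

Seek time adds a few more milliseconds on top, which is the mechanical delay that solid-state drives, having no moving parts, avoid entirely.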

Components of a hard disk drive

Data center servers also use solid-state drives, which have no moving parts and bring the benefits of low latency and high I/O for data-intensive use cases. They are more expensive than hard disks, so organizations often use a mix of hard drive and solid-state storage within their servers.

This was last published in June 2019

Data center managers avoid cloud migration risks

Many corporate IT users are flocking to the cloud, but a majority surprisingly remain reluctant to migrate their on-premises mission-critical workloads to a public cloud.

Ed Scannell

Senior Executive Editor – TechTarget – SearchWindowsServer

07 Jun 2019

Well into the cloud era, a significant number of enterprises still have trepidations about moving mission-critical applications and services to the public cloud, preferring to forgo cloud migration risks by keeping apps ensconced within their own data centers.

Heading the list of reservations corporate IT shops have is the lack of visibility, transparency and accountability of public cloud services, according to respondents to the 2019 Uptime Institute’s Annual Global Data Center Survey.

Some 52% of the nearly 1,100 respondents, who included IT managers, owners and operators of data centers, suppliers, designers and consultants, said they do not place their mission-critical workloads in public clouds, nor do they plan to, while 14% said they have placed such workloads in the public cloud and are quite happy with their respective cloud services.

Of the remaining 34%, 12% have placed their services in the public cloud but complain about the lack of visibility. The remaining 22% said they will keep their most important workloads on premises but will consider moving to the cloud if they have adequate visibility.

Cloud migration risks tip the balance

Chris Brown, Uptime Institute’s chief technology officer, said he was a bit surprised that 52% of respondents were reluctant to venture into the public cloud, but a closer look at some of the reasons for that reluctance brought a better understanding.

“Among that 52% (of respondents), there are workloads that just aren’t tailored or good fits for the cloud,” Brown said. “Also, there is a fair number of older applications that have technical issues with adapting to the cloud, and there is a lot of rearchitecting associated with them, or they don’t have the budget for it,” he said.

Of the 34% who have gone to the public cloud or are considering it, it comes down to a matter of trust, according to Brown. For the most part, respondents in this group realize the benefits cloud can bring, but they have difficulty summoning up enough faith that service providers will live up to the uptime promised in their service-level agreements (SLAs).

These concerns over cloud migration risks appear justified. The number of data center outages this year matched last year’s number for the same period, although this year more managers reported that outages rippled across multiple data centers. Just over a third of respondents reported that outages, which typically were traced to an infrastructure problem, had a measurable impact on their business. About 10% said their most recent outages resulted in over $1 million in direct and indirect costs.

Brown added that part of the problem is many users don’t understand enough about how the cloud is structured or how their cloud availability zones are designed.

“If users see the cloud as just a black box in the sky, they can only trust their provider to give them what they need when they need it,” Brown said. “And if they have outages, they have to hope their SLAs will make them whole.”

While there is plenty of data available showing how reliable most cloud service providers are, users read about highly publicized outages that have occurred over the past few years from providers such as AWS, Google and Microsoft. Compounding that issue is the basic conservative nature of data center managers.

“From my experience, the data center industry always ventures into something very gingerly,” Brown said.

Yet another reason that holds some users back is the fear of cloud lock-in and its associated expense when they want to switch service providers.

“Everyone deals with a lot of data because storage is so cheap and every IT strategy seems to be based around data,” Brown said. “But when it comes time to pull your data out of the cloud, it can cost you a fortune.”

Cloud vendors meet hesitant users halfway

Some analysts and consultants aren’t surprised at the number of corporate users still skittish about cloud migration risks. One analyst points to “cloud-down” moves from the likes of AWS, Microsoft and Google over the past year or two that offer users the option to run their applications either in the cloud or on premises.

“AWS announced Outposts last year because they want to get more into larger enterprises,” said Judith Hurwitz, president of Hurwitz and Associates, an analyst firm in Needham, Mass. “These accounts say to AWS, ‘We like your offerings, but we really want to keep them behind the firewall.’ This is how products like Outposts, [Google’s] Anthos and [Microsoft’s] Azure Stack came to be,” she said.

Uptime Institute survey takers are justifiably concerned about cloud migration risks. A report from Enterprise Strategy Group shows that 41% of companies have had to move a workload back out of the cloud, incurring downtime and costs.

While some other analysts understand the reluctance of many data centers to move to the cloud, they also believe it makes sense for them to be bolder and take advantage of the benefits the cloud offers now rather than wait.

“There are some workloads that shouldn’t go to the cloud,” said Dana Gardner, principal analyst with Interarbor Solutions LLC in Gilford, N.H. “But to have these legacy platforms and the associated RDBs (relational databases) sitting around collecting dust just to support a handful of aging apps doesn’t seem to work.”

Capacity demand in the enterprise continues to grow, according to the survey, along with cloud and colocation data centers, with workloads running across a range of platforms. Enterprise-owned data center capacity is still growing in absolute terms, but it is shrinking as a percentage of the total capacity needed.


Rollout of 16 TB HDDs targets hyperscale data centers

The drive to 16 TB HDDs is underway. Seagate kicked it off with three new hard disk drives, and Toshiba and Western Digital are poised to follow in 2019.

Carol Sliwa

Senior News Writer – TechTarget – SearchStorage

07 Jun 2019

The high-capacity point for hard disk drives officially hit 16 TB this week with Seagate Technology’s product launch that targets hyperscale, cloud and NAS customers with rapidly expanding storage requirements.

Seagate brought out a helium-sealed, 7,200 rpm Exos 16 TB HDD for hyperscale data centers and IronWolf and IronWolf Pro 16 TB HDDs for high-capacity NAS use cases in SMBs.

Earlier this year, Toshiba forecast that its 7,200 rpm helium-based MG08 Series 16 TB HDD would become available midyear, although the company has yet to confirm a ship date. Western Digital is expected to ship 16 TB HDDs in 2019 based on conventional magnetic recording (CMR) technology.

Lowering total cost

With SSDs taking over performance use cases, HDDs are largely deployed in systems focused on capacity. Using the highest available capacity is especially important to cloud and enterprise customers with explosively growing volumes of data, as they try to minimize their storage footprint and lower costs. Helium-sealed HDDs help because they enable manufacturers to use thinner platters to pack in more data per HDD and require less power than air-filled drives.

“Time to market is extremely critical given that customers — including hyperscale/cloud customers — have limited resources available to qualify new HDD products,” John Rydning, a research vice president at IDC, noted via email.

Rydning said hyperscale/cloud customers would be first to use the 16 TB HDDs because they have the architecture and software stack to deploy them without diminishing overall system performance. The highest capacity HDDs have lower IOPS per terabyte, he noted.

Sinan Sahin, a principal product manager at Seagate, said the vendor has shipped more than 20,000 test units of its 3.5-inch 16 TB HDDs to hyperscale customers such as Tencent and Google and NAS vendors such as QNAP Systems and Synology.

Toshiba began shipping 16 TB HDDs to customers for qualification slightly after Seagate, and Western Digital has yet to do so, according to Rydning, who tracks the HDD market.

“Cloud customers generally will migrate to the highest available capacity, especially if there is a two- to three-quarter gap before the next capacity is qualified and ramped up in volume,” John Chen, a vice president at Trendfocus, wrote in an email.

Horse race for shift to 16 TB

Chen expects 14 TB CMR HDDs to ramp up in volume in the second half of this year at hyperscale companies. “And it is essentially a horse race between the three suppliers to determine if the transition to 16 TB can be pulled in earlier than the second quarter of 2020,” he added.

Seagate’s Exos schedule shows how the timeline could play out. The 7,200 rpm nearline 12 TB HDD was Seagate’s highest-selling enterprise product in the first quarter. Because Exos X16 development was running ahead of schedule, Seagate launched its 14 TB Exos HDDs late last year and this spring to only a limited set of customers, according to Sahin.

“We wanted to make sure that we did not have the two products in the channel at the same time,” Sahin said.


Seagate CEO Dave Mosley said during a recent earnings call that he expects Seagate to begin ramping to high volume this year, with the 16 TB HDDs set to become the highest revenue producer by next spring.

List pricing for Seagate’s 6 Gbps SATA-based Exos X16 HDD is $629. The IronWolf 16 TB HDD lists at $609.99, and the IronWolf Pro, which offers a higher sustained data rate, is $664.99.
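At 16 TB per drive, those list prices work out to roughly $38 to $42 per raw terabyte. A quick calculation, using only the figures quoted above, makes the comparison concrete:

```python
# Cost-per-terabyte comparison using the list prices quoted above.
drives = {
    "Exos X16": 629.00,
    "IronWolf 16TB": 609.99,
    "IronWolf Pro 16TB": 664.99,
}
capacity_tb = 16

for name, price in drives.items():
    print(f"{name}: ${price / capacity_tb:.2f} per raw TB")
```

Raw cost per terabyte is only one input, of course; sustained data rate, workload rating and warranty also separate the IronWolf lines from the Exos.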

Seagate’s new Exos X16, IronWolf and IronWolf Pro 16 TB HDDs use a nine-platter design to boost areal density. Chen said other manufacturers will also use a nine-disk design — and potentially even more platters in the future — for enterprise capacity-optimized nearline HDDs.

But CMR HDDs aren’t the only option for hyperscalers seeking high-capacity storage. Seagate, Toshiba and Western Digital are also working on new HDDs that use shingled magnetic recording (SMR) technology, with tracks that overlap like the shingles on a roof to increase areal density.

SMR HDD use is typically restricted to workloads that write data sequentially, such as video surveillance and the internet of things. CMR drives write data randomly across the entire disk. SMR adoption has been low because users generally have to make host-side adjustments to use the HDDs without a performance hit. But industry initiatives could start to make it easier for customers to deploy SMR HDDs in the future.
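The sequential-write constraint described above can be illustrated with a toy model of a host-managed SMR zone. This sketch is illustrative only; it does not reflect any vendor's firmware or the actual zoned-block command set:

```python
class SMRZone:
    """Toy model of a host-managed SMR zone: writes must land at the
    write pointer; rewriting earlier blocks requires resetting the zone."""

    def __init__(self, size_blocks):
        self.size = size_blocks
        self.write_pointer = 0
        self.data = {}

    def write(self, block, value):
        if block != self.write_pointer:
            # A random write would disturb the overlapping (shingled)
            # tracks, so the host must write strictly sequentially.
            raise ValueError("SMR zone only accepts sequential writes")
        self.data[block] = value
        self.write_pointer += 1

    def reset(self):
        # Rewriting means erasing the whole zone and starting over.
        self.write_pointer = 0
        self.data.clear()

zone = SMRZone(size_blocks=256)
zone.write(0, "frame-0")   # sequential writes succeed
zone.write(1, "frame-1")
try:
    zone.write(0, "overwrite")   # random rewrite: rejected
except ValueError as err:
    print(err)
```

This is why SMR suits append-heavy workloads such as video surveillance: the host-side adjustments mentioned above amount to teaching the software stack to respect the write pointer.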

The highest capacity SMR HDD today is 15 TB. Western Digital began shipping qualification samples of its Ultrastar DC HC620 host-managed SMR HDD last October. Seagate has also sampled an enterprise SMR-based 15 TB HDD, but it hasn’t launched it commercially, according to Sahin. He said Seagate plans to make available a 17 TB SMR HDD, based on the CMR-based Exos X16, later this year. Toshiba did not respond to requests for comment on its SMR HDD plans.

Even higher HDD capacities could hit the market when manufacturers start to ship drives that use heat-assisted magnetic recording (HAMR) and microwave-assisted magnetic recording (MAMR) technologies. Sahin said Seagate expects to make available HAMR-based 20 TB HDDs in late 2020. Toshiba hasn’t specified its roadmap but outlined plans to use MAMR and explore the use of HAMR technology.

Western Digital plans to introduce “energy-assisted” 16 TB CMR HDDs and 18 TB SMR HDDs later this year, according to Mike Cordano, the company’s president and COO. Cordano claimed during the company’s most recent earnings call that the new energy-assisted HDDs would contain fewer disks and heads than competitors’ options. Western Digital late last year had said that its MAMR-based 16 TB HDD would have eight platters.

IDC’s 2018 market statistics for 2.5-inch and 3.5-inch capacity-optimized HDDs showed Seagate in the lead with 47.8% of the unit shipments. Western Digital was next at 22.4% and Toshiba trailed at 9.8%. IDC’s overall HDD unit shipment statistics for 2018 had Seagate in the lead at 40.0%, Western Digital second at 37.2% and Toshiba at 22.8%.

All three vendors make available a wide range of client and enterprise HDDs, including mission-critical enterprise drives that spin at 10,000 rpm and 15,000 rpm.


Blade servers: An introduction and overview

Blade servers add muscle to demanding workloads and virtual data centers, but they also pose some concerns, including power consumption and management complexity.

Brien Posey

Microsoft MVP – SearchDataBackup


Blade servers have become a staple in almost every data center. The typical “blade” is a stripped-down modular server that saves space by concentrating processing power and memory on each blade, while forgoing much of the traditional storage and I/O functionality typical of rack and standalone server systems. Small size and relatively low cost make blades ideal for situations that require high physical server density, such as distributing a workload across multiple web servers.

But high density also creates new concerns that prospective adopters should weigh before making a purchase decision. This guide outlines the most important criteria that should be examined when purchasing blade servers, reviews a blade server’s internal and external hardware, and discusses basic blade server management expectations.

Internal blade server characteristics

Form factor. Although blade server size varies from manufacturer to manufacturer, blade servers are characterized as full height or half height. The height aspect refers to how much space a blade server occupies within a chassis.


Unlike a rackmount server, which is entirely self-contained, blade servers lack certain key components, such as cooling fans and power supplies. These missing components, which contribute to a blade server’s small size and lower cost, are instead contained in a dedicated blade server chassis. The chassis is a modular unit that contains blade servers and other modules. In addition to the servers, a blade server chassis might contain modular power supplies, storage modules, cooling modules (i.e., fans) and management modules.

Blade chassis design is proprietary and often specific to a provider’s modules. As such, you cannot install a Hewlett-Packard (HP) Co. server in a Dell Inc. chassis, or vice versa. Furthermore, blade server chassis won’t necessarily accommodate all blade server models that a manufacturer offers. Dell’s M1000e chassis, for example, accommodates only Dell M series blade servers. But third-party vendors sometimes offer modules that are designed to fit another vendor’s chassis. For example, Cisco Systems Inc. makes networking hardware for HP and Dell blades.

Historically, blades’ high-density design posed overheating concerns, and they could be power hogs. With such high density, a fully used chassis consumes a lot of power and produces a significant amount of heat. While there is little danger of newer blade servers overheating (assuming that sufficient cooling modules are used), proper rack design and arrangement are still necessary to prevent escalating temperatures. Organizations with multiple blade server chassis should design data centers to use hot-row/cold-row architecture, as is typical with rack servers.
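To see why a fully populated chassis demands careful power and cooling planning, a back-of-the-envelope estimate helps. All figures below are assumptions chosen for illustration, not vendor specifications:

```python
# Rough chassis power and heat estimate. Every figure here is an
# illustrative assumption, not a vendor specification.
blades_per_chassis = 16        # half-height blades in one chassis (assumed)
watts_per_blade = 300          # typical loaded draw per blade (assumed)
chassis_overhead_watts = 800   # shared fans, switches, management (assumed)

total_watts = blades_per_chassis * watts_per_blade + chassis_overhead_watts
print(f"Estimated chassis draw: {total_watts} W")

# Nearly all of that power leaves the chassis as heat that the
# hot-row/cold-row cooling design must remove (1 W = 3.412 BTU/hr).
btu_per_hour = total_watts * 3.412
print(f"Approximate heat load: {btu_per_hour:.0f} BTU/hr")
```

Even with assumed numbers, the point stands: a single chassis can concentrate several kilowatts of load into a few rack units, which is exactly why the 20-ampere multi-feed power design described below exists.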

Processor support. As organizations ponder a blade server purchase, they need to consider a server’s processing capabilities. Nearly all of today’s blade servers offer multiple processor sockets. Given a blade server’s small form factor, each server can usually accommodate only two to four sockets.

Most blade servers on the market use Intel Xeon processors, although the Super Micro SBA-7142G-T4 uses Advanced Micro Devices (AMD) Inc.’s Opteron 6100 series processors. In either case, blade servers rarely offer less than four cores per socket. Most blade server CPUs have six to eight cores per socket, and some AMD Opteron 6100 series processors have up to 12 cores.

If you require additional processing power, consider blade modules that can work cooperatively, such as the SGI Altix 450. This class of blades can distribute workloads across multiple nodes. By doing so, the SGI Altix 450 offers up to 38 processor sockets and up to 76 cores when two-core processors are installed.

Memory support. As you ponder a blade server purchase, consider how well the server can host virtual machines (VMs). In the past, blade servers were often overlooked as host servers, because they were marketed as commodity hardware rather than high-end hardware capable of sustaining a virtual data center. Today, blade server technology has caught up with data center requirements, and hosting VMs on blade servers is a realistic option. Because server virtualization is so memory-intensive, organizations typically try to purchase servers that support an enormous amount of memory.

Even with their small form factor, blade servers rarely offer less than 32 GB of memory. Many of the blade servers on the market support hundreds of gigabytes of memory, with servers like the Fujitsu Primergy BX960 S1 and the Dell PowerEdge M910 topping out at 512 GB.

As important as it is for a blade server to have sufficient memory, other aspects of the server’s memory are worth considering. For example, it is a good idea to look for servers that support error-correcting code (ECC) memory, which is available on some, but not all, blade servers. ECC memory can correct single-bit memory errors and detect double-bit memory errors.
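To illustrate what single-error-correct, double-error-detect (SECDED) protection means, here is a minimal extended Hamming code sketch for 4 data bits. Real ECC memory applies the same idea to whole 64-bit words in hardware, so this is purely conceptual:

```python
def encode(d):
    """Encode 4 data bits with an extended Hamming (SECDED) code."""
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4            # parity over codeword positions 3, 5, 7
    p2 = d1 ^ d3 ^ d4            # parity over positions 3, 6, 7
    p4 = d2 ^ d3 ^ d4            # parity over positions 5, 6, 7
    code = [p1, p2, d1, p4, d2, d3, d4]
    overall = sum(code) % 2      # extra parity bit enables double detection
    return code + [overall]

def decode(code):
    """Return (data, status): corrects single-bit, detects double-bit errors."""
    c = list(code)
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s4 = c[3] ^ c[4] ^ c[5] ^ c[6]
    syndrome = s1 + 2 * s2 + 4 * s4
    overall_ok = sum(c) % 2 == 0
    if syndrome and overall_ok:
        # Parity checks fail but overall parity is intact: two bits flipped.
        return None, "double-bit error detected"
    if syndrome:
        c[syndrome - 1] ^= 1     # single-bit error: syndrome names the bit
    return [c[2], c[4], c[5], c[6]], "ok"
```

A flipped bit anywhere in the codeword is silently corrected; two flipped bits are flagged so the system can fail safely instead of returning corrupt data.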

Drive support. Given their smaller size, blade servers have limited internal storage. Almost all the blade servers on the market allow for up to two 2.5-inch hard drives. While a server’s operating system (OS) can use these drives, they aren’t intended to store large amounts of data.

If a blade server requires access to additional storage, there are a few options available. One is to install storage modules within the server’s chassis. Storage modules, which are sometimes referred to as storage blades or expansion blades, can provide a blade server with additional storage. A storage module can usually accommodate six 2.5-inch SAS drives and typically includes its own storage controller. The disadvantages of using a storage module are that it consumes chassis space and the total amount of storage it provides is still limited.

Organizations that need to maximize chassis space for processing (or provide blade servers with more storage than can be achieved through storage modules) typically deploy external storage, such as network-attached storage or storage area network (SAN). Blade servers can accept Fibre Channel mezzanine cards, which can link a blade server to a SAN. In fact, blade servers can even boot from a SAN, rendering internal storage unnecessary.

If you do use internal storage or a storage module, verify that the server supports hot-swappable drives so that you can replace drives without taking the server offline. Even though hot-swappable drives are standard features among rackmount servers, many blade servers do not support hot-swappable drives.

Expansion slots. While traditional rackmount servers support the use of PCI Express (PCIe) and PCI eXtended (PCI-X) expansion cards, most blade servers cannot accommodate these devices. Instead, blade servers offer expansion slots that accommodate mezzanine cards, which are PCI based. Mezzanine card slots, which are sometimes referred to as fabrics, are referred to by letter, where the first slot is A, the second slot is B and so on.

We refer to mezzanine slots this way because blade server design has certain limits and requires consistent slot use. If in one server, you install a Fibre Channel card in slot A, for example, every other server in the chassis is affected by that decision. You could install a Fibre Channel card into slot A on your other servers or leave slot A empty, but you cannot mix and match. You cannot, for example, place a Fibre Channel card in slot A on one server and use slot A to accommodate an Ethernet card on another server. You can, however, put a Fibre Channel card in slot A and an Ethernet card in slot B — as long as you do the same on all other servers in the chassis (or, alternatively, leave all slots empty).
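The chassis-wide slot-consistency rule above can be expressed as a simple check. The function and data below are hypothetical, a sketch of the rule rather than any vendor's actual tooling:

```python
def validate_chassis(blades):
    """Check the mezzanine rule: a given slot letter must hold the same
    card type (or be empty) on every blade in the chassis.

    blades: list of dicts mapping slot letter -> card type or None."""
    slot_types = {}
    for i, blade in enumerate(blades):
        for slot, card in blade.items():
            if card is None:
                continue                       # empty slots are always fine
            if slot_types.setdefault(slot, card) != card:
                return False, f"blade {i}: slot {slot} conflicts"
    return True, "consistent"

blades = [
    {"A": "fibre-channel", "B": "ethernet"},
    {"A": "fibre-channel", "B": None},         # leaving B empty is allowed
    {"A": "ethernet", "B": "ethernet"},        # conflict: slot A differs
]
print(validate_chassis(blades))
```

The third blade violates the rule because its slot A holds an Ethernet card while the other blades use slot A for Fibre Channel, which is exactly the mix-and-match scenario the text rules out.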

External blade server characteristics

Power. Blade servers do not contain a power supply. Instead, the power supply is a modular unit that mounts in the chassis. Unlike a traditional power supply, a blade chassis power supply often requires multiple power cords, which connect to multiple 20 ampere utility feeds. This ensures that no single power feed is overloaded, and in some cases provides redundancy.

Another common design provides for multiple power supplies. For example, the HP BladeSystem C3000 enclosure supports the simultaneous use of up to eight different power supplies, which can power eight different blade servers.

Network connectivity. Blade servers almost always include Gigabit Ethernet network interface cards (NICs) that are integrated into the server. However, some servers, such as the Fujitsu Primergy BX960 S1, offer 10 Gigabit Ethernet NICs instead. Unlike a rackmount server, you cannot simply plug a network cable into a blade server’s NIC; the chassis design makes it impossible to do so. Instead, NIC ports are mapped to interface modules, which provide connectivity on the back of the chassis. The interesting thing about this design is that a server’s two NIC ports are almost always routed to different interface modules for the sake of redundancy. Additional NIC ports can be added through the use of mezzanine cards.

User interface ports. The interface ports for managing blade servers are almost always built into the server chassis. Each chassis typically contains a traditional built-in keyboard, video and mouse (KVM) switch, although connecting to blade servers through an IP-based KVM may also be an option. In addition, the chassis almost always contains a DVD drive that can be used for installing software to individual blade servers. Some blade servers, such as the HP ProLiant BL280c G6, contain an internal USB port and an SD card slot, which are intended for use with hardware dongles.

Controls and indicators. Individual blade servers tend to be very limited in terms of controls and indicators. For example, the Fujitsu Primergy BX960 S1 only offers an on-off switch and an ID button. This same server has LED indicators for power, system status, LAN connection, identification and CSS.

Often the blade chassis contains additional controls and indicators. For example, some HP chassis include a built-in LCD panel that allows the administrator to perform various configuration and diagnostic tasks, such as firmware updates. The precise number and purpose of each control or indicator will vary with each manufacturer and its blade chassis design.

Blade server management features

Given that blade servers tend to be used in high-density environments, management capabilities are central. Blade servers should offer diagnostic and management capabilities at both the hardware and the software level.

Hardware-based management features. Hardware-level monitoring capabilities exist so that administrators can monitor server health regardless of the OS running on the server. Intelligent Platform Management Interface (IPMI) is one of the most common hardware management standards and is used by the Dell PowerEdge M910 and the Super Micro SBA-7142G-T4.

IPMI uses a dedicated low-bandwidth network port to communicate a server’s status to IPMI-compliant management software. Because IPMI works at the hardware level, the server can communicate its status regardless of the applications that run on the server. In fact, because IPMI works independently of the main processor, it works even if a server isn’t turned on. The IPMI hardware can do its job as long as a server is connected to a power source.

Blade servers that support IPMI 2.0 almost always include a dedicated network port within the server’s chassis that can be used for IPMI-based management. Typically, a single IPMI port services all servers within a chassis. Unlike a rack server, each server doesn’t need its own management port.

Blade servers can get away with sharing an IPMI port because of the types of management that IPMI-compliant management software can perform. Such software (running on a PC) is used to monitor things like temperature, voltage and fan speed. Some server manufacturers even include IPMI sensors that are designed to detect someone opening the server’s case. As previously mentioned, blade servers do not have their own fans or power supplies. Cooling and power units are chassis-level components.
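As a sketch of how such monitoring data might be consumed, the snippet below parses sensor readings in the pipe-delimited style produced by tools like ipmitool. The sample lines are invented for illustration; real output varies by BMC and firmware:

```python
# Illustrative sensor output in an ipmitool-like pipe-delimited format.
# These lines are fabricated sample data, not captured from a real BMC.
sample = """\
CPU Temp         | 42.000    | degrees C | ok
System Fan 1     | 8600.000  | RPM       | ok
Voltage 12V      | 12.192    | Volts     | ok
CPU Temp 2       | 88.000    | degrees C | cr
"""

def parse_sensors(text):
    """Split each 'name | value | unit | status' line into a dict."""
    readings = []
    for line in text.strip().splitlines():
        name, value, unit, status = [field.strip() for field in line.split("|")]
        readings.append({"name": name, "value": float(value),
                         "unit": unit, "status": status})
    return readings

# Flag anything the BMC did not report as "ok".
alerts = [r for r in parse_sensors(sample) if r["status"] != "ok"]
for r in alerts:
    print(f"ALERT: {r['name']} = {r['value']} {r['unit']} ({r['status']})")
```

Because IPMI reports through a dedicated port at the hardware level, a script like this keeps working even when the monitored server's OS is down.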

Software-based management features. Although most servers offer hardware-level management capabilities, each server manufacturer also provides its own management software, sometimes at extra cost. Dell, for example, has the management application OpenManage, while HP provides a management console known as HP Systems Insight Manager (SIM). Hardware management tools tend to be diagnostic in nature, while software-based tools also provide configuration capabilities. You might, for example, use a software management tool to configure a server’s storage array. As a general rule, hardware management is fairly standardized.

Multiple vendors support IPMI and the baseboard management controller (BMC), another hardware-level management approach. Some servers, such as the Dell PowerEdge M910, support both. Management software, on the other hand, is vendor-specific. You can’t, for example, use HP SIM to manage a Dell server. But you can use a vendor’s management software to manage different server lines from that vendor. For example, Dell OpenManage works with Dell’s M series blade servers, but you can also use it to manage Dell rack servers such as the PowerEdge R715.

Because of the proliferation of management software, server management can get complicated in large data centers. As such, some organizations try to use servers from a single manufacturer to ease the management burden. In other cases, it might be possible to adopt a third-party management tool that can support heterogeneous hardware, though the gain in heterogeneity often comes at a cost of management granularity. It’s important to review each management option carefully and select a tool that provides the desired balance of support and detail.

ABOUT THE AUTHOR: Brien M. Posey has received Microsoft’s Most Valuable Professional award six times for his work with Windows Server, IIS, file systems/storage and Exchange Server. He has served as CIO for a nationwide chain of hospitals and healthcare facilities and was once a network administrator for Fort Knox.


This was last published in April 2011.

The vSAN stretched cluster type spreads HCI love for HA, DR

How would your hyper-converged infrastructure benefit from using stretched clusters?

VMware vSAN stretched clusters enable admins to spread hyper-converged infrastructures across two physical locations. Learn more about them and their benefits.

Robert Sheldon

Contributor – SearchSQLServer

A hyper-converged infrastructure based on VMware virtualization technologies uses VMware’s vSAN to provide software-defined storage to the HCI cluster. VMware supports several types of vSAN clusters, including the stretched cluster.

Stretched clusters let administrators implement an HCI that spans two physical locations. An IT team can use a stretched cluster as part of its disaster recovery strategy or to manage planned downtime to ensure the cluster remains available and no data is lost.

In this article, we dig into the stretched cluster concept to get a better sense of what it is and how it works. But first, let’s delve a little deeper into VMware vSAN and the different types of clusters VMware’s HCI platform supports.

The vSAN cluster

An HCI provides a tightly integrated environment for delivering virtualized compute and storage resources and, to a growing degree, virtualized network resources. It’s typically made up of x86 hardware that’s optimized to support specific workloads. HCIs are known for being easier to implement and administer than traditional systems, while reducing capital and operational expenditures, when used for appropriate workloads. Administrators can centrally manage the infrastructure as a single, unified platform.

Some HCIs, such as the Dell EMC VxRail, are built on VMware virtualization technologies, including vSAN and the vSphere hypervisor. VMware has embedded vSAN directly into the hypervisor, resulting in deep integration with the entire VMware software stack.

An HCI based on vSAN is made up of multiple server nodes that form an integrated cluster, with each node having its own DAS. The vSphere hypervisor is also installed on each node, making it possible for vSAN to aggregate the cluster’s DAS devices to create a single storage pool shared by all hosts in the cluster.

VMware supports three types of clusters. The first is the standard cluster, located in a single physical site with a minimum of three nodes and maximum of 64. VMware also supports a two-node cluster for smaller implementations, but it requires a witness host to serve as a tiebreaker if the connection is lost between the two nodes.

The third type of cluster VMware vSAN supports is the stretched cluster.

The vSAN stretched cluster

A stretched cluster spans two physically separate sites and, like a two-node cluster, requires a witness host to serve as a tiebreaker. The cluster must include at least two hosts, one for each site, but it will support as many as 30 hosts across the two sites.

When VMware first introduced the stretched cluster, vSAN required hosts to be evenly distributed across the two sites. As of version 6.6, vSAN supports asymmetrical configurations that allow one site to contain more hosts than the other. However, the two sites combined are still limited to 30 hosts.

Because the vSAN cluster is fully integrated into vSphere, it can be deployed and managed just like any other cluster. The cluster provides load balancing across sites and can offer a higher level of availability than a single site. Data is replicated between the sites to avoid a single point of failure. If one site goes offline, the vSphere HA (High Availability) utility launches the virtual machines (VMs) on the other site, with minimum downtime and no data loss.

A stretched cluster is made up of three fault domains: two data sites and one witness host. A fault domain is a term that originated in earlier vSAN versions to describe VM distribution zones that support cross-rack fault tolerance. If the VMs on one rack became unavailable, they could be made available on the other rack (fault domain).

A stretched cluster works much the same way, with each site in its own fault domain. One data site is designated as the preferred site (or preferred fault domain) and the other is designated as the secondary site. The preferred site is the one that remains active if communication is lost between the two sites. Storage on the secondary site is then considered to be down and the components absent.

The witness host is a dedicated ESXi host — physical server or virtual appliance — that resides at a third site. It stores only cluster-specific metadata and doesn’t participate in the HCI storage operations, nor does it store or run any VMs. Its sole purpose is to serve as a witness to the cluster, primarily acting as a tiebreaker when network connectivity between the two sites is lost.

During normal operations, both sites are active in a stretched cluster, with each maintaining a full copy of the VM data and the witness host maintaining VM object metadata specific to the two sites. In this way, if one site fails, the other can take over and continue operations, with little disruption to services. When the cluster is fully operational, the two sites and the witness host are in constant communication to ensure the cluster is fully operational and ready to switch over to a single site should disaster occur.
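The tiebreaking behavior described above can be summarized as a small decision function. This is a simplified sketch of the documented behavior, not VMware's actual quorum algorithm:

```python
def active_site(preferred_reaches_witness, secondary_reaches_witness,
                intersite_link_up):
    """Decide which data site keeps running VMs in a stretched cluster."""
    if intersite_link_up:
        return "both"           # normal operation: both sites stay active
    # The link between the data sites is down: the witness breaks the
    # tie, siding with the preferred site when it can still reach it.
    if preferred_reaches_witness:
        return "preferred"
    if secondary_reaches_witness:
        return "secondary"      # preferred site is fully isolated
    return "none"               # total partition: no quorum anywhere

# Classic split-brain scenario: inter-site link fails, both sites still
# reach the witness. The preferred site wins; the secondary's storage
# components are marked absent.
print(active_site(True, True, False))
```

This is why the witness lives at a third site: if it shared a location with either data site, a single site failure could take out two of the three fault domains at once and leave no majority.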

A VMware vSAN stretched cluster illustrated: a stretched cluster allows admins to spread an HCI across two physical locations for disaster recovery and other high availability purposes.

The HCI-VMware mix

Administrators can use VMware vCenter Server to deploy and manage a vSAN stretched cluster, including the witness host. With vCenter, they can carry out tasks such as changing a site designation from secondary to primary or configuring a different ESXi host as the witness host. Implementing and managing a stretched cluster is much like setting up a basic cluster, except you must have the necessary infrastructure in place to support two locations.

For organizations already committed to HCIs based on VMware technologies, the stretched cluster could prove a useful tool as part of their DR strategies or planned maintenance routines. For those not committed to VMware but considering HCI, the stretched cluster could provide the incentive to go the VMware route.

This was last published in May 2019


Microsoft and Oracle join forces to offer inter-cloud connectivity

Pair will provide direct connections between their clouds, enabling workloads to use services across Oracle and Azure public clouds

Cliff Saran

Managing Editor – TechTarget

06 Jun 2019 9:49

Oracle has partnered with Microsoft to offer interoperability across their respective cloud services. The companies say the agreement will enable customers to migrate and run the same enterprise workloads across both Microsoft Azure and Oracle Cloud.

Through the partnership, the pair said enterprises would be able to connect Azure services, such as Analytics and AI, to Oracle Cloud services, including Autonomous Database. By enabling customers to run one part of a workload within Azure and another part of the same workload within the Oracle Cloud, the partnership delivers a highly optimised, best-of-both-clouds experience, say Microsoft and Oracle.

Scott Guthrie, executive vice-president of Microsoft’s cloud and AI division, said: “As the cloud of choice for the enterprise, with over 95% of the Fortune 500 using Azure, we have always been, first and foremost, focused on helping our customers thrive on their digital transformation journeys.”

Don Johnson, executive vice-president, Oracle Cloud Infrastructure (OCI), said: “Oracle and Microsoft have served enterprise customer needs for decades. With this partnership, our joint customers can migrate their entire set of existing applications to the cloud without having to re-architect anything, preserving the large investments they have already made.”

Organisations that run Oracle and Microsoft systems said they would find the partnership beneficial.

Ken Braud, senior vice-president and CIO at Halliburton, said: “This alliance gives us the flexibility and ongoing support to continue leveraging our standard architectures, while allowing us to focus on generating business outcomes that maximise returns for our shareholders.”

The partnership provides multicloud flexibility for organisations to support new business opportunities. Sally Gilligan, chief information officer at Gap, said: “As we look to bring our omnichannel experience closer together and transform the technology platform that powers the Gap brands, the collaboration between Oracle and Microsoft will make it easier for us to scale and deliver capabilities across channels.”


Inside ‘Master134’: Ad networks’ ‘blind eye’ threatens enterprises

How should security vendors handle ad networks that are repeatedly tied to malvertising campaigns?

Online ad networks linked to the Master134 malvertising campaign and other malicious activity often evade serious fallout and continue to operate unabated.

Rob Wright

Associate Editorial Director – TechTarget – SearchSecurity

The online advertising networks implicated in the “Master134” malvertising campaign have denied any wrongdoing, but experts say their willingness to turn a blind eye to malicious activity on their platforms will likely further jeopardize enterprises.

In total, eight online ad firms — Adsterra, AdKernel, AdventureFeeds, EvoLeads, ExoClick, ExplorAds, Propeller Ads and Yeesshh — were connected to the Master134 campaign, and many of them presented similar explanations about their involvement with the malvertising campaign.

They insisted they didn’t know what was going on and, when informed of the malvertising activity, immediately intervened by suspending the publisher accounts of the malicious actors abusing their platforms. However, none of the ad networks were willing to provide names or account information of the offending clients, citing vague company privacy policies and government regulations that prevented them from doing so.

A cybersecurity vendor executive, who wished to remain anonymous, said it’s likely true that the ad networks were unaware of the Master134 campaign. However, the executive, who has worked extensively on malvertising and ad fraud-related campaigns, said that unawareness is by design.

“They don’t necessarily know, and they don’t want to know, where the traffic is coming from and where it’s going because their businesses are based on scale,” the executive said. “In order to survive, they have to ignore what’s going on. If they look at how the sausage is made, then they’re going to have issues.”

How the Master134 campaign worked.

The use of various domains, companies, redirection stages and intermediaries makes it difficult to pinpoint the source of malicious activity in malvertising schemes. Tamer Hassan, CTO and co-founder of White Ops, a security vendor focused on digital ad fraud, said complexity makes the ecosystem attractive to bad actors like malware authors and botnet operators, as well as ad networks that prefer to look the other way.
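The obscuring effect of those redirection stages can be illustrated with a small sketch: an analyst tracing a hijacked impression sees only one hop at a time, and the exploit-kit landing page sits several intermediaries removed from the network that sold the ad. The domains and chain below are invented for illustration, not taken from the actual campaign infrastructure.

```python
# Illustrative sketch: following a redirect chain one hop at a time.
# All domain names here are invented; real malvertising chains mix
# legitimate ad-network hops with attacker-controlled redirectors.

def trace_redirects(start, redirect_map, max_hops=10):
    """Return the ordered chain of URLs reached from `start`."""
    chain = [start]
    current = start
    while current in redirect_map and len(chain) <= max_hops:
        current = redirect_map[current]
        chain.append(current)
    return chain

# A toy chain: publisher -> first-stage ad network -> second-stage
# ad network -> attacker redirector -> exploit kit landing page.
hops = {
    "http://publisher.example/ad":    "http://adnet-one.example/click",
    "http://adnet-one.example/click": "http://adnet-two.example/offer",
    "http://adnet-two.example/offer": "http://redirector.example/go",
    "http://redirector.example/go":   "http://exploit-kit.example/land",
}

chain = trace_redirects("http://publisher.example/ad", hops)
print(len(chain) - 1)  # number of hops between publisher and payload
```

Each intermediary in such a chain only sees its immediate neighbors, which is exactly the plausible deniability the anonymous executive describes below.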


“It’s easy to make it look like you’re doing something for security if you’re an ad network,” Hassan said. “There aren’t a lot of ad companies that work directly with malware operators, but there are a lot of ad companies that don’t look at this stuff closely because they don’t want to lose money.”

“Malware Breakdown,” an anonymous security researcher who documented early Master134 activity in 2017, offered a similar view of the situation. The researcher told SearchSecurity that because Propeller Ads’ domain was being used to redirect users to a variety of different malvertising campaigns, they believed “the ad network was being reckless or turning a blind eye to abuse.”

In some cases, Hassan said, smaller ad networks and domains are created expressly for fraud and malvertising purposes. He cited the recent Methbot and 3ve campaigns, which used several fraudulent ad networks that appeared to be legitimate companies in order to conduct business with other networks, publishers and advertisers.

“The ad networks were real, incorporated companies,” he said, “but they were purpose-built for fraud.”

Even AdKernel acknowledges the onion-like ecosystem is full of bad publishers and advertisers.

“In ad tech, the situation is exacerbated because there are many collusion players working together,” said Judy Shapiro, chief strategy advisor for AdKernel, citing bad publishers and advertisers. “Even ad networks don’t want to see impressions go down a lot because they, too, are also paid on a [cost per impression] basis by advertisers.”

There is little indication, however, that these online ad tech companies have changed how they do business.

Lessons learned?

Following the publication of the Master134 report, Check Point researchers observed some changes in activity.

Lotem Finkelsteen, Check Point Research’s threat intelligence analysis team leader and one of the contributors to the Master134 report, said there appeared to be less hijacked traffic going to the exploit kit domains, which suggested the ad networks in the second redirection stage — ExoClick, AdventureFeeds, EvoLeads, ExplorAds and Yeesshh — had either been removed from the campaign by the Master134 threat actors or had voluntarily detached themselves (Yeesshh and ExplorAds closed down the domains used in the campaign sometime in December).

But Adsterra is another story. More than six months after the report was published, Finkelsteen said, there’s been no indication the company has changed its behavior.

Meanwhile, the Master134 campaign changed somewhat in the aftermath of Check Point’s report. The threat actors behind the IP address changed the redirection paths and ran traffic through other ad networks, Finkelsteen said.

Aviran Hazum, mobile threat intelligence team leader at Check Point Research, noted on Twitter in September that the campaign had a “new(ish) URL pattern” that moved hijacked traffic through suspicious redirection domains and ad networks like PopCash, a Romanian pop-under ad network that was blocked by Malwarebytes for ties to malicious activity.

AdKernel said it learned a lesson from the Master134 campaign and pledged to do more to remove bad actors from its network. However, a review of several of the domains that bear the “powered by AdKernel” moniker suggests the company hasn’t successfully steered away from suspicious ad networks or publishers.

For example, one ad network customer named AdTriage has a self-service portal domain that looks exactly like the portals on the junnify and bikinisgroup sites that were also “powered” by AdKernel. AdTriage, however, doesn’t appear to be a real company; its site is filled with “Lorem Ipsum” dummy text. On the “About Us” page, the “Meet our team” section has nothing except text that says “Pics Here.” (WhoIs results show the domain was created in 2011, and captures of the site from that year on Internet Archive’s Wayback Machine reveal the same dummy text.)

AdTriage’s site is filled with dummy text.

Escaping consequences

The recent history of malvertising indicates ad companies that issue denials are quite capable of moving on to the next client campaign, only to issue similar denials and reassurances for future incidents with little to no indication that their security practices have improved.

Check Point’s Master134 report, as well as earlier malvertising research from FireEye and Malwarebytes, doesn’t appear to have had much, if any, effect on the reputations of the five companies. They all appear to be in good standing with the online ad industry and have seemingly avoided any long-term consequences from being associated with malicious activity.

ExoClick and Adsterra, for example, have remained visible through sponsorships and exhibitions at industry events, including The European Summit 2018 and Mobile World Congress 2019.

Online ad companies are often given the benefit of the doubt in malvertising cases such as Master134 for two primary reasons: Ad networks are legitimate companies, not threat groups, and digital ads are easy for threat actors to take advantage of without the help or complicit knowledge of those networks.

But Check Point Research’s team has little doubt about the involvement of the ad networks in Master134; either they turned a blind eye to the obvious signs of malicious activity, Finkelsteen said, or openly embraced them to generate revenue.

Other security vendors have also publicized malvertising campaigns that redirect traffic to known exploit kits. FireEye reported in September that a malvertising campaign used the Fallout exploit kit to spread the GandCrab ransomware to victims primarily in Southeast Asia. According to the report, the malicious ads profiled users’ browsers and operating systems.

“Depending on browser/OS profiles and the location of the user, the malvertisement either delivers the exploit kit or tries to reroute the user to other social engineering campaigns,” the report stated.
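The profiling the FireEye report describes is straightforward to sketch: the malicious ad inspects the visitor's User-Agent string and location, then routes vulnerable-looking targets to the exploit kit and everyone else elsewhere. The decision logic and thresholds below are invented for illustration, not taken from the actual Fallout campaign code.

```python
# Hypothetical sketch of malvertising client profiling. The routing
# rules are invented; only the general browser/OS/location profiling
# behavior comes from the FireEye report described above.

def route_visitor(user_agent, country):
    """Decide where to send a visitor based on a crude client profile."""
    ua = user_agent.lower()
    # Older Windows/IE visitors in targeted regions look exploitable.
    if country in {"ID", "MY", "PH"} and "windows" in ua and "msie" in ua:
        return "exploit-kit"
    # Mobile visitors get rerouted to social engineering pages instead.
    if "android" in ua or "iphone" in ua:
        return "social-engineering"
    # Everyone else sees an ordinary ad, keeping the campaign quiet.
    return "benign-ad"

print(route_visitor("Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 6.1)", "ID"))
```

Serving benign ads to non-targets is part of what makes these campaigns hard for ad networks and scanners to spot.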

It’s difficult to determine exactly how much of the online ad ecosystem has been compromised by malicious or unscrupulous actors, said Adam Kujawa, director of malware intelligence at Malwarebytes.

“Advertising is the reason the internet exists as it does today,” he said. “It’s always going to be very close to the heart of all things that happen on the internet. The reason we see so much adware is because these companies kind of … live in a gray area.”

The gray area can be even murkier on the technical side. Despite being key components in the Master134 campaign, the junnify and bikinisgroup URLs raise only a few alarms on public URL and file scanning services.

VirusTotal, for example, shows that all 67 malware engines used in the scans rate both domains as “clean,” though the Junnify domain did receive a -28 community score (VirusTotal community scores start at zero and represent the net number of registered users voting a file or URL safe or unsafe; a negative score means unsafe votes outnumber safe ones).
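The community score arithmetic is simply the net of safe and unsafe votes, so a -28 score means users voting the domain unsafe outnumber those voting it safe by 28. A minimal sketch, with invented vote counts that only illustrate one way to reach the article's -28 figure:

```python
# Community-score arithmetic: net of safe and unsafe user votes.
# The individual vote counts below are invented; only the net score
# of -28 comes from the article.

def community_score(safe_votes, unsafe_votes):
    """Net vote tally: positive means mostly 'safe' votes."""
    return safe_votes - unsafe_votes

print(community_score(2, 30))  # one possible split yielding -28
```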

Malvertising campaigns like Master134 that use multiple traffic flows and advertising platforms could become increasingly common, according to the Check Point report.

“Due to the often complex nature of malware campaigns, and the lack of advanced technology to vet and prevent malicious adverts from being uploaded onto ad-network bidding platforms,” the researchers wrote, “it is likely we will see more malvertising continue to be a popular way for cybercriminals to gain illegal profits for many years to come.”


This was last published in April 2019