‘Master134’ malvertising campaign raises questions for online ad firms

Have you fallen prey to Master134 and what did you do? How should the infosec industry handle persistent malvertising threats?

Malvertising and adware schemes are a growing concern for enterprises. Our deep investigation into one campaign reveals just how complicated threats can be to stop.

Rob Wright

Associate Editorial Director – TechTarget – SearchSecurity

Why were several major online advertising firms selling traffic from compromised WordPress sites to threat actors operating some of the most dangerous exploit kits around?

That was the question at the heart of a 2018 report from Check Point Research detailing the inner workings of an extensive malvertising campaign it calls “Master134,” which implicated several online advertising companies. According to the report, titled “A Malvertising Campaign of Secrets and Lies,” a threat actor or group had compromised more than 10,000 vulnerable WordPress sites through a remote code execution vulnerability that existed on an older version of the content management system.

Malvertising is a common, persistent problem for the information security industry, thanks to the pervasiveness of digital ads on the internet. Threat actors have become adept at exploiting vulnerable technology and lax oversight in the online ad ecosystem, which allows them to use ads as a delivery mechanism for malware. As a result, many security experts recommend using ad blockers to protect endpoints from malvertising threats.

But Master134 was not a typical malvertising campaign.

A tangled web of redirects

Rather than using banner ads as a vector for malware infection, threat actors relied on a different component of the digital advertising ecosystem: web traffic redirection. In addition to serving digital ads, many ad networks buy and sell traffic, which is then redirected and used to generate impressions on publishers' ads. These traffic purchases are made through what are known as real-time bidding (RTB) platforms, and the traffic is ostensibly marketed as coming from legitimate or "real" users, though experts say a number of nefarious techniques are used to artificially boost impressions and commit ad fraud. These techniques include the use of bots, traffic hijacking and malicious redirection code.

Threat actors never cease to look for new techniques to spread their attack campaigns, and do not hesitate to utilize legitimate means to do so.

Check Point Research’s report, ‘A Malvertising Campaign of Secrets and Lies’

According to Check Point Research, part of Check Point Software Technologies, Master134 was an unusually complex operation involving multiple ad networks, RTB platforms and traffic redirection stages. Instead of routing the hijacked WordPress traffic to malicious ads, the threat actors redirected the traffic intended for those sites to a remote server located in Ukraine at an IP address beginning with 134, hence the name Master134. (Check Point said a second, smaller source of traffic to the Master134 server was a potentially unwanted program, or PUP, that redirected traffic intended for victims' homepages.)

Then, the Master134 campaign redirected the WordPress traffic to domains owned by Adsterra, a Cyprus-based online ad network. Acting as a legitimate publisher, Master134 sold the WordPress traffic through Adsterra's network to other online ad companies, namely ExoClick, EvoLeads, AdventureFeeds and AdKernel.

From there, the redirected WordPress traffic was resold a second time to threat actors operating some of the most well-known malicious sites and campaigns in recent memory, including HookAds, Seamless and Fobos. The traffic was redirected a third and final time to “some of the exploit kit land’s biggest players,” according to Check Point’s report, including the RIG and Magnitude EKs.

The researchers further noted that all of the Master134 traffic ended up in the hands of threat actors and was never purchased by legitimate advertisers. That, according to Check Point, indicated “an extensive collaboration between several malicious parties” and a “manipulation of the entire online advertising supply chain,” rather than a series of coincidences.

The redirection/infection chain of the Master134 campaign.

Why would threat actors and ad networks engage in such a complex scheme? Lotem Finkelsteen, Check Point’s threat intelligence analysis team leader and one of the contributors to the Master134 report, said the malvertising campaign was a mutually beneficial arrangement. The ad companies generate revenue off the hijacked WordPress traffic by reselling it. The Master134 threat actors, knowing the ad companies have little to no incentive to inspect the traffic, use the ad network platforms as a distribution system to match potential victims with different exploit kits and malicious domains.

“In short, it seems threat actors seeking traffic for their campaigns simply buy ad space from Master134 via several ad-networks and, in turn, Master134 indirectly sells traffic/victims to these campaigns via malvertising,” Check Point researchers wrote.

Check Point’s report was also a damning indictment of the online ad industry. “Indeed, threat actors never cease to look for new techniques to spread their attack campaigns, and do not hesitate to utilize legitimate means to do so,” the report stated. “However, when legitimate online advertising companies are found at the heart of a scheme, connecting threat actors and enabling the distribution of malicious content worldwide, we can’t help but wonder — is the online advertising industry responsible for the public’s safety?”

Other security vendors have noted that malvertising and adware schemes are evolving and becoming increasingly concerning for enterprises. Malwarebytes’ “Cybercrime Tactics and Techniques” report for Q3 2018, for example, noted that adware detections increased 15% for businesses while dropping 19% for consumers. In addition, the report noted a rise in new techniques such as adware masquerading as legitimate applications and browser extensions for ad blockers and privacy tools, among other things.

The malvertising Catch-22

The situation has left both online ad networks and security vendors in a never-ending game of whack-a-mole. Ad companies frequently find themselves scrutinized by security vendors such as Check Point in reports on malvertising campaigns. The ad companies typically deny any knowledge or direct involvement in the malicious activity while removing the offending advertisements and publishers from their networks. However, many of those same ad networks inevitably end up in later vendor reports with different threat actors and malware, issuing familiar denials and assurances.

Meanwhile, security vendors are left in a bind: If they ban the ad networks’ servers and domains in their antimalware or network security products, they effectively block all ads coming from repeat offenders, not just the malicious ones, which hurts legitimate publishers as well as the entire digital advertising ecosystem. But if vendors don’t institute such bans, they’re left smacking down each new campaign and issuing sternly worded criticisms to the ad networks.

That familiar cycle was on display with Master134; following Check Point's publication of the report on July 30, three of the online ad companies — Adsterra, ExoClick and AdKernel — pushed back on the Check Point report and adamantly denied they were involved in the Master134 scheme (EvoLeads and AdventureFeeds did not comment publicly on the Master134 report). The companies described themselves as leading online advertising and traffic generation companies and denied any direct involvement in illegitimate or malicious activity.

How the Master134 campaign worked.

Check Point revised the report on August 1 and removed all references to one of the companies, New York-based AdKernel LLC, which had argued the report contained false information. Check Point’s original report incorrectly attributed two key redirection domains — xml.bikinisgroup.com and xml.junnify.com — to the online ad company. As a result, several media outlets, including SearchSecurity, revised or updated their articles on Master134 to clarify or completely remove references to AdKernel.

But questions about the Master134 campaign remained. Who was behind the bikinisgroup and junnify domains? What was AdKernel’s role in the matter? And most importantly: How were threat actors able to coordinate substantial amounts of hijacked WordPress traffic through several different networks and layers of the online ad ecosystem and ensure that it always ended up on a select group of exploit kit sites?

A seven-month investigation into the campaign revealed patterns of suspicious activity and questionable conduct among several ad networks, including AdKernel. SearchSecurity also found information that implicates other online advertising companies, demonstrating how persistent and pervasive malvertising threats are in the internet ecosystem.

This was last published in April 2019

2019’s top 5 free enterprise network intrusion detection tools

What open source intrusion detection system do you prefer, and why?

Snort is one of the industry’s top network intrusion detection tools, but plenty of other open source alternatives are available. Discover new and old favorites for packet sniffing and more.

Peter Loshin

Site Editor – SearchSecurity

Open source and information security applications go together like peanut butter and jelly.

The transparency provided by open source in infosec applications — what they monitor and how they work — is especially important for packet sniffer and intrusion detection systems (IDSes) that monitor network traffic. It may also help explain the long-running dominance of Snort, the champion of open source enterprise network intrusion detection since 1998.

The transparency enabled by an open source license means anyone can examine the source code to see the detection methods used by packet sniffers to monitor and filter network traffic, from the OS level up to the application layer.

One problem with open source projects is that when leadership changes — or when ownership of a project moves from individuals to corporations — the projects don’t always continue to be fully free to use, or support for the open source version of the project may take a back seat to a commercial version.

For example, consider Snort, first released as an open source project in 1998. Creator Martin Roesch started Sourcefire in 2001 in a move to monetize the popular IDS. But, in the years running up to Cisco’s 2013 purchase of Sourcefire, the concern was that the company might allow the pursuit of profit to undermine development and support of the open source project. For example, Sourcefire sold a fully featured commercial version of Snort, complete with vendor support and immediate updates, a practice that has bedeviled other open source projects, as users often find the commercial entity gives the open source project short shrift to maximize profits.

Cisco has taken a different approach to the project, however. While the networking giant incorporates Snort technology in its Next-Generation Intrusion Prevention System (IPS) and Next-Generation Firewall products, Cisco “embraces the open source model and is committed to the GPL [GNU General Public License].” Cisco releases back to the open source project any feature or fixes to Snort technology incorporated in its commercial products.

What is an IDS and why is it important?

IDSes monitor network traffic and issue alerts when potentially malicious network traffic is detected. An IDS is designed to be a packet sniffer, a system able to monitor all packets sent on the organization’s network, and IDSes use a variety of techniques to identify traffic that may be part of an attack. IDSes identify suspicious network traffic using the following detection methods:

  • Network traffic signatures identify malicious traffic based on the protocols used, the source of the packets, the destination of the packet or some combination of these and other factors.
  • Blocklists of known malicious IP addresses enable the IDS to flag packets with an IP address identified as a potential threat.
  • Anomalous network behavior patterns, similar to signatures, use information from threat intelligence feeds or authentication systems to identify network traffic that may be part of an attack.
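
To make the first two detection methods concrete, here is a minimal, hypothetical matcher. The packet representation, signature format and addresses are all invented for illustration; real IDS engines use far richer rule languages and actual packet parsing.

```python
# Minimal, hypothetical sketch of signature- and blocklist-based detection.
# Packets are modeled as plain dicts for the sake of the example.

BLOCKLIST = {"203.0.113.7"}  # known-bad source IPs (documentation address range)

SIGNATURES = [
    # Match on protocol, destination port and a payload substring.
    {"name": "fake-exploit-probe", "proto": "tcp", "dst_port": 80,
     "payload": b"/etc/passwd"},
]

def inspect(packet: dict) -> list:
    """Return the list of alert names triggered by one packet."""
    alerts = []
    if packet.get("src_ip") in BLOCKLIST:
        alerts.append("blocklisted-source")
    for sig in SIGNATURES:
        if (packet.get("proto") == sig["proto"]
                and packet.get("dst_port") == sig["dst_port"]
                and sig["payload"] in packet.get("payload", b"")):
            alerts.append(sig["name"])
    return alerts
```

A packet from a blocklisted source carrying a matching payload would trigger both alerts; clean traffic triggers none.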

IDSes can be host- or network-based. In a host-based IDS, software sensors are installed on endpoint hosts in order to monitor all inbound and outbound traffic, while, in a network-based IDS, the functionality is deployed in one or more servers that have connectivity to as many of the organization’s internal networks as possible.

The intrusion detection function is an important part of a defense-in-depth strategy for network security that combines active listening, strong authentication and authorization systems, perimeter defenses and integration of security systems.

Snort

Snort, long the leader among enterprise network intrusion detection and intrusion prevention tools, is well-positioned to continue its reign with continued development from the open source community and the ongoing support of its corporate parent, Cisco.

In general terms, Snort offers three fundamental functions:

  1. Snort can be used as a packet sniffer, like tcpdump or Wireshark, by setting the host's network interface into promiscuous mode to monitor all network traffic on the local network interface and then writing traffic to the console.
  2. Snort can log packets by writing the desired network traffic to a disk file.
  3. Snort’s most important function is to operate as a full-featured network intrusion prevention system, by applying rules to the network traffic being monitored and issuing alerts when specific types of questionable activity are detected on the network.

Security Onion

Unlike Snort, which is a self-contained application, Security Onion is a complete Linux distribution that packages a toolbox of open source applications — including Snort — that are useful for network monitoring and intrusion detection, as well as other security functions, like log management. In addition to Snort, Security Onion includes other top intrusion detection tools, like Suricata, Zeek IDS and Wazuh.

Infosec professionals can install Security Onion on a desktop to turn it into a network security monitoring workstation or install the Security Onion distribution on endpoint systems and virtual environments to turn them into security sensors for distributed network intrusion monitors.

Wazuh

The Wazuh project offers enterprises a security monitoring application capable of doing threat detection, integrity monitoring, incident response and compliance. While it may be seen as a newcomer, the Wazuh project was forked from the venerable OSSEC project in 2015, and it has replaced OSSEC in many cases — for example, in the Security Onion distribution.

Running as a host-based IDS, Wazuh uses both signatures and anomaly detection to identify network intrusions, as well as software misuse. It also can be used to collect, analyze and correlate network traffic data for use in compliance management and for incident response. Wazuh can be deployed in on-premises networks, as well as in cloud or hybrid computing environments.

Suricata

First released in beta in 2009, Suricata has a respectable history as a Snort alternative. The platform shares architectural similarities with Snort. For example, it relies on signatures like Snort, and in many cases, it can even use the VRT Snort rules that Snort itself uses.

Like Snort, Suricata features IDS and IPS functionality, as well as support for monitoring high volumes of network traffic, automatic protocol detection, a scripting language and support for industry standard output formats. In addition, Suricata provides an engine for enterprise network security monitoring ecosystems.

Zeek IDS

The name may be unfamiliar, but the Zeek network security monitor is another mature open source IDS. The network analysis framework formerly known as Bro was renamed Zeek in 2018 to avoid negative associations with the old name, but the project is still as influential as ever.

More than a simple IDS/IPS, Zeek is a network analysis framework. While the primary focus is on network security monitoring, Zeek also offers more general network traffic analysis functionality.

Specifically, Zeek incorporates many protocol analyzers and is capable of tracking application layer state, which makes it ideal for flagging malicious or other harmful network traffic. It also offers a scripting language to enable greater flexibility and more powerful security.

This was last published in April 2019

How to improve application security testing when it falls short

What kind of testing have you done to improve your application security?

Application security testing is a critical component of enterprise security. Find out what steps you can take to make sure your testing procedures fit the bill.

Kevin Beaver

Principle Logic, LLC – SearchSecurity

Those of us working in security like to think our efforts are all we need to find vulnerabilities, contain threats and minimize business risks.

I had this mindset early on in my security career. The thought was: Go through the motions; do x, y and z; and that will serve as a solid security foundation. I quickly learned the world doesn’t work that way; action doesn’t necessarily translate into results.

Certain efforts contribute to a security program in positive ways, while others burn through time, money and effort with no return. Yet, as it relates to application security, all is not lost. You can take steps as part of your program that can yield near-immediate payoffs, boost your security efforts and minimize your business risks.

It’s easy to look at application security testing as a science — a binary set of methodologies, tests and tools that can deliver what you need when executed on a periodic basis. The problem is that this view doesn’t hold up in practice.

Without going into all the details required to run a strong application security program, let’s look at some of the common shortcomings of application security testing and discuss what you should and shouldn’t do as you move forward and improve. The following issues rank among the biggest application security challenges.

Application security is often lumped into network security. This means application security testing is often part of more general vulnerability and penetration testing. As a result, application security doesn’t get the detailed attention it deserves.

Simply running vulnerability scans with traditional tools isn’t going to get you where you need to be. Organizations need to be running dedicated web vulnerability scanners, like WebInspect and Netsparker; proxy tools, like Burp Suite and the OWASP Zed Attack Proxy; and web browser plugins. These will enable you to perform the detailed testing necessary to uncover critical web vulnerabilities that would otherwise be overlooked.

This issue is easy to resolve by getting all the right people involved and ensuring your testing efforts are properly scoped.

Web applications aside, mobile apps are often overlooked. I’m not sure why mobile app security is sometimes ignored. Mobile apps have been around for years and often serve as a core component of a business’s online presence.

Faulty assumptions about mobile app security abound, however, among them the belief that mobile apps offer only a limited attack surface because of their finite functionality, or that the apps themselves are secure because they have been previously vetted by developers or app stores. This perspective is shortsighted, to say the least, and it can come back to haunt developers, security teams and businesses as a whole.

Abandoning web testing because sites and applications are hosted by a third party. This is similar to mobile apps not being properly vetted. If you’re not doing the testing, somebody needs to — and it had better be the company doing the hosting or management, because I can assure you, no one else is — other than the criminal hackers continually trying to find flaws in your environment. The bad guys are probably not going to tell you about what they’ve uncovered until they have you backed into a corner, if ever.

Don’t let bystander apathy drive your application security testing. Be accountable or hold someone else accountable and review the work.

Companies that decline to perform authenticated application testing. It may be difficult to test every possible user role, but you really need to examine all the aspects of your application eventually.

In the application security testing I conduct, I often see multiple user roles with no critical flaws. But when I test one or two more roles, big vulnerabilities like SQL injection surface. An oversight like this — simply because you didn’t have the time or the budget to test everything — will likely prove indefensible. You need to think about how you’re going to respond when the going gets rough with an incident or breach. Better yet, think about how you’re going to prevent an oversight from facilitating application risks in the first place.

If you want to find and eliminate the blind spots in your application security testing, you must do the following:

  • Get the right people involved, including developers and quality assurance.
  • Develop standards and policies governing application security.
  • Perform your testing on a periodic, consistent basis.
  • Keep management in the know and on your side.

A wise person once said, “Is this as good as you’re going to get, or are you going to get any better?” Look at your application security testing program through this lens. Bring in an unbiased outsider if you need to.

You’re probably working in the security field because it has great payoffs — both tangible and intangible. Things change daily, and there’s always something new to discover and learn. Whether you work for an employer or you’re out on your own, if you’re going to get better and see positive, long-term results with application security, you have to be willing to see what you’re doing with a critical eye and assume there’s room for improvement. Odds are, there is.

This was last published in April 2019

How infrastructure as code tools improve visibility

Do you think infrastructure as code provides enough visibility? Why or why not?

Visibility into cloud infrastructures and applications is important for data security. Learn how to maintain that visibility while using infrastructure as code tools.

Michael Cobb

CISSP-ISSAP – SearchSecurity

When it comes to understanding how all the elements of a computer network connect and interact, it’s certainly true that a picture — or in this case, a network diagram — is worth a thousand words.

A visual representation of a network makes it a lot easier to understand not only the physical topology of the network (its routers, devices, hubs, firewalls and so on), but also the logical topology of the VPNs, subnets and routing protocols that control how traffic flows through it.

Maintaining visibility across infrastructures and applications is vital to ensure data and resources are correctly monitored and secured. However, research conducted by Dimensional Research and sponsored by Virtual Instruments showed that most enterprises lack the tools necessary to provide complete visibility for triage or daily management. This is a real concern, as poor infrastructure visibility can lead to a loss of control over the network and can enable attackers to remain hidden.

Infrastructure as code, the management of an IT infrastructure with machine-readable scripts or definition files, is one way to mitigate the security risks associated with human error while enabling the rapid creation of stable and consistent but complex environments. However, it’s vital for you to ensure that the resulting network infrastructures are indeed correctly connected and protected and do not drift from the intended configuration.

Infrastructure as code tools

Infrastructure as code tools, such as Cloudcraft and Lucidchart, can automatically create AWS architecture diagrams showing the live health and status of each component, as well as its current configuration and cost. The fact that the physical and logical topology of the network are created directly from the operational AWS configuration, and not what a network engineer thinks the infrastructure as code scripts have created, means it is a true representation of the network, which can be reviewed and audited.

There are similar tools for engineers using Microsoft Azure, such as Service Map and Cloudockit.

Once a network generated using infrastructure as code tools has been audited and its configuration has been secured, it’s important to monitor it for any configuration changes. Unmanaged configuration changes can occur when engineers or developers make direct changes to network resources or their properties in an out-of-band fix without updating the infrastructure as code template or script. The correct process is to make all the changes by updating the infrastructure as code template to ensure all the current and future environments are configured in exactly the same way.

AWS offers a drift detection feature that can detect out-of-band changes to an entire environment or to a particular resource so it can be brought back into compliance. Amazon Virtual Private Cloud Flow Logs is another feature that can be used to ensure an AWS environment is correctly and securely configured.
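
Conceptually, drift detection boils down to comparing the configuration declared in the template against the live configuration of each resource. A toy sketch of that comparison follows; the resource properties are invented for the example, and real tools such as AWS CloudFormation drift detection do this per resource type with far more nuance.

```python
# Toy illustration of configuration drift detection: compare the properties
# declared in an infrastructure-as-code template against the live state.

def detect_drift(declared: dict, live: dict) -> dict:
    """Return {property: (declared_value, live_value)} for every mismatch."""
    drift = {}
    for key, want in declared.items():
        have = live.get(key)
        if have != want:
            drift[key] = (want, have)
    return drift
```

An out-of-band change to a resource property shows up as a mismatch, which is the cue to bring the environment back into compliance with the template.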

This tool captures information about the IP traffic going to and from network interfaces, which can be used for troubleshooting and as a security tool to provide visibility into network traffic to detect anomalous activities such as rejected connection requests or unusual levels of data transfer. Microsoft’s Azure Stack and tools such as AuditWolf provide similar functionality to monitor Azure cloud resources.

Security fundamentals don’t change when resources and data are moved to the cloud, but visibility into the network in which they exist does. Any organization with a limited understanding of how its cloud environment is actually connected and secured, or that has poor levels of monitoring, will leave its data vulnerable to attack.

The tools and controls exist to ensure network engineers and developers can enjoy the benefits of infrastructure as code without compromising security. Like all security controls, though, you need to understand them and use them on a daily basis for them to be effective.

This was last published in April 2019

Finding and Buying Refurbished Enterprise-Class Hard Drives Online: An Honest Guide

Post courtesy of:

The Tekmart Sales Team.

A caveated guide and our opinion:

The Used Hard Drive Guide

Hard drives are the lifeblood of your business, whether in your computers and laptops or within your network infrastructure through servers, SANs, RAID arrays and more.

No matter how well-designed or sturdy a hard drive may be, all hard drives eventually fail. Sometimes a drive will show symptoms of impending failure, allowing users time to back up their data and search for a replacement.

Signs of Hard Drive Failure Include:

  • Sluggish functions
  • Read/write errors
  • Abnormal heat output
  • Whirring, clicking, or other sounds

Other times, hard drives fail without warning, and that total failure can result in the loss of all data on that particular drive. The data recovery process can be expensive and time-consuming, can result in lost business, and ultimately may not succeed in recovering hundreds of rands’ worth of digital media or thousands of rands’ worth of customer records, financial records, processes and training documents.

Considerations When Buying a Used Hard Drive

If your business is using legacy equipment, replacement parts may have been declared end of life (EOL) by the original manufacturer, such as EMC, IBM, Dell, EqualLogic or Sun. When this happens, if you do not have any spares on hand, the used/refurbished market is your best bet for finding a replacement.

If you do not have any contacts in the used market, you may be tempted to turn to eBay. Many reputable used companies sell on eBay, but many parts listed on eBay are sold by liquidation companies who do not have the means to test the equipment they acquire, and possess little knowledge about what they’re listing outside of information presented on the label itself.

The dangers of buying hard drives on eBay include:

1. The item may not function.
2. The item may be listed incorrectly.
3. The hard drive may not have been wiped.
4. The seller may be overseas, or may care little about returns or troubleshooting.
5. Stock may be limited, so the seller may not be able to replace the equipment.
6. You risk further downtime from slow shipping or incorrect/faulty products.

How to Buy Used Enterprise Equipment Online

If you’re going to buy used hard drives, or replacements for other failed IT equipment, it’s pragmatic to buy from a professional, reputable used IT equipment company. Not only can such a company test its equipment and run a quality control program, it will also have a DOA (dead on arrival) return policy and offer solid customer service.

1. Do a Google (or other search engine) search using the part numbers on your failed hard drive. Hint: Using the manufacturer part number may provide the most accurate search results.
2. Look for professional retail websites that offer secure online purchasing (look for HTTPS in the URL, a shield, or badges from Trustwave, Verisign, etc.)
3. Make sure the product listing has an “Add to Cart” or “Buy It Now” button – not just request a quote!
4. If you’re in a pinch, look for sites that offer same-day or overnight shipping. Be sure to read their shipping and return policies.
5. Look for sites with reviews on the used hard drive you will be buying.
6. Avoid sites that offer “instant quotes” or want you to call for pricing – these can take days waiting for responses and force you to compare prices and options from multiple companies. You can also end up on unwanted mailing lists from data mining.

Conclusion on Buying Used Enterprise Hard Drives Online

Buying one or more previously used hard drives can be a quick and inexpensive way to bring your system back to an operational state. If you’re buying from a trustworthy, knowledgeable business, purchasing online can be fast, rewarding and cost-efficient.

Restarting the Navisphere Management Server on EMC CLARiiON CX, CX3 and CX4 Arrays: A How-to

Post courtesy of:

The Tekmart Support Team.

It may be necessary to restart the Navisphere management server on an EMC CLARiiON CX, CX3 or CX4 array if any of the following problems is present:

  • A Fatal Event icon (a red letter “F” in a circle) is displayed for some physical element of the array, but Navisphere CLI reports no faults.
  • A host displays a “U” icon even after the host is rebooted.
  • The Navisphere User Interface (UI) displays faults that Navisphere CLI does not show, or that differ from what Navisphere CLI reports.
  • An unmanaged Storage Processor (SP) still has owned LUNs.
  • The Navisphere UI hangs or freezes.
  • The Navisphere UI displays faults, but clicking the faults option shows the array is operating normally.
  • There is a fault on the primary array, but all indications show that the array is operating normally.
  • The Management Servers could not be contacted.
  • Clicking the Fault icon returns an “array is operating normally” message.
  • A CX series array does not recognize a new DAE from Navisphere Manager.
  • A fault appears after replacing a Standby Power Supply.

Note: The procedure must be performed on both Storage Processors in order to be effective.

  1. Open a new browser window.
  2. In the address bar, type http://xxx.xxx.xxx.xxx/setup, where xxx.xxx.xxx.xxx is the IP address of the Storage Processor (SP).
  3. When the screen has loaded, type in the Username and Password used to access Navisphere User Interface (UI).
  4. Once logged in, click the “Restart Management Server” button.
  5. Once the page has loaded, click “Yes”, and then click “Submit.”

Determining EMC Hard Drive Part Numbers and Compatibility – a simple guide

Post courtesy of:

The Tekmart Support Team.

As your EMC CLARiiON, VNX, and AX series grow older, sourcing the exact part number replacements for hard drives can get harder and harder. This guide aims to educate you on how to determine the part number and see compatible part numbers for your system.

Determining EMC Hard Drive Part Numbers

There is a good chance that many part numbers are listed on a single drive pulled from an EMC array. The generic EMC model number (e.g., CX-SA07-010 for a 1TB SATA hard drive) does not appear on the drive. The disk part number (PN) appears on a label on the front of the disk carrier; it is a nine-digit Top Level Assembly (TLA) part number, such as PN 005123456. Several TLA part numbers can fall under the same EMC model number.

Determining the EMC Hard Drive TLA Part Number

Example: Your hard drive has a TLA Part number reading 005048797. Your replacement has a TLA part number 005049070. These are both the same EMC hard drive model number CX-SA07-010 and are hot-swappable.

Finding TLA Part Number in Navisphere

Follow these steps to find the TLA part number for a drive in a CLARiiON array:

  1. Open Navisphere by typing the storage processor’s IP address into a web browser.
  2. Open the array with the fault. This is usually indicated by a red “F.”
  3. Open Physical.
  4. Open the Bus and Enclosure with the fault.
  5. Open Disks.
  6. Right-click the disk above or below the disk with the fault and select Properties. The TLA part number should be listed at the bottom.

Follow these steps to check and retrieve necessary information for single disk failure:

Check the current status:

1. Log in to Navisphere Manager, right-click the CLARiiON name and select “Faults.”

2. Confirm that drive x_x_x is the only faulty drive showing as “Removed.”

3. Expand “LUN Folder” and then “Unowned LUNs.” Make sure no user LUN is unowned. (It’s normal to see hot spares in the unowned LUNs section.)

Get the TLA Part number of the faulty disk:

1. Right-click SP A or SP B, select “View Events”, and click “Yes” to continue.

2. Click “Filter” in the new window, uncheck “Warning” and “Information,” and click “OK.”

3. Locate the event with code “0x7127897c” and description “Disk(Bus x Enclosure x Disk x) failed,” and double-click to open it.

4. Record the TLA part number in the description field. It is a 9-digit number starting with “005.”

5. Record the findings in the following format:

Only one disk failure
No unowned LUN
Disk Slot: x_x_x
Disk P/N 005xxxxxx

Decoding EMC Model Part Numbers

The first two characters in the EMC model part number indicate the product family these drives are for.

CX – CX series
AX – AX series
VX/V2/VS/V3/V4 – VNX series

The next four characters indicate drive type and disk speed (RPM) or, in the case of some Fibre Channel drives, data rate (Gb/s) and disk speed (RPM).

2G10 – 2Gb/s FC 10K
2G15 – 2Gb/s FC 15K
2G72 – 2Gb/s FC 7.2K
2S10 – 2.5″ SAS 10K
4G10 – 4Gb/s FC 10K
4G15 – 4Gb/s FC 15K
AF04 – 4Gb/s FC SSD
AT05 – ATA/SATA 5.4K
AT07 – ATA/SATA 7.2K
FC04 – 4Gb/s FC
LP05 – Low Power FC 5.4K
SA07 – SATA 7.2K
S207 – SATA 7.2K
SS07 – SATA 7.2K
SS15 – SAS 15K
PS15 – VNX SAS 15K
VS07 – VNX SAS 7.2K
VS10 – VNX SAS 10K
VS15 – VNX SAS 15K

The last digits in an EMC part number indicate storage capacity.

73 – 73GB
100 – 100GB
146 – 146GB
200 – 200GB
250 – 250GB
300 – 300GB
320 – 320GB
400 – 400GB
450 – 450GB
500 – 500GB
600 – 600GB
750 – 750GB
900 – 900GB
010 – 1TB
020 – 2TB
030 – 3TB
040 – 4TB
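
Putting the three tables together, a small shell sketch can decode a model number into its series, drive type and capacity. This is an illustration only: it covers just a handful of the codes listed above, and the function name is our own invention.

```shell
#!/bin/sh
# Hedged sketch: decode an EMC model number of the form PREFIX-TYPE-CAPACITY
# (e.g. CX-SA07-010) using a few of the codes from the tables above.
# Any code not handled here is reported as unknown.
decode_emc_model() {
  prefix=${1%%-*}          # e.g. CX
  rest=${1#*-}
  type=${rest%%-*}         # e.g. SA07
  cap=${rest#*-}           # e.g. 010
  case $prefix in
    CX) series="CX series" ;;
    AX) series="AX series" ;;
    VX|V2|VS|V3|V4) series="VNX series" ;;
    *)  series="unknown series" ;;
  esac
  case $type in
    SA07|S207|SS07) drive="SATA 7.2K" ;;
    4G15) drive="4Gb/s FC 15K" ;;
    VS15) drive="VNX SAS 15K" ;;
    *)    drive="unknown drive type" ;;
  esac
  case $cap in
    010) size="1TB" ;;
    300) size="300GB" ;;
    600) size="600GB" ;;
    *)   size="unknown capacity" ;;
  esac
  echo "$series / $drive / $size"
}

decode_emc_model CX-SA07-010   # prints: CX series / SATA 7.2K / 1TB
```

Extending the case statements with the remaining codes from the tables above is straightforward.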

Changing Bus Speed

If you install a 2 Gb legacy disk in a disk-array enclosure (DAE) on a 4 Gb bus, you cannot use the disk in a RAID group or thin pool until you change the bus speed to 2 Gb. You can change the bus speed with the Backend Bus Speed Reset Wizard, which is available from the Service option on the Navisphere Manager Tools menu. The speed reset operation reboots the storage processors.

EMC Hard Drive DAE Compatibility

Here are some general rules for EMC hard drive compatibility within the same DAE:

  • You can mix 2Gb/s and 4Gb/s disks in a single DAE, but any bus connected to a DAE containing both will run at a maximum speed of 2Gb/s.
  • CX-AT and CX-SA model disks cannot coexist with other disk models in the same DAE.

EqualLogic Battery Status Failed – How to Fix

Post credited to:

The Tekmart Support Team.

How to fix the Bad Battery error message with EqualLogic systems

Dell / EqualLogic never created field replaceable units (FRUs) for the controller cache batteries used in different arrays, so there is no easy replacement. The common solution is to replace the entire controller with one that has a battery that hasn’t failed yet.

EqualLogic Battery Status Failed Error

Figure 1. EqualLogic PS4100, PS6100 Battery Status Failed

Unfortunately, purchasing a used replacement EqualLogic controller doesn’t always buy you much time, since you’re replacing a failed unit with another aging unit. The majority of EqualLogic systems needing controller replacements are old and out of service, and new controllers haven’t been available for quite some time. The further down the road we get, the more likely it is that a controller swapped in because of a bad battery message will fail again within six months of replacement.

Dell EqualLogic PS4100, PS6100 Series Controller Battery Replacement

EqualLogic Controller Battery Logic 101

You wouldn’t replace your smoke alarm battery with a 9-volt from an old smoke alarm sitting around in a pile of defunct smoke alarms, so why would you replace a failed controller battery with another old dying one?

If it’s just a battery, can’t I replace it myself? Why do I need to buy an entire controller?

The answer here is fairly simple: take one of these controllers apart and try to find the battery. In the case of the EqualLogic PS4100 and PS6100 series arrays, these are not “batteries” in the normal sense of the word, and we are the only ones refurbishing them.

Extended Warranty for EqualLogic Controllers

We provide a standard 90-day warranty, at a minimum, on everything we sell. For a few battery-containing items, we require that the failed units being replaced be shipped back to us; we provide a pre-paid shipping label, so there’s no cost to you. To sweeten the incentive, put the failed unit in the box your replacement arrived in and apply our label, and we will upgrade your 90-day warranty to a full one-year warranty once the failed unit reaches our warehouse in Alberton, RSA.

Steps to install Docker on Ubuntu 16.04 servers

What issues have arisen during your Docker installation?

If IT wants to maximize Docker’s potential on Ubuntu 16.04 servers, know these steps and commands to ensure a smooth installation.

Jack Wallen

Linux expert – SearchDataCenter

Containers enable organizations to expand beyond the standard server in ways that traditional technologies cannot.

With containers, you can bundle a piece of software within a complete file system that contains everything it needs to run: code, runtime, system tools, system libraries and so on. When you deploy an application or service this way, it will always run the same, regardless of its environment.

If you want to containerize a service or an app, you’ll need to get up to speed with Docker, one of the most popular container tools. Here are some guidelines to install Docker on Ubuntu 16.04 servers and fulfill Docker’s potential.

What to know before you install Docker on Ubuntu 16.04

Before you install Docker on Ubuntu 16.04, update the apt utility — a package manager that includes the apt command — and upgrade the server. If apt upgrades the kernel, you may need to reboot; if so, do it when the server can be down for a brief period. It’s important to note that you can only install Docker on 64-bit architecture, with a minimum kernel of 3.10.

To update and upgrade, enter the following commands:

sudo apt-get update
sudo apt-get upgrade

Once the update/upgrade is complete, you can install Docker with a single command:

sudo apt-get install -y docker.io

When the install completes, start the Docker engine with the command:

sudo systemctl start docker

Finally, enable Docker to run at boot with the command:

sudo systemctl enable docker

Running Docker as a standard user

Out of the box, you can only use Docker as the root user or by way of sudo. Since running Docker as root or with sudo can be considered a security risk, it’s crucial to enable a standard user. To do that, you must add the user to the docker group. Let’s say we’re going to add the user olivia to the docker group so that she can work with the tool. To do this, issue the following command:

sudo gpasswd -a olivia docker

Restart the Docker service with the command:

sudo systemctl restart docker

Once Olivia logs out and logs back in again, she can use Docker.

Docker terminology

Before we get into the commands to work with Docker, you’ll need to understand some of its terminology.

  • Image: a frozen snapshot of live containers. Images are generally pulled from the Docker Hub, but you can create your own images. Images are read-only.
  • Container: an active, stateful instance of an image that is read-write.
  • Registry: a repository for Docker Images.

 In a nutshell, you pull images from a registry and run containers from those images.

Let’s say you want to run a Debian Linux container so you can test or develop a piece of software. To pull down the Debian image, you should search the registry first. Issue the command docker search debian. The results of that search (Figure A) are important.

Figure A. Docker search command results.

The first two listings are marked as “official.” To be safe, always pull official images. Pull down that debian image with the command:

docker pull debian

When the image pull is complete, Docker will report the image debian:latest has been downloaded. To make sure it’s there, use the command:

docker images

Figure B. Debian is ready to run.

You are now ready to create the debian container with the command:

docker create debian

The above command will return a hash to indicate that you’ve created a container.

Figure C. How to create a Debian container.

To run the container, issue the command:

docker run -i -t debian /bin/bash

The above command will run the debian container. It keeps STDIN (standard input) open with the -i option, allocates a pseudo-tty with the -t option, and places you in a Bash prompt so you can work. When you see the Bash prompt change, you’ll know that the command succeeded.

Figure D. The Debian container is now working.

You can work within your container and then exit the container with the command exit.

Commit changes

After you install Docker on Ubuntu 16.04, let’s say you want to develop an image to be used later. When you exit a running container, you will lose all of your changes. If that happens, you cannot commit your changes to a new image. To commit those changes, you first need to run your container in the background (detached), by adding the -d option:

docker run -dit debian

When you run the container like this, you can’t make any changes because you won’t be within the container. To gain access to the container’s shell, issue the command:

docker exec -i -t HASH /bin/bash

HASH is created after running the image in the background — it will be a long string of characters.

Now, you should be inside the running container. Make your changes and then exit the running container with the command exit. If you re-enter the container, your changes should still be present.

To commit your changes to a new image, issue the command:

docker commit HASH NAME

HASH is the hash for our running container and NAME is the name you’ll give the new image.

If you now issue the command docker images, your newly created image will be listed alongside the image you pulled from the Docker Hub registry.

Figure E. The newly created test image.

This was last published in April 2017

80 Linux commands you’ll actually use

Enterprise administrators and managers who use this guide of essential Linux commands, utilities and tools will find ways to manage files, get process status updates and more.

Jessica Lulka

Associate site editor – SearchDataCenter

Linux administrators cannot live by the graphical user interface alone. That’s why we’ve compiled useful Linux commands into this convenient guide.

By learning how to use a few simple tools, command-line cowards can become scripting commandos and get the most out of Linux by executing kernel and shell commands. 


The alias command is a way to run a command or a series of Unix commands using a shorter name than the one usually associated with them.

The apt-get tool automatically updates a Debian machine and installs Debian packages/programs.

AWK, Gawk
AWK is a programming language tool used to manipulate text. The AWK utility resembles the shell programming language in many areas, but AWK’s syntax is very much its own. Gawk is the GNU Project’s version of the AWK programming language.
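
As a small taste of that syntax, here is a one-liner that sums the second field of colon-delimited input; the sample data is invented for illustration.

```shell
# Sum the second field of colon-delimited input; -F sets the field
# separator and the END block runs after the last line is read.
printf 'disk1:120\ndisk2:250\ndisk3:80\n' |
  awk -F: '{ total += $2 } END { print total }'   # prints 450
```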

bzip2 is a portable, fast, open source program that compresses and decompresses files at a high rate, but does not archive them.

A Unix/Linux command that can read, modify or concatenate text files. The cat command also displays file contents.

The cd command changes the current directory in Linux and can conveniently toggle between directories. The Linux cd command is similar to the CD and CHDIR commands in MS-DOS.

The chmod command changes the permissions of one or more files. Only the file owner or a privileged user can change the access mode.

The chown prompt changes file or group ownership. It gives admins the option to change ownership of all the objects within a directory tree, as well as the ability to view information on the objects processed.

The cmp utility compares two files of any type and writes the results to the standard output. By default, cmp is silent if the files are the same. If they differ, cmp reports the byte and line number where the first difference occurred.

Admins use comm to compare lines common to file1 and file2. The output is in three columns; from left to right: lines unique to file1, lines unique to file2 and lines common in both files.

The cp command copies files and directories. Copies can be made simultaneously to another directory even if the copy is under a different name.

The cpio command copies files into or out of a cpio or tar archive. A tar archive is a file that contains other files, plus information about them, such as their file names, owners, timestamps and access permissions. The archive can be another file on the disk, a magnetic tape or a pipe. cpio has three operating modes: copy-out, copy-in and copy-pass. It is also a more efficient alternative to tar.

cron is a Linux system process that executes a program at a preset time. To use it, admins prepare a text file that describes the program and when cron should execute it. The crontab program then loads the text file and executes the program at the specified time.
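
As an illustration of the text file cron reads, here is a hypothetical crontab entry (the backup script path is made up); the five leading fields are minute, hour, day of month, month and day of week:

```shell
# m  h  dom mon dow  command
30 2 * * * /usr/local/bin/backup.sh   # run the backup every day at 02:30
```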

Admins use cURL to transfer a URL. It is useful for determining if an application can reach another service and how healthy the service is.


The declare command states variables, gives them attributes or modifies the properties of variables.

This command displays the amount of disk space available on the file system containing each file name argument. With no file name, the df command shows the available space on all the currently mounted file systems.


Use echo to repeat a string variable to standard output.

The enable command stops or starts printers and classes.

The env command runs a program in a modified environment or displays the current environment and its variables.

The eval command analyzes several arguments, concatenates them into a single command and reports on that argument’s status.

This function replaces the parent process with any subsequently typed command. The exec command treats its arguments as the specification of one or more subprocesses to execute.

The exit command terminates a script and returns a value to the parent script.

The expect command talks to other interactive programs via a script and waits for a response, often from any string that matches a given pattern.

The export command converts a file into a different format than its current format. Once a file is exported, it can be accessed by any application that uses the new format.


The find command searches the directory tree to locate particular groups of files that meet specified conditions, including -name, -type, -exec, -size, -mtime and -user.
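
A quick sketch of the -type and -name tests in action, using a throwaway directory so nothing on the system is touched:

```shell
# Create a throwaway directory with a few files, then use find with
# -type f and -name to locate only the .log files.
tmp=$(mktemp -d)
touch "$tmp/app.log" "$tmp/error.log" "$tmp/notes.txt"
find "$tmp" -type f -name '*.log' | sort   # prints the two .log paths
rm -rf "$tmp"
```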

The for and while commands execute or loop items repeatedly as long as certain conditions are met.

With the free command, admins can see the total amount of free and used physical memory and swap space in the system, as well as the buffers and cache used by the kernel.


See AWK.

The grep command searches files for a given character string or pattern and can replace the string with another. This is one method of searching for files within Linux.
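
For instance, piping text into grep with -i (ignore case) and -c (count matching lines):

```shell
# Count the lines that mention "error", ignoring case.
printf 'Error: disk failed\nok\nerror: retry\n' | grep -ci 'error'   # prints 2
```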

gzip is the GNU Project’s open source program for file compression; it compresses webpages on the server end for decompression in the browser. It is popular for streaming media compression and can simultaneously concatenate and compress several streams.


The history function shows all the commands used since the start of the current session.


The ifconfig command configures kernel-resident network interfaces at boot time. It is usually only needed when debugging or during system tuning.

With ifup, admins can configure a network interface and enable a network connection.

The ifdown command shuts down a network interface and disables a network connection.

The iptables command allows or blocks traffic on a Linux host and can prevent certain applications from receiving or transmitting a request.


With kill signals, admins can send a specific signal to a process. It is most often used to safely shut down processes or applications.


The less command lets an admin scroll through configuration and error log files, displaying text files one screen at a time with backward or forward navigation available.

The locate command reads one or more databases and writes file names to match certain output patterns.

The lft command determines connection routes and provides information to debug connections or find a box/system location. It also displays route packets and file types.

The ln command creates a new name for a file using hard linking, which allows multiple users to share one file.

The ls command lists files and directories within the current working directory, which allows admins to see when configuration files were last edited.

Admins use lsof to list all the open files. They can add -u to find the number of open files by username.

The lsmod command displays a module’s status within the kernel, which helps troubleshoot server function issues.

The man command allows admins to format and display the user manual that’s built into Linux distributions, which documents commands and other system aspects.

Similar to less, more pages through text one screen at a time, but has limitations on file navigation.

The mount command mounts file systems on servers. It also lists the current file systems and their mount locations, which is useful to locate a defunct drive or install a new one.

Linux mkdir generates a new directory with a name path.


A Gnome GUI tool that allows admins to specify the information needed to set up a network card.

Admins can use netconfig to configure a network, enable network products and display a series of screens that ask for configuration information.

The netstat command provides information and statistics about protocols in use and current TCP/IP network connections. It is a helpful forensic tool for figuring out which processes and programs are active on a computer and are involved in network communications.

A user can enter a host name and find the corresponding IP address with nslookup. It can also help find the host name.

The od command dumps binary files in octal — or hex/binary — format to standard output.

Admins use passwd to update a user’s current password.

The ping command verifies that a particular IP address exists and can accept requests. It can test connectivity and determine response time, as well as ensure an operating user’s host computer is working.

Admins use ps to report the statuses of current processes in a system.

The print working directory (pwd) command displays the name of the current working directory.

The read command interprets lines of text from standard input and assigns values of each field in the input line to shell variables for further processing.

The rsync command syncs data from one disk or file to another across a network connection. It is similar to rcp, but has more options.

The GNU screen utility is a terminal multiplexor where a user can use a single terminal window to run multiple terminal applications or windows.

Admins use sdiff to compare two files and produce a side-by-side listing indicating lines that are dissimilar. The command then merges the files and outputs the results to the outfile.

The sed utility is a stream editor that filters text in a pipeline, distinguishing it from other editors. It takes text input, performs operations on it and outputs the modified text. This command is typically used to extract part of a file using pattern matching or to substitute multiple occurrences of a string within a file.
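
A typical substitution looks like this; the trailing g flag replaces every occurrence on each line, not just the first:

```shell
# Replace every occurrence of "http" with "https" in the stream.
printf 'http://a http://b\n' | sed 's/http/https/g'   # prints https://a https://b
```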

The service command is the quickest way to start or stop a service, such as networking.

The shutdown command turns off the computer and can be combined with variables such as -h for halt after shutdown or -r for reboot after shutdown.

Like locate, slocate (secure locate) provides a way to index and quickly search for files, but it can also securely store file permissions and ownership to hide information from unauthorized users.

Snort is an open source network intrusion detection system and packet sniffer that monitors network traffic. It looks at each packet to detect dangerous payloads or suspicious anomalies. Snort is based on libpcap.

The sort command sorts lines of text alphabetically or numerically according to the specified fields. Users can input multiple sort keys.

The sudo command lets a system admin give certain users the ability to run some — or all — commands at the root level and logs all the commands and arguments.

SSH is a command interface for secure remote computer access and is used by network admins to remotely control servers.

The tar command lets users create archives from a number of specified files or to extract files from a specific archive.
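
A minimal create-then-list round trip, run in a throwaway directory:

```shell
# Create an archive from two files (c = create, f = archive file name),
# then list its contents (t = list).
tmp=$(mktemp -d)
cd "$tmp"
printf 'one\n' > a.txt
printf 'two\n' > b.txt
tar -cf files.tar a.txt b.txt
tar -tf files.tar    # prints a.txt and b.txt
cd / && rm -rf "$tmp"
```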


The tail command displays the last few lines of the file. This is particularly helpful for troubleshooting code because admins don’t often need all the possible logs to determine code errors.

The top command displays the tasks on the system that take up the most memory and refreshes the display in real time. It can sort tasks by CPU usage, memory usage and runtime.

Admins can create a blank file within Linux with the touch command.

The tr command translates or deletes characters from a text stream. It writes to standard output, but it does not accept file names as arguments; it only accepts input from standard input.

The traceroute function determines and records a route through the internet between two computers and is useful for troubleshooting network/router issues. If the domain does not work or is not available, admins can use traceroute to track the IP.

The uname command displays the current operating system name and can print system information.

With uniq, admins can compare adjacent lines in a file and remove or identify any duplicate lines. 
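
Because uniq only collapses duplicates that are adjacent, it is usually paired with sort:

```shell
# Sort first so duplicate lines become adjacent, then remove them.
printf 'b\na\nb\na\na\n' | sort | uniq   # prints a, then b
```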

The vi environment is a text editor that a user controls entirely with the keyboard, instead of a combination of mouse selections and keystrokes.

The vmstat command snapshots everything in a system and reports information on such items as processes, memory, paging and CPU activity. This is a good method for admins to use to determine where issues/slowdown may occur in a system.

wget is a network utility that retrieves web files and supports the HTTP, HTTPS and FTP protocols. It works non-interactively in the background when a user is logged off, and it can create local versions of remote websites, recreating the original site’s directory structure.

See for.

The whoami command prints or writes the user login associated with the current user ID to the standard output.

Admins use xargs to read, build and execute arguments from standard input. Each input is separated by blanks.
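
A simple illustration: xargs turns the words on standard input into arguments for a single echo invocation.

```shell
# Each whitespace-separated input word becomes an argument to echo.
printf 'one two three\n' | xargs echo 'args:'   # prints args: one two three
```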