Inside ‘Master134’: Ad networks’ ‘blind eye’ threatens enterprises

Online ad networks linked to the Master134 malvertising campaign and other malicious activity often evade serious fallout and continue to operate unabated.

Rob Wright

Associate Editorial Director – TechTarget – SearchSecurity

The online advertising networks implicated in the “Master134” malvertising campaign have denied any wrongdoing, but experts say their willingness to turn a blind eye to malicious activity on their platforms will likely further jeopardize enterprises.

In total, eight online ad firms — Adsterra, AdKernel, AdventureFeeds, EvoLeads, ExoClick, ExplorAds, Propeller Ads and Yeesshh — were connected to the Master134 campaign, and many of them presented similar explanations about their involvement with the malvertising campaign.

They insisted they didn’t know what was going on and that, when informed of the malvertising activity, they immediately intervened by suspending the publisher accounts of the malicious actors abusing their platforms. However, none of the ad networks were willing to provide the names or account information of the offending clients, citing vague company privacy policies and government regulations that prevented them from doing so.

A cybersecurity vendor executive, who wished to remain anonymous, said it’s likely true that the ad networks were unaware of the Master134 campaign. However, the executive, who has worked extensively on malvertising and ad fraud campaigns, said that unawareness is by design.

“They don’t necessarily know, and they don’t want to know, where the traffic is coming from and where it’s going because their businesses are based on scale,” the executive said. “In order to survive, they have to ignore what’s going on. If they look at how the sausage is made, then they’re going to have issues.”


How the Master134 campaign worked.

The use of various domains, companies, redirection stages and intermediaries makes it difficult to pinpoint the source of malicious activity in malvertising schemes. Tamer Hassan, CTO and co-founder of White Ops, a security vendor focused on digital ad fraud, said that complexity makes the ecosystem attractive to bad actors like malware authors and botnet operators, as well as to ad networks that prefer to look the other way.


“It’s easy to make it look like you’re doing something for security if you’re an ad network,” Hassan said. “There aren’t a lot of ad companies that work directly with malware operators, but there are a lot of ad companies that don’t look at this stuff closely because they don’t want to lose money.”

“Malware Breakdown,” an anonymous security researcher who documented early Master134 activity in 2017, offered a similar view of the situation. The researcher told SearchSecurity that because Propeller Ads’ onclkds.com domain was being used to redirect users to a variety of different malvertising campaigns, they believed “the ad network was being reckless or turning a blind eye to abuse.”

In some cases, Hassan said, smaller ad networks and domains are created expressly for fraud and malvertising purposes. He cited the recent Methbot and 3ve campaigns, which used several fraudulent ad networks that appeared to be legitimate companies in order to conduct business with other networks, publishers and advertisers.

“The ad networks were real, incorporated companies,” he said, “but they were purpose-built for fraud.”

Even AdKernel acknowledges the onion-like ecosystem is full of bad publishers and advertisers.

“In ad tech, the situation is exacerbated because there are many collusion players working together,” said Judy Shapiro, chief strategy advisor for AdKernel, citing bad publishers and advertisers. “Even ad networks don’t want to see impressions go down a lot because they, too, are also paid on a [cost per impression] basis by advertisers.”

There is little indication, however, that these online ad tech companies have changed how they do business.

Lessons learned?

Following the publication of the Master134 report, Check Point researchers observed some changes in activity.

Lotem Finkelsteen, Check Point Research’s threat intelligence analysis team leader and one of the contributors to the Master134 report, said there appeared to be less hijacked traffic going to the exploit kit domains, which suggested the ad networks in the second redirection stage — ExoClick, AdventureFeeds, EvoLeads, ExplorAds and Yeesshh — had either been removed from the campaign by the Master134 threat actors or had voluntarily detached themselves (Yeesshh and ExplorAds closed down the domains used in the campaign sometime in December).

But Adsterra is another story. More than six months after the report was published, Finkelsteen said, there’s been no indication the company has changed its behavior.

Meanwhile, the Master134 campaign changed somewhat in the aftermath of Check Point’s report. The threat actors behind the 134.249.116.78 IP address changed the redirection paths and ran traffic through other ad networks, Finkelsteen said.

Aviran Hazum, mobile threat intelligence team leader at Check Point Research, noted on Twitter in September that the campaign had a “new(ish) URL pattern” that moved hijacked traffic through suspicious redirection domains and ad networks like PopCash, a Romanian pop-under ad network that was blocked by Malwarebytes for ties to malicious activity.

AdKernel said it learned a lesson from the Master134 campaign and pledged to do more to remove bad actors from its network. However, a review of several of the domains that bear the “powered by AdKernel” moniker suggests the company hasn’t successfully steered away from suspicious ad networks or publishers.

For example, one ad network customer named AdTriage has a domain called xml.adtriage.com that looks exactly like the self-service portals on the junnify and bikinisgroup sites that were also “powered” by AdKernel. AdTriage, however, doesn’t appear to be a real company — adtriage.com is filled with “Lorem Ipsum” dummy text. On the “About Us” page, the “Meet our team” section has nothing except text that says “Pics Here.” (WhoIs results show the domain was created in 2011, and captures of the sites from that year on Internet Archive’s Wayback Machine reveal the same dummy text.)
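Checks like these are easy to reproduce. The sketch below, a minimal example rather than part of the original investigation, queries the Internet Archive’s public Wayback Machine availability API for the capture closest to a given date; the domain and timestamp are illustrative placeholders.

```python
# Minimal sketch: look up the archived snapshot of a domain closest to a
# given date using the Wayback Machine availability API.
# The domain and timestamp below are illustrative placeholders.
import json
import urllib.parse
import urllib.request

def closest_snapshot(domain: str, timestamp: str) -> dict:
    """Return the closest archived snapshot near timestamp (YYYYMMDD)."""
    query = urllib.parse.urlencode({"url": domain, "timestamp": timestamp})
    with urllib.request.urlopen(f"https://archive.org/wayback/available?{query}") as resp:
        data = json.load(resp)
    # The API returns an "archived_snapshots" object; "closest" is present
    # only when at least one capture exists.
    return data.get("archived_snapshots", {}).get("closest", {})

if __name__ == "__main__":
    snap = closest_snapshot("adtriage.com", "20110101")
    if snap:
        print(snap["timestamp"], snap["url"])
    else:
        print("No archived captures found.")
```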


AdTriage’s site is filled with dummy text.

Escaping consequences

The recent history of malvertising indicates ad companies that issue denials are quite capable of moving on to the next client campaign, only to issue similar denials and reassurances for future incidents with little to no indication that their security practices have improved.

Check Point’s Master134 report, as well as earlier malvertising research from FireEye and Malwarebytes, doesn’t appear to have had much, if any, effect on the reputations of the five companies. They all appear to be in good standing with the online ad industry and have seemingly avoided any long-term consequences from being associated with malicious activity.

ExoClick and Adsterra, for example, have remained visible through sponsorships and exhibitions at industry events, including The European Summit 2018 and Mobile World Congress 2019.

Online ad companies are often given the benefit of the doubt in malvertising cases such as Master134 for two primary reasons: Ad networks are legitimate companies, not threat groups, and digital ads are easy for threat actors to take advantage of without the help or complicit knowledge of those networks.

But Check Point Research’s team has little doubt about the involvement of the ad networks in Master134; either they turned a blind eye to the obvious signs of malicious activity, Finkelsteen said, or they openly embraced it to generate revenue.

Other security vendors have also publicized malvertising campaigns that redirect traffic to known exploit kits. FireEye reported in September that a malvertising campaign used the Fallout exploit kit to spread the GandCrab ransomware to victims primarily in Southeast Asia. According to the report, the malicious ads profiled users’ browsers and operating systems.

“Depending on browser/OS profiles and the location of the user, the malvertisement either delivers the exploit kit or tries to reroute the user to other social engineering campaigns,” the report stated.

It’s difficult to determine exactly how much of the online ad ecosystem has been compromised by malicious or unscrupulous actors, said Adam Kujawa, director of malware intelligence at Malwarebytes.

“Advertising is the reason the internet exists as it does today,” he said. “It’s always going to be very close to the heart of all things that happen on the internet. The reason we see so much adware is because these companies kind of … live in a gray area.”

The gray area can be even murkier on the technical side. Despite being key components in the Master134 campaign, the xml.bikinisgroup.com and xml.junnify.com URLs raise only a few alarms on public URL and file scanning services.

VirusTotal, for example, shows that all 67 malware engines used in the scans rate both domains as “clean,” though the Junnify domain did receive a -28 community score (VirusTotal community scores, which start at zero, represent the number of registered users who vote a file or URL as being safe or unsafe).
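Analysts can pull the same verdicts programmatically. The following sketch assumes a VirusTotal API v3 key and fetches a URL report with the requests library; the attribute names used (last_analysis_stats, reputation) follow VirusTotal’s documented v3 schema, but treat the exact fields as assumptions to verify against the current API reference.

```python
# Minimal sketch: query VirusTotal's v3 API for a URL report.
# Requires a VirusTotal API key in the VT_API_KEY environment variable.
# Attribute names (last_analysis_stats, reputation) follow the v3 schema.
import base64
import os

import requests  # third-party: pip install requests

def url_report(url: str) -> dict:
    # v3 identifies URLs by an unpadded URL-safe base64 encoding.
    url_id = base64.urlsafe_b64encode(url.encode()).decode().strip("=")
    resp = requests.get(
        f"https://www.virustotal.com/api/v3/urls/{url_id}",
        headers={"x-apikey": os.environ["VT_API_KEY"]},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["data"]["attributes"]

if __name__ == "__main__":
    attrs = url_report("http://xml.junnify.com/")
    print("Engine verdicts:", attrs["last_analysis_stats"])
    print("Community reputation:", attrs.get("reputation"))
```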

Malvertising campaigns like Master134 that use multiple traffic flows and advertising platforms could become increasingly common, according to the Check Point report.

“Due to the often complex nature of malware campaigns, and the lack of advanced technology to vet and prevent malicious adverts from being uploaded onto ad-network bidding platforms,” the researchers wrote, “it is likely we will see more malvertising continue to be a popular way for cybercriminals to gain illegal profits for many years to come.”


‘Master134’ malvertising campaign raises questions for online ad firms

Malvertising and adware schemes are a growing concern for enterprises. Our deep investigation into one campaign reveals just how complicated threats can be to stop.

Rob Wright

Associate Editorial Director – TechTarget – SearchSecurity

Why were several major online advertising firms selling traffic from compromised WordPress sites to threat actors operating some of the most dangerous exploit kits around?

That was the question at the heart of a 2018 report from Check Point Research detailing the inner workings of an extensive malvertising campaign it calls “Master134,” which implicated several online advertising companies. According to the report, titled “A Malvertising Campaign of Secrets and Lies,” a threat actor or group had compromised more than 10,000 vulnerable WordPress sites through a remote code execution vulnerability in an older version of the content management system.

Malvertising is a common, persistent problem for the information security industry, thanks to the pervasiveness of digital ads on the internet. Threat actors have become adept at exploiting vulnerable technology and lax oversight in the online ad ecosystem, which allows them to use ads as a delivery mechanism for malware. As a result, many security experts recommend using ad blockers to protect endpoints from malvertising threats.

But Master134 was not a typical malvertising campaign.

A tangled web of redirects

Rather than using banner ads as a vector for malware infection, the threat actors relied on a different component of the digital advertising ecosystem: web traffic redirection. In addition to serving digital ads, many ad networks buy and sell traffic, which is then redirected and used to generate impressions on publishers’ ads. These traffic purchases are made through what are known as real-time bidding (RTB) platforms, and the traffic is ostensibly marketed as coming from legitimate or “real” users, though experts say a number of nefarious techniques are used to artificially boost impressions and commit ad fraud. These techniques include the use of bots, traffic hijacking and malicious redirection code.
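One basic way researchers untangle such schemes is to follow the redirects hop by hop and record every intermediary domain. The sketch below is a generic illustration of that technique, not a reconstruction of the Master134 chain; the starting URL is a placeholder, and real malvertising chains often fire only for specific browser profiles, referrers and geographies.

```python
# Minimal sketch: walk an HTTP redirect chain one hop at a time and record
# each intermediary, the way researchers map traffic flows between
# compromised sites, ad networks and exploit kit gates.
# The starting URL is a placeholder; real malvertising chains often only
# trigger for specific browsers, referrers and geographies.
import requests  # third-party: pip install requests

def trace_redirects(start_url: str, max_hops: int = 15) -> list[tuple[int, str]]:
    chain = []
    url = start_url
    for _ in range(max_hops):
        resp = requests.get(url, allow_redirects=False, timeout=10,
                            headers={"User-Agent": "Mozilla/5.0"})
        chain.append((resp.status_code, url))
        next_url = resp.headers.get("Location")
        if not next_url or not 300 <= resp.status_code < 400:
            break  # final landing page or a non-redirect response
        url = requests.compat.urljoin(url, next_url)  # handle relative Location headers
    return chain

if __name__ == "__main__":
    for status, hop in trace_redirects("http://example.com/landing"):
        print(status, hop)
```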


According to Check Point Research, part of Check Point Software Technologies, Master134 was an unusually complex operation involving multiple ad networks, RTB platforms and traffic redirection stages. Instead of routing the hijacked WordPress traffic to malicious ads, the threat actors redirected the traffic intended for those sites to a remote server located in Ukraine with the IP address “134.249.116.78,” hence the name Master134. (Check Point said a second, smaller source of traffic to the Master134 server was a potentially unwanted program, or PUP, that redirected traffic intended for victims’ homepages.)

Then, the Master134 campaign redirected the WordPress traffic to domains owned by a company known as Adsterra, a Cyprus-based online ad network. Acting as a legitimate publisher, Master134 sold the WordPress traffic through Adsterra’s network to other online ad companies, namely ExoClick, EvoLeads, AdventureFeeds and AdKernel.

From there, the redirected WordPress traffic was resold a second time to threat actors operating some of the most well-known malicious sites and campaigns in recent memory, including HookAds, Seamless and Fobos. The traffic was redirected a third and final time to “some of the exploit kit land’s biggest players,” according to Check Point’s report, including the RIG and Magnitude EKs.

The researchers further noted that all of the Master134 traffic ended up in the hands of threat actors and was never purchased by legitimate advertisers. That, according to Check Point, indicated “an extensive collaboration between several malicious parties” and a “manipulation of the entire online advertising supply chain,” rather than a series of coincidences.


The redirection/infection chain of the Master134 campaign.

Why would threat actors and ad networks engage in such a complex scheme? Lotem Finkelsteen, Check Point’s threat intelligence analysis team leader and one of the contributors to the Master134 report, said the malvertising campaign was a mutually beneficial arrangement. The ad companies generate revenue off the hijacked WordPress traffic by reselling it. The Master134 threat actors, knowing the ad companies have little to no incentive to inspect the traffic, use the ad network platforms as a distribution system to match potential victims with different exploit kits and malicious domains.

“In short, it seems threat actors seeking traffic for their campaigns simply buy ad space from Master134 via several ad-networks and, in turn, Master134 indirectly sells traffic/victims to these campaigns via malvertising,” Check Point researchers wrote.

Check Point’s report was also a damning indictment of the online ad industry. “Indeed, threat actors never cease to look for new techniques to spread their attack campaigns, and do not hesitate to utilize legitimate means to do so,” the report stated. “However, when legitimate online advertising companies are found at the heart of a scheme, connecting threat actors and enabling the distribution of malicious content worldwide, we can’t help but wonder — is the online advertising industry responsible for the public’s safety?”

Other security vendors have noted that malvertising and adware schemes are evolving and becoming increasingly concerning for enterprises. Malwarebytes’ “Cybercrime Tactics and Techniques” report for Q3 2018, for example, noted that adware detections increased 15% for businesses while dropping 19% for consumers. In addition, the report noted a rise in new techniques such as adware masquerading as legitimate applications and browser extensions for ad blockers and privacy tools, among other things.

The malvertising Catch-22

The situation has left both online ad networks and security vendors in a never-ending game of whack-a-mole. Ad companies frequently find themselves scrutinized by security vendors such as Check Point in reports on malvertising campaigns. The ad companies typically deny any knowledge or direct involvement in the malicious activity while removing the offending advertisements and publishers from their networks. However, many of those same ad networks inevitably end up in later vendor reports with different threat actors and malware, issuing familiar denials and assurances.

Meanwhile, security vendors are left in a bind: If they ban the ad networks’ servers and domains in their antimalware or network security products, they effectively block all ads coming from repeat offenders, not just the malicious ones, which hurts legitimate publishers as well as the entire digital advertising ecosystem. But if vendors don’t institute such bans, they’re left smacking down each new campaign and issuing sternly worded criticisms to the ad networks.

That familiar cycle was on display with Master134; following Check Point’s publication of the report on July 30, three of the online ad companies — Adsterra, ExoClick and AdKernel — pushed back on the Check Point report and adamantly denied they were involved in the Master134 scheme (EvoLeads and AdventureFeeds did not comment publicly on the Master134 report). The companies said they were leading online advertising and traffic generation companies and were not directly involved in any illegitimate or malicious activity.


How the Master134 campaign worked.

Check Point revised the report on August 1 and removed all references to one of the companies, New York-based AdKernel LLC, which had argued the report contained false information. Check Point’s original report incorrectly attributed two key redirection domains — xml.bikinisgroup.com and xml.junnify.com — to the online ad company. As a result, several media outlets, including SearchSecurity, revised or updated their articles on Master134 to clarify or completely remove references to AdKernel.

But questions about the Master134 campaign remained. Who was behind the bikinisgroup and junnify domains? What was AdKernel’s role in the matter? And most importantly: How were threat actors able to coordinate substantial amounts of hijacked WordPress traffic through several different networks and layers of the online ad ecosystem and ensure that it always ended up on a select group of exploit kit sites?

A seven-month investigation into the campaign revealed patterns of suspicious activity and questionable conduct among several ad networks, including AdKernel. SearchSecurity also found information that implicates other online advertising companies, demonstrating how persistent and pervasive malvertising threats are in the internet ecosystem.


How infrastructure as code tools improve visibility

Visibility into cloud infrastructures and applications is important for data security. Learn how to maintain that visibility while using infrastructure as code tools.

Michael Cobb

CISSP-ISSAP – SearchSecurity

When it comes to understanding how all the elements of a computer network connect and interact, it’s certainly true that a picture — or in this case, a network diagram — is worth a thousand words.

A visual representation of a network makes it a lot easier to understand not only the physical topology of the network (its routers, devices, hubs, firewalls and so on), but also the logical topology of VPNs, subnets and routing protocols that control how traffic flows through the network.

Maintaining visibility across infrastructures and applications is vital to ensure data and resources are correctly monitored and secured. However, research conducted by Dimensional Research and sponsored by Virtual Instruments showed that most enterprises lack the tools necessary to provide complete visibility for triage or daily management. This is a real concern, as poor infrastructure visibility can lead to a loss of control over the network and can enable attackers to remain hidden.

Infrastructure as code, the management of an IT infrastructure with machine-readable scripts or definition files, is one way to mitigate the security risks associated with human error while enabling the rapid creation of stable and consistent but complex environments. However, it’s vital for you to ensure that the resulting network infrastructures are indeed correctly connected and protected and do not drift from the intended configuration.

Infrastructure as code tools

Infrastructure as code tools, such as Cloudcraft and Lucidchart, can automatically create AWS architecture diagrams showing the live health and status of each component, as well as its current configuration and cost. Because the physical and logical topology of the network are generated directly from the operational AWS configuration, rather than from what a network engineer thinks the infrastructure as code scripts have created, the diagram is a true representation of the network that can be reviewed and audited.

There are similar tools for engineers using Microsoft Azure, such as Service Map and Cloudockit.

Once a network generated using infrastructure as code tools has been audited and its configuration has been secured, it’s important to monitor it for any configuration changes. Unmanaged configuration changes can occur when engineers or developers make direct changes to network resources or their properties in an out-of-band fix without updating the infrastructure as code template or script. The correct process is to make all the changes by updating the infrastructure as code template to ensure all the current and future environments are configured in exactly the same way.

AWS offers a drift detection feature that can detect out-of-band changes to an entire environment or to a particular resource so it can be brought back into compliance. Amazon Virtual Private Cloud Flow Logs is another feature that can be used to ensure an AWS environment is correctly and securely configured.
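For teams that define their AWS environments with CloudFormation, drift detection can be driven through the API as well as the console. The sketch below uses boto3 to start a drift detection run and list resources whose live configuration no longer matches the template; the stack name is a placeholder.

```python
# Minimal sketch: trigger CloudFormation drift detection with boto3 and
# report resources whose live configuration no longer matches the template.
# The stack name is a placeholder for one of your own stacks.
import time

import boto3  # third-party: pip install boto3

def report_drift(stack_name: str) -> None:
    cfn = boto3.client("cloudformation")
    detection_id = cfn.detect_stack_drift(StackName=stack_name)["StackDriftDetectionId"]

    # Drift detection runs asynchronously; poll until it finishes.
    while True:
        status = cfn.describe_stack_drift_detection_status(
            StackDriftDetectionId=detection_id)
        if status["DetectionStatus"] != "DETECTION_IN_PROGRESS":
            break
        time.sleep(5)

    drifts = cfn.describe_stack_resource_drifts(
        StackName=stack_name,
        StackResourceDriftStatusFilters=["MODIFIED", "DELETED"])
    for drift in drifts["StackResourceDrifts"]:
        print(drift["LogicalResourceId"], drift["StackResourceDriftStatus"])

if __name__ == "__main__":
    report_drift("my-network-stack")  # placeholder stack name
```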

Flow Logs captures information about the IP traffic going to and from network interfaces, which can be used for troubleshooting and as a security control that provides visibility into network traffic to detect anomalous activity such as rejected connection requests or unusual levels of data transfer. Microsoft’s Azure Stack and tools such as AuditWolf provide similar functionality to monitor Azure cloud resources.
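Enabling flow logs is equally scriptable. The following boto3 sketch turns on VPC Flow Logs for rejected traffic and sends the records to a CloudWatch Logs group; the VPC ID, log group name and IAM role ARN are placeholders for your own values.

```python
# Minimal sketch: enable VPC Flow Logs for rejected traffic and send the
# records to CloudWatch Logs. The VPC ID, log group name and IAM role ARN
# are placeholders; the role must allow delivery to CloudWatch Logs.
import boto3  # third-party: pip install boto3

def enable_reject_flow_logs(vpc_id: str, log_group: str, role_arn: str) -> list[str]:
    ec2 = boto3.client("ec2")
    resp = ec2.create_flow_logs(
        ResourceIds=[vpc_id],
        ResourceType="VPC",
        TrafficType="REJECT",              # capture only rejected connections
        LogDestinationType="cloud-watch-logs",
        LogGroupName=log_group,
        DeliverLogsPermissionArn=role_arn,
    )
    return resp.get("FlowLogIds", [])

if __name__ == "__main__":
    ids = enable_reject_flow_logs(
        "vpc-0123456789abcdef0",                     # placeholder VPC ID
        "vpc-flow-logs",                             # placeholder log group
        "arn:aws:iam::123456789012:role/flow-logs",  # placeholder role ARN
    )
    print("Created flow logs:", ids)
```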

Security fundamentals don’t change when resources and data are moved to the cloud, but visibility into the network in which they exist does. Any organization with a limited understanding of how its cloud environment is actually connected and secured, or that has poor levels of monitoring, will leave its data vulnerable to attack.

The tools and controls exist to ensure network engineers and developers can enjoy the benefits of infrastructure as code without compromising security. Like all security controls, though, you need to understand them and use them on a daily basis for them to be effective.


When a NoOps implementation is — and when it isn’t — the right choice

NoOps skills and tools are highly useful regardless of the IT environment, but site reliability engineering brings operations admins into development when an organization can’t afford to lose them.

Emily Mell

Assistant Site Editor – SearchITOperations

For some organizations, NoOps is a no-go; for others, it’s the only way to go.

Some organizations envision that NoOps, with infrastructure and application automation and abstraction at their peak, eliminates the need for operations personnel to manage the IT environment.

But not every IT environment is cut out for a NoOps implementation. Site reliability engineering (SRE) is a useful middle ground for organizations that aren’t ready or equipped for NoOps.

Will NoOps replace DevOps?

The truth is that NoOps is a scenario reserved almost exclusively for startups that begin with A-to-Z IT automation tools and software. In cloud deployments, with no hardware or data centers to manage, developers can code or automate operations tasks — and operations administrators might struggle to find a seat at the table.

Instead, organizations that start from day one with automated provisioning, deployment, management and monitoring should hire an operations specialist or an IT infrastructure architect to help the development team set up environments and pipelines, said Maarten Moen, consultant at 25Friday, a consulting agency based in Hilversum, North Holland, which helps startups set up NoOps environments.

“Development or technology engineers move more to a full-stack engineering background. They can do the front and back end, but also the infrastructure and the cloud structure behind it.”

Maarten Moen, consultant, 25Friday

DevOps is still too siloed and interdependent for the speed of modern business activity, Moen said. This assessment holds weight for the startups 25Friday advises: there is no reason for a five-person company to split into distinct development and operations teams.

Instead, Moen suggests organizations instate a center of excellence, with one to three senior-level operations consultants or advisors to help development teams set up — but not implement — the infrastructure and share best practices. Development teams implement the infrastructure so that they’re familiar with how it works and how to maintain it.

Legacy applications are incompatible with NoOps

A NoOps changeover won’t work for organizations with established legacy applications — which, in this case, means any app five years old or more — and sizeable development and operations teams. The tools, hardware and software necessary to maintain these apps require operational management. Odds are high that a legacy application would need to be completely rebuilt to accommodate NoOps, which is neither cost-effective nor reasonably feasible.

Moreover, most organizations don’t have a loosely coupled system that facilitates a NoOps structure, said Gary Gruver, consultant and author of several DevOps books. Small independent teams can only do so much, and building a wall between them doesn’t make sense. In the end, someone must be accountable to ensure that the application and infrastructure functions in production, both on premises and in cloud environments, he said.

NoOps vs. SRE

When a larger organization adopts infrastructure-as-code tools, senior operations staff often act as advisors, and they then shift into engineering roles.

But downsizing isn’t always the answer. The SRE role, which emphasizes automation to reduce human error and ensure speed, accuracy and reliability, has supplanted the operations administrator role in many IT organizations because it places ops much closer to, or even within, the development team.

“It’s operations work, and it has been, whether we’re analysts, sys admins, IT operations, SREs, DevOps — it doesn’t matter,” said Jennifer Davis, principal SRE at RealSelf Inc., a cosmetic surgery review company based in Seattle, Wash., and an O’Reilly Media author, in a talk at the Velocity conference last month in New York.

Operations work can be done by anyone, but organizations differ on how they handle that fact. Does an organization eliminate the operations team altogether or simply reorganize them into SREs and retain their operational knowledge and experience?

At RealSelf, Davis’ SRE team educates and mentors developers in all aspects of operations via positions in sprint teams.

“Some of the critical skills that we have as operations folks [are] filling in gaps with relevant information, identifying information that is nonessential, and prioritizing a wide array of work,” she said.*

Is NoOps feasible?


NoOps implementations by 25Friday have been successful, Moen said. RealSelf’s Davis, however, argues that NoOps is, in general, not feasible for non-startups.

“Basically, NoOps is the same thing as no pilots or no doctors,” Davis said. “We need to have pathways to use the systems and software that we create. Those systems and software are created by humans — who are invaluable — but they will make mistakes. We need people to be responsible for gauging what’s happening.”

Human fallibility has driven the move to scripting and automation in IT organizations for decades. Companies should strive to have as little human error as possible, but also recognize that humans are still vital for success.

Comprehensive integration of AI into IT operations tools is still several years away, and even then, AI will rely on human interaction to operate with the precision expected. Davis likens the situation to the ongoing drive for autonomous cars: They only work if you eliminate all the other drivers on the road.

Next steps for operations careers

As IT organizations adopt modern approaches to deployment and management, such as automated tools and applications that live on cloud services, the operations team will undoubtedly shrink. Displaced operations professionals must then retrain in other areas of IT.

Some move into various development roles, but quality assurance and manual testing are popular career shifts in Europe, especially for old-fashioned professionals whose jobs have been automated out of their hands, Moen said. SRE is another path that requires additional training, but not a complete divergence from one’s existing job description.

Admins should brush up on scripting skills, as well as the intricacies of distributed platforms, so they are ready both to hand over any newer self-service infrastructure to developers and to train them to use it properly. They should study chaos engineering to help developers create more resilient applications and team up with both the ops and dev teams to create a best practices guideline for managing technical debt in the organization. This becomes paramount as infrastructure layouts grow more complex.

NoOps and SRE are both possible future directions on IT organizations’ radars. NoOps mini-environments can live within a fuller DevOps environment, which can be managed by site reliability engineers. Ultimately, the drive for automation must reign, but not with the extermination of an entire profession that ensures its continued success.

Site editor David Carty contributed to this article.

*Information changed after publication.


AIOps tools supplement — not supplant — DevOps pipelines

While the line between DevOps and AIOps often seems blurred, the two disciplines aren’t synonymous. Instead, they present some key differences in terms of required skill sets and tools.

Will Kelly

DevOpsAgenda

Artificial intelligence for IT operations, or AIOps, applies AI to data-intensive and repetitive tasks across the continuous integration and continuous delivery (CI/CD) toolchain.

DevOps professionals cite monitoring, task automation and CI/CD pipelines as prime areas for AIOps tools, but there’s still a lack of clarity around when and how broadly teams should apply AI practices.

Where AI meets IT ops

The terms AIOps and DevOps are both common in product marketing, but they’re not always used accurately. DevOps is driving a cultural shift in how organizations are structured, said Andreas Grabner, DevOps activist at Dynatrace, an application performance management company based in Waltham, Mass.

AIOps tools enable an IT organization’s traditional development, test and operations teams to evolve into internal service providers to meet the current and future digital requirements of their customers — the organization’s employees.

AIOps platforms can also help enterprises monitor data across hybrid architectures that span legacy and cloud platforms, Grabner said. These complex IT environments demand new tools and technologies, which both require and generate more data. Organizations need a new approach to capture and manage that data throughout the toolchain — which, in turn, drives the need for AIOps tools and platforms.

AIOps can also be perceived as a layer that runs on top of DevOps tools and processes, said Darren Chait, COO and co-founder of Hugo, a provider of team collaboration tools based in San Francisco. Organizations that want to streamline data-intensive, manual and repetitive tasks — such as ticketing — are good candidates for an AIOps platform proof-of-concept project.

In addition, AIOps tools offer more sophisticated monitoring capabilities than other software in the modern DevOps stack. AIOps tools, for example, monitor any changes in data that might have a significant effect on the business, such as those related to performance and infrastructure configuration drift. That said, AIOps tools might be unnecessary for simple monitoring requirements that are linear and straightforward.

The line between AIOps and DevOps

DevOps and AIOps tools are both useful in CI/CD pipelines and for production operations tasks, such as monitoring, systems diagnosis and incident remediation. But while there is some overlap between them, each of these tool sets is unique in its requirements for effective implementation. For example, AIOps can automate machine learning model training as part of completing a software build. AIOps tools must be adaptive to machine-learning-specific workflows that can handle recursion to support continuous machine learning model training.

The AIOps automation design approach is fundamentally different from the repetition of the machine learning training process: It’s recursive and conditional in nature, largely dependent upon the accuracy rating of procured data. The design approach also depends on selective data-extraction algorithms.

In terms of tool sets, DevOps engineers see Jenkins, CircleCI, Travis, Spinnaker and Jenkins X as CI/CD industry standards, but they aren’t AIOps-ready like tools such as Argo — at least not yet.

So, while AIOps augments DevOps with machine learning technology, AIOps isn’t the new DevOps — and ops teams should ignore the hype that tells them otherwise.


What roles do vCPE and uCPE have at the network edge?

As service providers look to virtualize the edge and deliver network services faster, they’re turning to vCPE and uCPE that run services as software on generic hardware.

John Burke

CIO and Principal Research Analyst – Nemertes Research – SearchEnterpriseWAN

While software-defined networking is only just starting to gain significant traction within enterprise networks, it has transformed network service providers. They are bringing SDN technology to the enterprise edge now, in the form of virtualized customer premises equipment, or vCPE.

In the past, a provider delivering a set of network services would use specialized hardware to deliver the services. This hardware was most often a branch router, and it sometimes included one or more separate firewalls, a distributed denial-of-service defense device or WAN optimizer — all in the branch.

Now, the goal is to have generic hardware at the branch and have it run some or all of the software needed to provide all of the desired services.

Shifting the burden from hardware development to software development brings providers unprecedented flexibility and agility in addressing market needs. It also simplifies deployments if a single box — a so-called universal CPE, or uCPE — can run the branch end of any service needed.

VNFs: Making services more modular


The traditional software-only delivery of a network function has focused on virtual appliances, which tend to fully replicate the functions of a hardware appliance in a single unit. The network functions virtualization approach separates the appliance into smaller function parcels — virtual network functions (VNFs) that cooperate to deliver network functionality.

A service provider can dynamically push VNFs down to the CPE platform to run on the customer premises, run the VNFs in server resources on the provider side of the edge or even run them in their own core — wherever makes the most sense for that service and that customer. Firewall functionality, for example, can often be best delivered on the provider side of a link — why deliver a packet that will be thrown away as soon as it hits the CPE? But compression services are best delivered on the customer side to maximize their effect.

Changing the WAN

Virtualization, uCPE and vCPE are creating new opportunities for both providers and customers to adopt new services, try new platforms and transform their IT infrastructures. Enterprises are keenly interested in software-defined WAN right now, and many providers use a vCPE model to deliver SD-WAN.

Some providers adopt a fully edge-hosted model, in which a uCPE box hosts a complete SD-WAN package — one that could run on dedicated hardware. Others deploy a hybrid edge or cloud model, where the SD-WAN depends — to some extent — on services delivered from the provider’s cloud. Still others have a fully cloud-hosted model, like network-as-a-service providers delivering SD-WAN as a feature set service.

Whichever model a service provider uses, the number and breadth of vCPE deployments are exploding in the wake of providers’ internal SDN transitions and with the strength of interest in SD-WAN.


app virtualization (application virtualization)

App virtualization (application virtualization) is the separation of an installation of an application from the client computer accessing it.

Posted by: Margaret Rouse

WhatIs.com

Contributor(s): Jack Madden and Robert Sheldon


From the user’s perspective, the application works just like it would if it lived on the user’s device. The user can move or resize the application window, as well as carry out keyboard and mouse operations. There might be subtle differences at times, but for the most part, the user should have a seamless experience.

How application virtualization works

Although there are multiple ways to virtualize applications, IT teams often take a server-based approach, delivering the applications without having to install them on individual desktops. Instead, administrators implement remote applications on a server in the company’s data center or with a hosting service, and then deliver them to the users’ desktops.

To make this possible, IT must use an application virtualization product. Application virtualization vendors and their products include Microsoft App-V, Citrix XenApp, Parallels Remote Application Server, and VMware ThinApp or App Volumes — both of which are included with VMware Horizon View. VMware also offers Horizon Apps to further support app virtualization.

The virtualization software essentially transmits the application as individual pixels from the hosting server to the desktops using a remote display protocol such as Microsoft RemoteFX, Citrix HDX, or VMware View PCoIP or Blast Extreme. The user can then access and use the app as though it were installed locally. Any user actions are transmitted back to the server, which carries them out.

Benefits of app virtualization

App virtualization can be an effective way for organizations to implement and maintain their desktop applications. One of the benefits of application virtualization is that administrators only need to install an application once to a centralized server rather than to multiple desktops. This also makes it simpler to update applications and roll out patches.

In addition, administrators have an easier time controlling application access. For example, if a user should no longer be able to access an application, the administrator can deny access permissions to the application without having to uninstall it from the user’s desktop.

App virtualization makes it possible to run applications that might conflict with a user’s desktop applications or with other virtualized applications.

Users can also access virtualized applications from thin clients or non-Windows computers. The applications are immediately available, without having to wait for long install or load operations. If a computer is lost or stolen, sensitive application data stays on the server and does not get compromised.

Drawbacks of app virtualization

Application virtualization does have its challenges, however. Not all applications are suited to virtualization. Graphics-intensive applications, for example, can get bogged down in the rendering process. In addition, users require a steady and reliable connection to the server to use the applications.

The use of peripheral devices can get more complicated with app virtualization, especially when it comes to printing. System monitoring products can also have trouble with virtualized applications, making it difficult to troubleshoot and isolate performance issues.

What about streaming applications?

With streaming applications, the virtualized application runs on the end user’s local computer. When a user requests an application, the local computer downloads its components on demand. Only certain parts of an application are required to launch the app; the remainder download in the background as needed.

Once completely downloaded, a streamed application can function without a network connection. Various models and degrees of isolation ensure that streaming applications do not interfere with other applications, and that they can be cleanly removed when the user closes the application.


Windows Autopilot

Windows Autopilot is a desktop provisioning tool native to Windows 10 that allows IT professionals to automate image deployment of new desktops with preset configurations.

Posted by: Margaret Rouse

WhatIs.com

Contributor(s): John Powers


With Windows Autopilot, IT professionals can set new desktops to join pre-existing configuration groups and apply profiles to the desktops so new users can access fully functional desktops from their first logon. Windows Autopilot can simplify the out-of-box experience (OOBE) for new desktop users in an organization.

To use Windows Autopilot, IT must connect the devices to a Microsoft Azure portal and enroll them in Microsoft Azure Active Directory. Once IT enrolls the device or devices, it can assign a desktop image to each user before users register their devices. With the images in place, the final step is for users to log on and input their company credentials for identity verification.

How it works

The Windows Autopilot device registration process begins with IT logging a new device’s hardware ID and device type. Windows Autopilot requires IT professionals to add this information in the form of a comma-separated values (CSV) file to their organization’s Windows Autopilot registry.
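The registration file itself is just a CSV. The sketch below assembles one in Python; the column headers shown (Device Serial Number, Windows Product ID, Hardware Hash) match the format commonly documented for Microsoft’s Get-WindowsAutoPilotInfo script, but treat them as an assumption and confirm against current Microsoft guidance before uploading to your tenant.

```python
# Minimal sketch: assemble a device registration CSV for Windows Autopilot.
# The column headers below follow the format commonly documented for
# Microsoft's Get-WindowsAutoPilotInfo script (an assumption -- confirm
# against current Microsoft documentation before uploading).
import csv

# Placeholder device records gathered from the machines being registered.
devices = [
    {
        "Device Serial Number": "0123-4567-89AB",
        "Windows Product ID": "00330-80000-00000-AA111",
        "Hardware Hash": "PLACEHOLDER-HARDWARE-HASH",
    },
]

with open("autopilot-devices.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=list(devices[0].keys()))
    writer.writeheader()
    writer.writerows(devices)

print("Wrote autopilot-devices.csv with", len(devices), "device(s)")
```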

Each device needs a Windows Autopilot profile as well, which defines the terms of the device’s desktop deployment. The profiles can overwrite local desktop administrator privileges, disable Microsoft Cortana and other local applications, apply custom privacy settings, and more.

Autopilot allows IT to create custom deployment profiles.

Once IT professionals define the profiles for each new device they configure with Windows Autopilot, they should wait for the user to first access the device before taking any further action. Once the user accesses the device and loads the new desktop, IT can perform its typical endpoint management practices.

Windows Autopilot includes several zero-touch provisioning (ZTP) features, such as self-deploying mode, which offers the simplest experience for the end user. All the user has to do is input their information and watch as the desktop provides updates on the enrollment status.

Benefits/drawbacks

Windows Autopilot lets IT professionals quickly deploy profiles to new devices with basic profile settings. The simple CSV file format for inputting new devices allows IT to deploy preset profiles to a large number of devices all at once.

Windows Autopilot doesn’t offer as many desktop profile options as some other configuration tools. Windows Configuration Designer, for example, is a tool designed for provisioning in bring your own device (BYOD) use cases that provides a high level of control over each profile by allowing IT to alter more specific configurations. Microsoft Intune, an enterprise mobility management platform in Microsoft’s cloud-based Enterprise Mobility and Security offering, provides additional mobile device management options.

IT professionals can use Windows Autopilot with these additional utilities to perform different provisioning functions. Windows Autopilot offers quick configurations and a simple OOBE for end users, and other tools target more complicated desktop image profiles or profiles for mobile devices.


How to address endpoint security issues caused by users

Certain behaviors, such as ignoring patches, create security issues on the endpoints users work with. IT should enforce policies that prevent users from taking these damaging actions.

Kevin Beaver

Principle Logic, LLC – SearchSecurity

A crucial function of endpoint security is protecting users from their own mistakes and missteps.

From human error to technical oversights and weaknesses in business processes, there are many ways that users can cause endpoint security issues. Users can make mistakes even if they understand the risks to the business because their desire for expediency and instant gratification is too strong. Some of the problems are the same behaviors IT professionals have been fighting for decades, but others aren’t as obvious.

There’s no amount of security awareness and training that will make this go away completely, but IT professionals must understand each of the endpoint security issues users might cause and the best practices for handling them.

Endpoint security issues caused by users

Choosing weak passwords. Password policies for Windows domains, websites, applications and mobile devices are often lax. Users follow whatever guidance they are given, even if it’s not good advice, which leads them to create passwords that hackers can easily guess or crack. Users sometimes share passwords between systems, mixing personal and business passwords, and might write them down and store them on sticky notes.

Ignoring patch notifications. Because most users don’t see the value in running patches and rebooting their desktops and apps, they are likely to ignore patch notifications, whether the patches are for desktop operating systems, such as Microsoft Windows or Apple macOS, or for third-party software, such as Java and Adobe Acrobat Reader. Doing so leaves known vulnerabilities unpatched on the endpoints.

Clicking links and opening attachments without question. It’s so simple for hackers to get into a network by phishing users. Users might click malicious links, open unknown attachments or even provide their login credentials when prompted. If phishing security is not up to snuff, no other security controls matter, because once an attacker has a user’s login information, he has full access to the endpoint.

Bypassing security controls. Most of the time, endpoints automatically give users local administrator rights. With these rights, users can perform tasks that are ultimately harmful to their endpoint’s security, such as disabling antimalware software and installing their own questionable software.

Unfortunately, it can be difficult to detect the harmful changes a user might make on his device if he has local admin rights. As a result, IT might not realize that a user has done something dangerous, which could leave business assets exposed.

Connecting to unsecured Wi-Fi. Users might connect to practically any open wireless network without question if it means they can access the internet. Even if IT instructs users to verify their connections and to only use trusted Wi-Fi networks, all those teachings go out the window the second a user needs to get online for a few minutes to check email or social media.

Buying and selling personal computers without resetting them. It’s amazing how many people don’t reset their computers by reinstalling the OS when they sell them. Users who do not reinstall the OS expose personal information and place business assets, such as virtual private network connections, at risk. It is dangerous to recycle old computers without taking precautions.

How can IT address these endpoint security issues?

Users can be careless and often take the path of least resistance simply because it’s most convenient. In reality, a small number of people and choices cause the majority of endpoint security issues.

IT can’t control user behavior, but it can control users’ desktop permissions. IT professionals must enforce security policies that prevent users from taking harmful actions rather than only telling users how to avoid those actions.

To effectively prevent these endpoint security issues, IT must determine what specific user actions are undermining the security program. IT pros should create processes and controls to prevent user mistakes, evaluate how effective they are and make alterations when necessary to ensure that the policies can handle the latest security threats.


Are hyper-converged infrastructure appliances my only HCI option?

Preconfigured hyper-converged appliances aren’t your only option anymore. Software-only and build-your-own hyper-converged infrastructure have unique pros and cons.

Alastair Cooke

SearchVirtualDesktop

Freelance trainer, consultant and blogger specializing in server and desktop virtualization

There are multiple ways to approach a hyper-converged infrastructure deployment, some of which give IT a little more control.

When we talk about building a hyper-converged infrastructure (HCI), the mental image is usually deploying some physical appliances using high-density servers and spending a few minutes with some wizard-driven software. But buying hyper-converged infrastructure appliances is just one way to do it.

As an IT professional, you can also deploy software-only HCI on your own servers. Or you can start from scratch and engineer your own infrastructure using a selection of hardware and software. The further you move away from the appliance model, however, the more you must take responsibility for the engineering of your deployment and problem resolution.

Let’s look more closely at hyper-converged infrastructure appliances and some do-it-yourself alternatives.

Preconfigured hyper-converged appliances

Hyper-converged infrastructure appliances wrap up all of their components into a single orderable product. The vendor does all of the component selection and the engineering to ensure that everything works together and performs optimally.

Usually, the hyper-converged appliance has its own bootstrap mechanism that deploys and configures the hypervisor and software with minimal input from IT. For many customers, this ease of use is a big reason for deploying HCI, making it possible to largely ignore the virtualization infrastructure and focus instead on the VMs it delivers.

Software-only HCI

One of the big reasons for selecting a software-only hyper-converged infrastructure is that it offers hardware choice. You may have a relationship with a preferred server vendor and need to use its hardware. Or you may simply want an unusual combination of server hardware.

Another example is that you may want a lower cost, single-socket server option, particularly if you are deploying to a lot of remote or branch offices. If you are deploying to retail locations, you may need servers that will fit into a shallow communications cabinet rather than a data center depth rack. Once you select your hardware, you are responsible for the consequences of those choices.

If you choose the wrong network interface card or a Serial-Attached SCSI host bus adapter, you may find support is problematic, or performance may not match your expectations.

HCI from scratch

You can also build your own hyper-converged infrastructure using off-the-shelf software and hardware, a hypervisor and some scale-out, software-defined storage (SDS) in VMs.

As with software-only HCI, you are taking responsibility for this decision and its consequences. You can probably buy support for the hypervisor and the SDS, but what about potential interoperability issues between the layers? What is the service level for resolving performance problems?

Building a platform from scratch instead of buying preconfigured hyper-converged infrastructure appliances is only sensible if you have your own full support team providing 24/7 coverage.
