What priority should an IT operations team give to security tasks?
To help secure data and applications, an IT ops team needs to do much more than put up firewalls and apply other traditional security measures. Here’s where to begin.
VP of Technology – ACI Information Group – SearchCloudApplications
Spectre, Meltdown and similar zero-day vulnerabilities are the scary sorts of things that keep operations teams — especially those with IT security roles — awake at night. Fortunately for most cloud-based companies, those vulnerabilities can be addressed with the latest software updates or an adjustment to your Amazon Elastic Compute Cloud machine images. Organizations that run on serverless platforms have it even easier, needing only to wait for Amazon, Microsoft or Google to apply the patches to the underlying hardware.
Still, these vulnerabilities account for only a small fraction of the attack surface that modern-day operations teams must watch over.
To take their IT security roles seriously, these staffers need to be concerned with stolen credentials and corrupted code repositories, among numerous other threats. Custom alerts can help ops teams detect abnormal conditions, and software-testing procedures can be adjusted to include security risk detection. There’s plenty for ops to do.
Consider a Node.js application deployed on AWS Lambda's serverless platform. Every dependency included in that application could become compromised and lead to malicious code running on your site. Such a calamity could result in the loss of your users' data or your intellectual property.
Systems for continuous integration (CI) and continuous delivery (CD) allow developers to iterate much faster. This is generally a good thing, since it produces smaller deployments and tends to result in fewer bugs. Unfortunately, CI/CD tools rely on third-party systems to gather packages and requirements, and those repositories can become compromised.
One recent incident, for example, exposed a critical vulnerability in the NPM code repository: supposedly safe packages were replaced with nearly identical ones containing attack code. Since NPM packages can include build and deployment hooks as well, such a swap could do anything from stealing the AWS credentials used to deploy your application to harvesting credit card numbers and passwords. Even packages you've fully vetted and have been using for years could be compromised during a new deployment.
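This is why lockfiles record an integrity hash for every package: a tarball whose contents no longer match the recorded hash should never be installed. As a minimal sketch of that idea (the payloads here are stand-ins, not real packages), NPM-style Subresource Integrity strings can be checked with nothing but a hash function:

```python
import base64
import hashlib

def verify_integrity(tarball_bytes: bytes, integrity: str) -> bool:
    """Check an npm-style Subresource Integrity string ("sha512-<base64>")
    against the actual bytes of a downloaded package tarball."""
    algo, _, expected_b64 = integrity.partition("-")
    digest = hashlib.new(algo, tarball_bytes).digest()
    return base64.b64encode(digest).decode() == expected_b64

# A package swapped out after the lockfile was written fails the check.
payload = b"legitimate package contents"
recorded = "sha512-" + base64.b64encode(hashlib.sha512(payload).digest()).decode()
print(verify_integrity(payload, recorded))               # True
print(verify_integrity(b"tampered contents", recorded))  # False
```

Tools such as `npm ci` perform exactly this kind of verification against `package-lock.json` on every install, which is one reason to prefer it over a bare `npm install` in build pipelines.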
Previously, operations teams could mitigate some of this risk simply by controlling the hardware. Also, they could put in place specialized firewalls to prevent suspicious network traffic from causing issues, such as a site trying to upload credit card numbers to a known malicious IP address. With the move to cloud serverless technologies, much of this control has been taken away from ops, even while their IT security roles remain.
Add detection to the CI/CD process
For teams with well-defined CI/CD practices, the build process should already have automated unit testing in place for bugs. It’s a natural progression to also require that build step to add in tests for security vulnerabilities. Many tools and organizations can help with this sort of thing, including Snyk and the Open Web Application Security Project, or OWASP. Ops teams are typically responsible for setting up these types of tools, and many of them can be set to run one-time scans before a build, as well as perform ongoing checks of production systems.
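As a sketch, a build job like the one below fails the pipeline when a dependency carries a known high-severity advisory. The job name is illustrative; `npm audit` ships with recent versions of npm, and `snyk test` requires the Snyk CLI and an authenticated account:

```yaml
# Illustrative CI job: run the existing unit tests, then fail the
# build if any dependency has a known high-severity vulnerability.
test_and_scan:
  script:
    - npm ci                               # install exactly what the lockfile pins
    - npm test                             # existing automated unit tests
    - npm audit --audit-level=high         # npm's built-in vulnerability scan
    - snyk test --severity-threshold=high  # Snyk CLI; needs an auth token configured
```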
Additionally, ops teams with IT security roles or concerns may choose to create a custom in-house repository. For example, NPM Enterprise allows companies to include a feature-compatible version of NPM. This can be maintained by an internal team, behind a firewall, and it prevents the installation of third-party plug-ins that aren’t pre-approved. This can lead to faster, more secure and more reliable deployments.
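Pointing npm at such an internal registry is typically a one-line `.npmrc` entry (the URL here is hypothetical); packages that haven't been mirrored into the internal registry simply can't be installed:

```ini
# .npmrc — resolve all packages through the internal, firewalled
# registry instead of the public one (URL is illustrative)
registry=https://npm.internal.example.com/
```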
Some attacks result from things that cannot be identified before a system is in production. For example, users’ accounts can be breached. Or, even worse, a developer’s account can be compromised.
With AWS, it's critically important that each service has strict identity permissions. For example, a user-facing API probably shouldn't have the ability to create new Elastic Compute Cloud instances or to delete users. Developers should be granted permissions gradually and not given write access until they've proven they won't accidentally wipe out the entire database. And no one should have root AWS credentials, except maybe the CTO.
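A quick way to keep permissions honest is to lint policies for wildcard grants before they're attached. The sketch below is a simple check, not a full policy analyzer (the policy document is a made-up example, and it only inspects top-level `Action` and `Resource` values):

```python
def overly_broad(policy: dict) -> list:
    """Flag Allow statements that grant wildcard actions or resources —
    a quick least-privilege lint, not a full IAM policy analyzer."""
    findings = []
    for stmt in policy.get("Statement", []):
        if stmt.get("Effect") != "Allow":
            continue
        actions = stmt.get("Action", [])
        if isinstance(actions, str):
            actions = [actions]
        if "*" in actions or stmt.get("Resource") == "*":
            findings.append(stmt)
    return findings

# A user-facing API role should touch its own table and nothing else.
api_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {"Effect": "Allow", "Action": ["dynamodb:GetItem"],
         "Resource": "arn:aws:dynamodb:us-east-1:123456789012:table/Users"},
        {"Effect": "Allow", "Action": "*", "Resource": "*"},  # should be flagged
    ],
}
print(len(overly_broad(api_policy)))  # 1
```

Running a check like this in the same CI pipeline that deploys the policies catches the "temporary" `Action: *` grant before it reaches production.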
It’s also important to make sure all identity and access management (IAM) users are required to have multifactor authentication (MFA) tokens set up, and it may be useful to turn on S3 versioning as well as require an MFA token to delete S3 objects.
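Enabling versioning plus MFA Delete on a bucket is a one-time configuration call. As a sketch (the bucket name, account ID and token are placeholders, and note that AWS only permits the root account to enable MFA Delete):

```shell
aws s3api put-bucket-versioning \
  --bucket my-critical-bucket \
  --versioning-configuration Status=Enabled,MFADelete=Enabled \
  --mfa "arn:aws:iam::123456789012:mfa/root-account-mfa-device 123456"
```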
It’s always a good idea to back up critical data in another location — and encrypt it, if it’s sensitive. It’s important to note, however, that when you store backups in different locations, you’re increasing the exposure of that data to attackers. More backups are not always better.
Most cloud providers offer managed backup options. Those should always be the first choice.
Monitor for unusual activity
Even with strict policies in place and personnel focused on IT security roles, it’s inevitable that something will go wrong. Credentials will either be leaked accidentally or be exposed through some malicious code installed on someone’s computer.
It’s important for operations teams to monitor cloud activity. For AWS users, this is typically done via CloudWatch. In Azure, consider Operational Insights, Application Insights or other monitoring tools.
It’s also worth setting up custom alarms. These help you spot abnormalities, such as when an IAM account performs operations that deviate from normal patterns; that unusual behavior could indicate a system compromise.
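On AWS, one common approach is a CloudWatch Logs metric filter over CloudTrail that counts rejected API calls, which spike when leaked credentials are being probed. The pattern below is the one recommended in the CIS AWS Foundations Benchmark; an alarm attached to the resulting metric pages the team when the count climbs:

```
{ ($.errorCode = "*UnauthorizedOperation") || ($.errorCode = "AccessDenied*") }
```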
It can be trickier to identify issues with end users. While some problems can be obvious — such as when a user tries to log into their account from China an hour after logging in from California — other situations aren’t as readily apparent. Anomaly detection and manual approval of suspicious requests can be useful in preventing unwanted activity. Several services can help manage these types of rules, and most authentication services, such as Auth0, already provide built-in anomaly detection.
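The "China an hour after California" case is often called an impossible-travel check: flag any login pair whose implied speed no airliner could achieve. A minimal sketch (the 900 km/h threshold and the coordinates are illustrative) needs only the haversine formula:

```python
from math import radians, sin, cos, asin, sqrt

def impossible_travel(lat1, lon1, lat2, lon2, hours_between) -> bool:
    """Flag a pair of logins whose implied travel speed exceeds what a
    commercial flight could cover (~900 km/h) — a crude anomaly rule."""
    # Haversine great-circle distance in kilometers
    dlat, dlon = radians(lat2 - lat1), radians(lon2 - lon1)
    a = sin(dlat / 2) ** 2 + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2
    distance_km = 2 * 6371 * asin(sqrt(a))
    return distance_km / max(hours_between, 1e-9) > 900

# A San Francisco login followed one hour later by one from Shanghai:
print(impossible_travel(37.77, -122.42, 31.23, 121.47, 1.0))  # True
```

In practice a rule like this would trigger a step-up challenge (MFA prompt or manual review) rather than a hard block, since VPNs and mobile carriers produce false positives.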
Web application firewalls can also add protection to your web-based access points, blocking traffic with predefined community rule sets as well as custom logic based on patterns your operations team identifies.
For example, if someone is trying to access a wp-admin URL on your custom in-house application, chances are they're trying to break in. Many targeted vulnerabilities affect WordPress and other applications written in PHP, so operations teams should watch for requests to suspicious URLs and be ready to block all traffic from the offending IP addresses.
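The pattern list below is a small illustrative sketch of that idea: on an application that serves no PHP at all, any request for a WordPress or PHP endpoint can be treated as a probe and its source IP added to a blocklist (the patterns shown are examples, not an exhaustive rule set):

```python
import re

# Illustrative probes for WordPress/PHP endpoints that a non-PHP
# application has no legitimate reason to serve.
SUSPICIOUS = re.compile(
    r"(/wp-admin|/wp-login\.php|/xmlrpc\.php|\.php$)",
    re.IGNORECASE,
)

def looks_like_probe(path: str) -> bool:
    """Return True when the request path matches a known scanner pattern."""
    return bool(SUSPICIOUS.search(path))

print(looks_like_probe("/wp-admin/setup.php"))  # True
print(looks_like_probe("/api/v1/users"))        # False
```

Managed WAF rule sets express the same logic declaratively, but a check like this in application middleware is a cheap first line of defense.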
This was last published in May 2018