Four situations where a multi-cloud strategy makes sense

Many businesses can benefit from using multiple cloud services, especially if they need to ensure reliability, protect privacy, remain flexible and optimize the cloud experience.

If one cloud is good, then using multiple clouds should be even better, right? Well, not always. But there certainly are circumstances where such a strategy makes sense. IT teams should consider a multi-cloud strategy when they need to ensure service reliability, meet privacy requirements, incorporate more flexibility or optimize their use of cloud services.

Ensure service reliability

Organizations first started using multiple cloud services because they were uncertain about the cloud’s reliability. They wanted to protect against data loss and ensure business continuity in the event of disaster. Even if a cloud provider offers data centers across multiple regions and can ensure a secure level of redundancy, the possibility still exists that an event, such as a zero-day attack or a rogue employee incident, could impact the data at a global level.

Organizations that store data in the cloud should have a disaster recovery strategy in place that protects against the loss of data critical to business operations. Although relying on a single provider with data duplicated across multiple regions is better than no redundancy, multiple services reduce risks even further.

IT teams should think hard about using a single provider to store all copies of their data. Even if an event doesn’t result in permanent data loss, service providers can still experience a temporary disruption in services. All it takes is a few hours of downtime to affect operations. By implementing a failover strategy across multiple cloud platforms, an organization can more easily keep applications running and employees productive, regardless of what types of disruptions occur.
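
To make the failover idea concrete, here is a minimal sketch of an application-level health check that falls back from one cloud to another. The endpoint URLs are hypothetical placeholders; a real deployment would more likely rely on DNS failover or a global load balancer, but the decision logic is the same.

```python
# Minimal sketch of an application-level failover check across two clouds.
# The endpoint URLs are hypothetical placeholders; a real deployment would
# typically rely on DNS failover or a global load balancer instead.
import urllib.request
import urllib.error

ENDPOINTS = [
    "https://app.primary-cloud.example.com/health",    # e.g., hosted on cloud A
    "https://app.secondary-cloud.example.com/health",  # replica on cloud B
]

def first_healthy_endpoint(endpoints, timeout=3):
    """Return the first endpoint that answers its health check, else None."""
    for url in endpoints:
        try:
            with urllib.request.urlopen(url, timeout=timeout) as resp:
                if resp.status == 200:
                    return url
        except (urllib.error.URLError, OSError):
            continue  # endpoint unreachable, try the next cloud
    return None

if __name__ == "__main__":
    active = first_healthy_endpoint(ENDPOINTS)
    print(f"Routing traffic to: {active or 'no healthy endpoint found'}")
```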

According to RightScale’s 2017 State of the Cloud Report, 85% of enterprises have a multi-cloud strategy.

Meet privacy requirements

Organizations hosting data in the cloud are subject to industry regulations and policies that govern how and where private information can be stored. For example, some countries require that data be physically contained within that country or region. An organization that offers global services might have a difficult time finding a cloud provider that can meet the data sovereignty requirements across all regions.

A multi-cloud strategy lets an organization meet these requirements on a case-by-case basis, while remaining flexible enough to address changing rules and laws. If an organization is locked in to a single vendor and regulations change, the migration can be difficult. However, if the IT team has already implemented a multi-cloud structure that supports flexible data migrations, switching to a new provider is much less painful.

Organizations with privacy concerns can also use a multi-cloud environment to break up sensitive data and distribute it across multiple platforms. For example, you can use erasure coding to break data into fragments and store it across different locations. In this way, no single cloud service has a complete copy of the data. Even if a provider’s data center is breached, the attacker cannot read sensitive information.
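
As a simplified illustration of that idea, the sketch below splits data into fragments so that no single fragment is readable on its own. It uses a basic XOR-based split as a stand-in for true erasure coding, which would also add parity so the data survives the loss of a fragment; the fragment count and sample data are arbitrary.

```python
# Simplified illustration of splitting data so that no single cloud holds a
# readable copy. This XOR-based split is a stand-in for true erasure coding
# (e.g., Reed-Solomon), which would also add redundancy so the data survives
# the loss of a fragment. Requires n >= 2 fragments.
import os

def xor_all(chunks):
    """XOR a list of equal-length byte strings together."""
    out = bytes(len(chunks[0]))
    for c in chunks:
        out = bytes(a ^ b for a, b in zip(out, c))
    return out

def split(data, n):
    """Split data into n fragments; all n are required to reconstruct it."""
    shares = [os.urandom(len(data)) for _ in range(n - 1)]
    last = bytes(b ^ x for b, x in zip(data, xor_all(shares)))
    return shares + [last]

def combine(fragments):
    """XOR all fragments back together to recover the original data."""
    return xor_all(fragments)

if __name__ == "__main__":
    secret = b"customer-record-123"
    fragments = split(secret, 3)      # e.g., one fragment per cloud provider
    assert combine(fragments) == secret
    print("No single fragment reveals the data:", fragments[0] != secret)
```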

Incorporate more flexibility

Avoiding vendor lock-in is a primary reason organizations turn to multiple cloud services.

IT teams that get locked in to a single storage platform can have a difficult time moving data and applications to other platforms once they’ve committed to a single infrastructure. A multi-cloud approach leads to increased flexibility and less reliance on any one vendor, while offering greater portability among heterogeneous environments. Moving applications and data from one cloud to the next — or even from on premises to the cloud — is much simpler and faster with a multi-cloud architecture in place.

Having this type of flexibility also makes it possible to respond quickly to changing or temporary business requirements. For example, it’s easier for an IT team to accommodate unexpected demand for increased storage capacity as a result of a new sales campaign or to provide temporary resources for one-off projects, such as testing a new analytics offering.

Optimize cloud services

Cloud providers’ subscription pricing varies depending on resource usage, application types and other factors. Organizations that implement a multi-cloud strategy can take advantage of the most cost-effective services available based on their storage and workload requirements.

They can also gain performance advantages with multiple cloud services. For example, an organization can choose providers based on where their data centers are located, making it possible to host applications and data nearer to the users accessing them. In addition, some providers might be able to address specific storage and application requirements more efficiently than others, offering better performance for certain workload types. Using multiple clouds makes it possible to choose the best performing platform based on current needs and available services.

Cloud providers also vary in the types of features and services they provide. Some might offer better business intelligence and machine learning capabilities, while others might provide better storage options across more regions. A multi-cloud strategy allows you to use the cloud provider that offers the best service for a specific need, without having to settle for one service that provides only adequate capabilities in several areas.

Opting for a multi-cloud strategy

If you want to get the most out of your cloud services, you should give a multi-cloud approach serious consideration. That way, if one cloud platform doesn’t meet a specific need, you can move to the next platform.

But a multi-cloud setup isn’t always the right answer. Implementing an effective multi-cloud strategy is no small task, and an IT team must have the resources and expertise necessary to ensure that all the pieces fit together neatly and securely. In addition, there are benefits to using one cloud provider. A large provider can offer certain discounts for using multiple services that you may not be able to get by spreading your services across providers. Also, a single provider can give you a centralized management console for all of its services, something that isn’t yet available across multiple clouds.

Even so, using multiple clouds offers clear advantages for businesses of all types, helping them ensure reliability, protect privacy, remain flexible and optimize the cloud experience.

Multi-cloud vs. hybrid cloud: Assessing the pros and cons

Both multi-cloud and hybrid cloud architectures provide businesses with flexibility. The degree to which the public and private clouds involved are integrated is a differentiator.

Multi-cloud is an IT term du jour. Of course, we already have public cloud, private cloud, enterprise cloud and hybrid cloud, but those apparently don’t describe architectures that really embrace the cloud. So what exactly encompasses the multi-cloud wonderland, and how does it compare to other cloud options?

Obviously, a multi-cloud includes multiple clouds. So does a hybrid cloud. But when it comes to multi-cloud vs. hybrid cloud, there’s a key difference that’s nudging the market to focus more on multi-cloud.

A hybrid cloud is a single entity, defined as an amalgamation of a private cloud environment with one or more public cloud environments. These can be any combination of software as a service, IaaS, PaaS and any other as-a-service environment you can conceive. But, it’s a singular noun, describing a singular entity.

Multi-cloud, by nature, isn’t one thing, but rather a series of entities that must be brought under centralized management.

To some extent the multi-cloud vs. hybrid cloud discussion is semantics, and you can safely interchange the two terms. But a hybrid cloud usually includes a combination of public and private clouds. Multi-cloud makes no distinction between the kinds of clouds that you operate. Perhaps your multi-cloud doesn’t have a private cloud environment at all, and you operate everything on AWS and Microsoft Azure with a little bit of G Suite thrown in. That’s a multi-cloud environment. Ta-da!

There’s another difference to be aware of when looking at multi-cloud vs. hybrid cloud. In a multi-cloud environment, the individual clouds may not be integrated with one another. That’s part of the reason for the plurality in multi-cloud as opposed to the singularity of hybrid cloud. In a hybrid cloud environment, one of the sometimes incorrect assumptions is that the cloud components are integrated to form the cohesive singular entity.

Cloud evolution

As the way people think about the cloud has changed, the terms used to describe it have evolved as follows:

  • With a private cloud, everything is inside an organization’s data center. Services have their own sandboxes, and application design is monolithic.
  • A public cloud is external to the data center. It’s service- and app-centric with lines between each app. Cloud-native applications are more modular, but people still treat the environment as a data center rather than changing their thinking.
  • A hybrid cloud is a bit of both. Each side is separate, but they form a greater whole. It’s still app-centric for the most part, but infrastructure integration is greater. And it has the beginnings of distributed application support.
  • With multi-cloud, applications can span clouds, but they don’t have to. Components of an application live wherever it makes sense. People don’t see data centers anymore, but they view the multi-cloud as a massive fabric that binds together application components.

This list is intended to show a progression and isn’t comprehensive.

Multi-cloud vs. hybrid cloud: The upsides

A hybrid cloud provides an organization with the flexibility to use services from and deploy workloads to both on-premises private clouds and public clouds. For instance, a mission-critical workload with significant security requirements can be deployed to the private cloud, where the business retains control over the infrastructure and software stack. Other workloads, such as web servers and test environments, may be deployed to a public cloud. This frees an organization from having to invest in a full private cloud infrastructure for every workload and lets it pay only for the resources it uses for workloads that can be deployed to the public cloud.

In addition, a hybrid cloud lets an organization take advantage of the scalability that the public cloud offers to do something like process infrequent, but intensive, big data analytics that involves creating a large Hadoop cluster. Hybrid clouds also let businesses share resources among clouds. They can use a private cloud to run a workload even while that workload’s data is stored in the public cloud. They can also migrate a workload between public and private clouds to take advantage of fluctuating resource costs and network traffic levels.

With multi-cloud, the world becomes your playground. You get the most comprehensive mix of public and private clouds, and you don’t necessarily need to deeply integrate them. Of course, depending on how you use such services, you may want to integrate them, but it isn’t required by definition. For example, you may want to deploy different parts of a distributed application in multiple clouds in order to protect against the failure of one.

A multi-cloud approach also provides organizations and application developers with the ability to pick and choose the discrete components that will comprise their applications and workloads. There are no more technical barriers to leap over, and developers can select specific services that meet their needs rather than settling for what a single provider offers.

The downsides of cloud options

For all the upsides of both approaches, in the multi-cloud vs. hybrid cloud debate, there are also downsides. Hybrid clouds can be complex to implement and maintain. Deploying the private cloud piece of the hybrid setup can be challenging in itself. It requires an extensive infrastructure commitment and significant staff expertise. On top of that, to be considered a hybrid model, the private cloud must be integrated with at least one public cloud to the extent that the underlying software stacks work together. As the private cloud is integrated with multiple public clouds, it becomes even more challenging and complex.

Hybrid clouds also present their own management, security and orchestration challenges. To maintain a reasonable level of efficiency, most organizations will want to integrate both sides of the cloud as deeply as possible. This would require a hybrid approach that enables federated and consistent identity management and authentication processes. Depending on the service you’re integrating, you may also need to worry about other potential vulnerabilities, such as securing API traffic exchanges. On the orchestration side, a hybrid cloud might require an intelligent workload deployment tool that’s able to determine deployment targets based on costs, security, traffic, the availability of public clouds and other criteria.
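
A minimal sketch of what such a placement decision might look like appears below: candidate targets are scored against weighted criteria and ranked. The targets, scores and weights are illustrative assumptions, not real provider data, and a production tool would draw them from live cost and monitoring feeds.

```python
# Hedged sketch of how a workload placement tool might rank deployment
# targets by weighted criteria. The targets, scores and weights below are
# illustrative placeholders, not real provider data.
WEIGHTS = {"cost": 0.4, "security": 0.3, "latency": 0.2, "availability": 0.1}

# Each candidate is scored 0-1 per criterion (higher is better).
CANDIDATES = {
    "private-cloud":  {"cost": 0.5, "security": 0.9, "latency": 0.8, "availability": 0.7},
    "public-cloud-a": {"cost": 0.8, "security": 0.6, "latency": 0.6, "availability": 0.9},
    "public-cloud-b": {"cost": 0.7, "security": 0.7, "latency": 0.7, "availability": 0.9},
}

def rank_targets(candidates, weights):
    """Return candidate names sorted by weighted score, best first."""
    def score(criteria):
        return sum(weights[k] * criteria[k] for k in weights)
    return sorted(candidates, key=lambda name: score(candidates[name]), reverse=True)

if __name__ == "__main__":
    for name in rank_targets(CANDIDATES, WEIGHTS):
        print(name)
```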

In the Storage Decisions Tech Talk “Overcoming Hybrid Cloud Hurdles,” George Crump, president of analyst firm Storage Switzerland, looks at what’s keeping some IT groups from implementing hybrid clouds and how to overcome those difficulties.

Using a multi-cloud setup opens a floodgate of security issues. The more clouds you consume, the bigger the security challenge. Remember, in security, the attack surface is the potential impact zone for hackers. The more services you add to your multi-cloud environment, the bigger the attack surface, and the more opportunity you provide for a bad guy to find a weak link.

Also, costs can spiral out of control with multi-cloud if you’re not careful. Skyrocketing cloud bills often take people by surprise. Using multiple clouds makes the situation worse. A poorly constructed database query that uses up CPU cycles in one of those locations can wreak havoc on your budget.

Finally, there’s the issue of governance. The right governance and oversight can counter many of the downsides, but a lot of organizations do governance poorly, and some developers still equate governance with command and control efforts. Nothing could be further from the truth. Governance is the creation of a foundation for future success, while command and control is an equation for long-term mediocrity driven by the wrong people. Good governance will help developers and the organization better focus on outcomes that are positive for the business and that don’t come with unacceptable levels of risk.

Cloud for the win when it comes to information parking

Information storage on the cloud is often the way to go nowadays

Base your decision on where to park your business’s critical data and apps on where they can be most effectively accessed and used. For enterprises today, that’s frequently the cloud.

As recently as five years ago, a large majority of companies kept their most important information assets in the data center, where IT managers believed they could best manage, protect and control data and applications. That is changing, however, as the cloud continues to play an ever-more strategic role in IT investments. When it comes to the choice of where to park their data, companies are increasingly choosing the public cloud over on-premises storage.

One major reason for this shift is clear: Data follows apps, and apps are moving to the cloud. A growing number of organizations now see information storage on cloud services as their preferred platform for new app development, taking advantage of Agile development methodologies and rapidly maturing container and microservices technologies.

The gravitational pull of the cloud is strong for existing apps as well. A little more than four out of 10 IT respondents in a recent Taneja Group survey on public and hybrid cloud deployments said they already moved at least some apps to a software as a service (SaaS) deployment model, while over a third plan to lift and shift apps to run on public cloud infrastructure (see the “Plan for using public cloud infrastructure”).

A second big reason for this shift to the cloud is the sheer amount of data created and housed in information storage on the cloud. As public cloud vendors launch new database, data warehousing and similar services, new data creation has started happening in a big way. Organizations also generate a large volume of unstructured content on websites, social media and similar online activity in the cloud. Think of how many people now store home videos, photos and music in the cloud compared to just a few years ago. Add to that the data streamed to, collected and analyzed in the cloud from IoT, mobile devices and other telemetry apps at the edge. That makes it easy to see how the critical mass of new data creation has shifted from on premises to the public cloud.

Let the use case be your guide

One way to determine where to park information is to base the choice of parking spot on how, where and when that information (apps and data) will most effectively be accessed and used. If a use case happens in the cloud, then your data will likely need to be based there as well. Let’s consider a few common use cases finding growing adoption in the cloud:

Dev/test. From its earliest days, the public cloud provided a productive environment for the development of new cloud-native apps and the migration and testing of legacy workloads. Going forward, well over a third of organizations plan to develop and deploy new apps there.

Data analytics. The cloud is an ideal place to perform analytics of all kinds — irrespective of data type, volume or toolset. It provides an agile and elastic repository for data to be processed, visualized and correlated. Data lakes, for example, are taking off in the cloud, as users recognize inherent advantages such as scalability, pay-as-you-go consumption and the broad availability of analytics services.

File sharing and collaboration. Information parked in the cloud is both highly accessible and easily shared. Unlike traditional on-premises approaches, users can choose from a wide variety of file hosting and sharing services in the cloud to meet the requirements of each specific project or use case. The options allow them to achieve the right balance of features vs. cost.

How IT plans to use public cloud infrastructure in 2018 to 2019

Compliance. The public cloud is becoming a versatile platform for complying with all manner of industry regulations and for satisfying data sovereignty rules. All major public cloud app providers must now comply with the European Union’s GDPR, further enhancing data privacy and user control over how data is stored and consumed. Cloud providers have also popped up region by region and country by country to address data sovereignty concerns.

Data protection. Information storage on cloud services can be protected in many ways. These range from traditional snapshotting and replication between on premises and the cloud or between clouds to the use of traditional backup software. Many companies today also replicate key data from cloud to cloud to enable recovery from a major outage or other disruption.

Data archiving. With services such as Amazon Glacier and Azure Archive Storage, public clouds have become cost-effective repositories for archiving information. This archived data can be semi-active, near-line data that might need to be occasionally accessed or inactive or historical data that must be preserved for a certain time period. Information that begins its life as primary data will eventually pass through these various phases of the data lifecycle. And in an overwhelming majority of instances, the cloud is better equipped to address these phases. From initial creation of a piece of user or application data to ultimate archiving or deletion, the cloud offers the infrastructure, tools and services to support it, whether it is ready to be analyzed, processed, managed or stored.
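
As a rough sketch of that lifecycle in code, the age-based policy below classifies data into primary, near-line and archive tiers. The thresholds and tier names are arbitrary examples; an actual policy would be set by the archiving service or the organization's retention rules.

```python
# Minimal sketch of an age-based lifecycle policy: as data becomes less
# active, it moves from primary storage to near-line and then to an archive
# tier (such as Amazon Glacier or Azure Archive Storage). The thresholds are
# arbitrary examples, not recommendations.
from datetime import date, timedelta

def storage_tier(last_accessed, today=None):
    """Classify data into a storage tier based on how recently it was accessed."""
    today = today or date.today()
    age = today - last_accessed
    if age <= timedelta(days=30):
        return "primary"    # hot, frequently accessed data
    if age <= timedelta(days=365):
        return "near-line"  # occasionally accessed, still readily retrievable
    return "archive"        # inactive or historical data retained for compliance

if __name__ == "__main__":
    print(storage_tier(date(2018, 10, 1), today=date(2018, 11, 1)))  # near-line
    print(storage_tier(date(2015, 1, 1), today=date(2018, 11, 1)))   # archive
```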

Parking in cloud vs. on premises

When it comes to the choice of where to park their data, companies are increasingly choosing the public cloud over on premises.

Not convinced? Let’s consider what you lose by keeping your information on premises. For one thing, you may lose the ability to take advantage of cloud-based innovations such as new analytics approaches or AI or machine learning toolsets introduced by cloud providers. Major providers have launched, and continue to launch, a steady stream of services and are attracting data of all types and in all stages of the lifecycle. For example, think of a machine learning application in the cloud that analyzes large volumes of incoming sensory or experiential data. The more data such an app touches, the greater its effectiveness. By having that data flow into the cloud and be analyzed there, you can take direct advantage of the provider’s machine learning app, which is only going to get smarter over time. If that data instead were to be transmitted to systems parked on premises, the gain in knowledge and value could be lost.

Second, by retaining information in the data center, you lose the scalability and agility advantages the cloud provides for various usage scenarios. For instance, data parked on cloud storage platforms is generally more broadly accessible and simpler to share and repurpose, boosting its utility. SaaS-based apps are designed to enable collaboration across such data sets, further enhancing the value of the information.

When the cloud is a no-parking zone

Are there types of data, use cases, industries and so on where it doesn’t make sense to park information in the cloud? The answer is yes, for a variety of reasons.

For example, an enterprise might choose to keep particularly sensitive business or technical data on premises for compliance or security purposes. Others have cost and lock-in concerns. In certain cases, the monthly cost of storing and processing large or rapidly growing data sets in the cloud can be high compared to on premises. This is partially because, in response to the cloud, on-premises vendors have started to offer steadily more cost-effective and easy-to-use storage, database and analytics products that can be more closely managed when running in the data center.

If you anticipate needing to eventually move large portions of your data out of the cloud — either to another cloud or back on premises — you can avoid large egress charges by parking that information in your data center instead of the cloud. Also, organizations in certain industries, such as healthcare and government, are restricted in the type or level of information they can host and run in the cloud.

In short, although the cloud is proving a good fit for a growing number of use cases, apps and data, some information is best still parked on premises.

Information can move freely and easily within a given cloud, and even pass through various stages of its lifecycle without being constrained to a single place. Data can also be filtered, analyzed, collated and transformed without oversight of a human attendant. Despite this dynamic milieu, rich metadata helps track and manage your information in all its derivative forms.

Moving to cloud-first

Companies adopting a cloud-first mentality will invest in the cloud for new apps and use cases whenever possible and thereby benefit from parking their information there.

But most businesses are still biased toward running a significant percentage of business apps — including some of the most critical ones — on premises. So how can you reap the benefits of making the cloud the focal point for where you access, process and manage your data without disrupting existing business processes and workloads?

You can start by making the cloud the parking place of choice for new workloads and data, particularly for apps born in the cloud. If you choose to migrate existing business apps from on premises to the cloud, the data should follow.

If your organization is like most, however, you may decide to maintain some business-critical apps on premises for now while starting to move secondary use cases such as data analytics or archiving to the cloud. In this case, your primary data for such apps will continue to live in the data center, but data might be copied and moved to the cloud. That lets you take advantage of in-cloud analytics or archive data there permanently as it becomes less active.

The cloud as an information parking lot

Companies used to worry about the risk and cost of deploying and running key information assets in the cloud. Now they can ill afford not to. Major public clouds have matured into enterprise-ready platforms, not just for development and testing of new apps, but also for deployment and support of existing workloads. Most major commercial apps are now available in the cloud in the form of IaaS or SaaS deployments. And as customers move apps to the cloud, their data will follow.

The cloud can already support almost any usage scenario, in many cases better than on premises. So as data is created and progresses through its lifecycle, it should increasingly live in information storage on the cloud, close to the apps it supports.

So where should your information reside, in the data center or in the cloud? Although for most enterprises the answer clearly remains “It depends,” we think you’ll find the cloud to be the best parking spot for most of your company’s information going forward.

Word of the Day | November 6, 2018

infrastructure as code

Infrastructure as code (IaC) is an approach to software development that treats physical compute, storage and network fabric resources as web services and allows apps to run where they are best suited, based on cost and performance data.

Essentially, IaC negates the need for software engineers to be concerned with the physical location of infrastructure components. Instead, when a software application requests infrastructure to run, available services are located through an automated discovery process and resources are allocated on demand. When an infrastructure resource is no longer required, it is re-appropriated so it can be allocated to another application that needs it.

Examples of IaC tools include AWS CloudFormation, Red Hat Ansible, Chef, Puppet, SaltStack and HashiCorp Terraform. Each of these tools has its own way of defining infrastructure, and each allows an administrator to define a service without having to configure a physical infrastructure. These tools are also able to roll back changes to the code, should an unexpected problem arise when new code is released.

Some IaC tools rely on a domain-specific language (DSL), while others use a standard template format, such as YAML and JSON. When selecting an IaC tool, organizations should consider the target deployment. For example, AWS CloudFormation is designed to provision and manage infrastructure on AWS and works well with other AWS offerings. Alternatively, Chef works with on-premises servers and multiple cloud provider IaC offerings.
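
Since each of those tools has its own syntax, here is a tool-agnostic sketch of the underlying idea: infrastructure is declared as data, and an idempotent apply step works out only the changes needed to reach that state. The resource names and fields are hypothetical and don't correspond to any particular tool's format.

```python
# Tool-agnostic sketch of the core IaC idea: infrastructure is declared as
# data, and an idempotent "apply" step reconciles actual state with the
# declaration. Resource names and fields here are hypothetical; real tools
# (CloudFormation, Terraform, Ansible, etc.) each have their own formats.
DESIRED = {
    "web-server": {"type": "vm", "cpus": 2, "memory_gb": 8},
    "app-db":     {"type": "database", "engine": "postgres", "storage_gb": 100},
}

def apply(desired, actual):
    """Return the create/update/delete actions needed to reach the desired state."""
    actions = []
    for name, spec in desired.items():
        if name not in actual:
            actions.append(("create", name, spec))
        elif actual[name] != spec:
            actions.append(("update", name, spec))
    for name in actual:
        if name not in desired:
            actions.append(("delete", name, actual[name]))
    return actions

if __name__ == "__main__":
    actual = {"web-server": {"type": "vm", "cpus": 1, "memory_gb": 8}}
    for action in apply(DESIRED, actual):
        print(action)   # update web-server, create app-db
```

Running the same apply step again once the environment matches the declaration produces no actions, which is what makes the declaration safe to keep in version control and re-run.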

IaC can be managed through the same version control and automated testing procedures that developers use to maintain quality assurance (QA) in their continuous integration and continuous delivery (CI/CD) pipelines. As of this writing, there are no agreed-upon standards for implementing IaC and the concept is known by several other names, including composable infrastructure, programmable infrastructure and software-defined infrastructure.

Quote of the Day

“While IT organizations are catching on to the benefits of infrastructure as code, the majority haven’t achieved compliance automation, despite a swath of available tools for the job.” – Kurt Marko

Word of the Day | November 7, 2018

continuous deployment

Continuous deployment is a software release strategy in which any code commit that passes automated testing is immediately released into the production environment. Continuous deployment pipelines are managed with tools that emphasize code testing prior to and after deployment.

During the software development process, version control and build automation tools, such as Jenkins, can help ensure the smooth delivery of code. Monitoring tools that track and report changes in application or infrastructure performance due to the new code are also important.

Ideally, monitoring and incident response for continuous deployment setups should be as close to real time as possible to shorten time to recovery when there are problems in the code. In addition, it’s important to have rollback capabilities so that any unexpected or undesired effects of new code in production can be identified and fixed quickly.

Although continuous deployment emphasizes automated testing, organizations that implement the strategy also rely on human-driven controls to safeguard against user disruption. Popular strategies for integrating human oversight into QA testing in a continuous deployment environment include canary deployments, in which code is released to a small number of users so that its impact is limited and any negative effects can be reversed quickly. Some applications can also be deployed in containers, such as Docker containers, to isolate updates from the underlying infrastructure.
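
The gating logic behind a canary release can be boiled down to a small comparison, sketched below. The traffic share, error-rate threshold and metric values are hypothetical; a real pipeline would pull the numbers from its monitoring system and automate the rollback.

```python
# Sketch of the gating logic behind a canary release: the new build serves a
# small share of traffic, and it is promoted only if its error rate stays
# close to the stable version's. The thresholds and metrics are hypothetical;
# real pipelines would pull them from a monitoring system.
CANARY_TRAFFIC_SHARE = 0.05      # 5% of users hit the canary
MAX_ERROR_RATE_DELTA = 0.01      # allow at most 1 percentage point of regression

def canary_decision(stable_error_rate, canary_error_rate):
    """Return 'promote' if the canary looks healthy, otherwise 'rollback'."""
    if canary_error_rate - stable_error_rate <= MAX_ERROR_RATE_DELTA:
        return "promote"
    return "rollback"

if __name__ == "__main__":
    print(canary_decision(stable_error_rate=0.002, canary_error_rate=0.004))  # promote
    print(canary_decision(stable_error_rate=0.002, canary_error_rate=0.050))  # rollback
```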

The difference between continuous deployment and continuous delivery

The terms continuous integration, delivery and deployment are collectively referred to as continuous software development and are associated with the Agile and DevOps software development methodologies.

While continuous deployment and continuous delivery sound similar and share the same acronym, they are not the same thing. Continuous delivery typically uses a production-like staging area for quality assurance (QA) testing. In contrast, continuous deployment does not require a staging area because automated testing is integrated early in the development process and persists on a continuous basis.

Quote of the Day

“Continuous deployment skips the operations oversight step in software production. So, automated tools must ensure code success in administrators’ stead — before mistakes go live.” – Adam Bertram

IaaS decision criteria: Your Atlassian products and IaaS

You tell us that moving to the cloud is increasingly becoming part of your team’s conversations: for the cost savings, the chance to reduce overhead, and the uptime and performance benefits. You also want to know more about the best way to leverage cloud capabilities for your Atlassian applications.

Using Atlassian software as a service through our cloud offerings is not the only way Atlassian customers are taking teamwork to new heights with cloud computing. More and more customers are choosing to deploy Atlassian tools using infrastructure as a service (IaaS) providers: 62% of Atlassian’s self-hosted customers are choosing to deploy their applications on a virtual architecture.

Infrastructure as a Service (IaaS) is a form of cloud computing that provides virtualized computing resources over the internet. IaaS is one of three main categories of cloud computing services, alongside Software as a Service (SaaS) and Platform as a Service (PaaS).

For companies that are not yet ready to leverage SaaS solutions or cannot adopt them (due to the regulated nature of their industry or geographic requirements for their company data), there are still ways to use cloud computing without completely giving up control over your software’s deployment.

IaaS allows companies to rent computing resources from providers who are solely dedicated to providing and maintaining infrastructure. This creates a few advantages, such as savings in money and administrator time, on what can be an even more secure and reliable platform than is achievable on premises. One Atlassian customer, media organization KQED, worked with iTMethods, an Atlassian Solution Partner, to migrate their on-premises Atlassian environment to AWS and reduced their run cost by 65%.

What should you keep in mind as you explore further? We’re here to help you find the best deployment option for your Atlassian application.

Start with your needs

For many IT teams, the transition to cloud is a gradual undertaking, and there are many advantages and risks to consider. Start with your organization’s unique preferences and requirements. Is data security a top concern? What about capacity for peak concurrent usage? How about avoiding downtime? Total application costs?

For each company, the priority of these needs can help determine which deployment options you may want to choose for your Atlassian applications.

Here are some factors to consider and prioritize with your team:

  • Payment terms: Pay as you go or fixed rates are available for dozens of public cloud services.
  • Security: Taking an application off premises doesn’t have to mean sacrificing the security of your data, but it depends on your organization-wide policies towards cloud services.
  • Reliability: Industry-leading public cloud providers boast at least a 99.9% uptime SLA (see the short downtime calculation after this list).
  • Scalability: Renting infrastructure gives you the ability to scale up or down to meet the demand of your users on an Atlassian application.
  • Time saved for IT staff: Teams can prioritize other work over maintaining infrastructure.
  • Overall application costs: License, hardware, services, migration, and staff costs are all part of the decision of whether to consider moving to a public cloud.
  • Support & resources: Some public cloud vendors, like AWS and Azure, have templates for deploying Atlassian’s Data Center architecture quickly and easily.
  • Existing IaaS relationships: If your organization is already in a transition to cloud infrastructure, you may already have a vendor in mind that you’re looking to move your applications to. If so, it may be a good time to evaluate the specific benefits of Atlassian products in a public cloud environment.
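
To put the 99.9% uptime figure from the reliability item in concrete terms, the short calculation below converts an SLA percentage into the downtime it still permits; the helper function is just an illustration, not part of any provider's tooling.

```python
# Converts an uptime SLA percentage into the downtime it still allows.
# A 99.9% SLA, for example, leaves roughly 43.8 minutes per month and
# about 8.8 hours per year of permitted downtime.
def allowed_downtime_minutes(sla_percent, period_hours):
    return (1 - sla_percent / 100) * period_hours * 60

if __name__ == "__main__":
    print(round(allowed_downtime_minutes(99.9, 730), 1))       # ~43.8 minutes per month
    print(round(allowed_downtime_minutes(99.9, 8760) / 60, 1))  # ~8.8 hours per year
```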

Also, consider what your long-term plans are for your business. Do you have plans to grow the number of users on Atlassian’s products, or roll them out to more departments internally? Maybe you’re considering a Data Center product from Atlassian and are weighing self-hosting against IaaS. In any scenario, make sure to balance your present needs and priorities with the direction your teams are headed in the future.

Your choice of public cloud vendors

Your investment in IaaS is all about making the right choice for your business. Atlassian’s Data Center applications can run on a number of different vendors’ infrastructure. Here are three popular choices: Amazon Web Services (AWS), Microsoft Azure, and Google Cloud Platform. Of course, there are a number of other options that may fit your needs better, but the ones we’re highlighting here are the ones our customers surface most often.

Note on deployment options: This article focuses on the Data Center product line, Atlassian’s recommended option for enterprise-grade deployments. Many of the same tenets apply to a single-server deployment.

An IaaS vendor can provide the required architecture elements for an Atlassian Data Center deployment, as well as optional “nice to have” services like containerization, automated scaling, and identity management. Here’s a look at the hardware options you might consider:

Data Center Architecture Elements

Note: AWS EFS and MySQL are not supported for Bitbucket Data Center.

Typically, Atlassian can’t provide much insight into hardware sizing and pricing, since so many variables in a product instance impact pricing. With IaaS, however, a few of those variables can be controlled. For a holistic understanding of any potential investment, it’s important that hardware costs are factored alongside the software license costs.

What other additional considerations should you make as you continue to weigh out the option of a public cloud for your Atlassian products? You might want to include your identity management requirements.

Identity management in an IaaS environment

One of the complicated aspects of using IaaS for your Atlassian products can be identity management (IDM). Often the system for managing identity in an organization is kept on premises even after other services have been taken to the cloud. This split means there needs to be a link between the identity store within an organization’s network and the applications outside of the firewall that need that information to grant access. This is an important consideration when deciding how to deploy using IaaS, as Atlassian applications use identity stores in this fashion.

Depending on the IaaS vendor you choose, there are different options available for handling IDM. There are also SAML add-ons available in the Atlassian Marketplace for connecting Atlassian products to SAML providers if SAML is an acceptable solution for your organization.

Evaluating the best option for you

If IaaS is an option that you’re seriously considering for your Atlassian environment, we have more resources to help. First, if you haven’t considered Atlassian’s Data Center product line for your mission-critical instances of Jira Software, Jira Service Desk, Confluence, Bitbucket, or Crowd, the guide “3 steps to convince your team it’s time to get Data Center” may help.

Once you know whether Data Center is the right option for you, check out the available templates for evaluating Atlassian’s products on AWS and Azure. We recommend using these templates for trials only and customizing them for a production deployment.

With that, try out Data Center on either AWS or Azure for 30 days for free.

Word of the Day | November 8, 2018

SWOT analysis

SWOT analysis (strengths, weaknesses, opportunities and threats analysis) is a brainstorming exercise for helping to identify what internal and external factors will impact the ability of a project, product, place or person to be successful.

As its name states, a SWOT analysis examines four elements:

Strengths: Internal attributes and resources that support a successful outcome.
Weaknesses: Internal attributes and resources that work against a successful outcome.
Opportunities: External factors that the entity can capitalize on or use to its advantage.
Threats: External factors that could jeopardize the entity’s success.

The matrix for conducting a SWOT analysis can be as simple as dividing a square into four quadrants and labeling them S, W, O and T. For example, decision-makers might choose to identify and list specific strengths in the upper-left quadrant, weaknesses in the lower-left quadrant, opportunities in the upper-right quadrant and threats in the lower-right quadrant. To guide participants through the analysis, the facilitator will typically use a series of open-ended questions such as “What do you do better than anyone else?” or “What does your biggest competitor do better?”

Although the snapshot that a SWOT analysis provides can be important for helping stakeholders understand what factors may impact success, the framework does have its limits. For example, if the analysis does not include all relevant factors for all four elements, stakeholders may walk away from the exercise with a skewed perspective. Moreover, because the exercise only captures factors at a particular point in time, the insight gained from a SWOT analysis potentially has a limited shelf life.

Quote of the Day

“Successful completion of the strategic planning process means the newly implemented strategies must be periodically evaluated and updated as needed.” – Paul Kirvan

Expert’s View: Remedy Security Gaps with Automation in Your Multi-Vendor Environment

November 12, 2018, 1:30 p.m. (South Africa Time, GMT +2)

Securing today’s enterprises, which comprise a variety of security vendor tools and systems, is extremely cumbersome and sometimes seemingly impossible, and manual intervention can often delay the response to threats.

Learn how to integrate and automate this process with connectors to close security gaps, greatly reduce resource needs and much more. We anticipate you may just walk away with a bit more pep in your step!

Virtual capacity planning step one: Evaluate what you have

Taking stock of the VMs in your environment today will lay the foundation for an effective capacity planning strategy for virtual environments.

Once management of VMs and requests is under control, the focus can shift to capacity planning. IT planners need to know where to expect growth. Will it be storage? Or is it compute this year?

Virtualization capacity planning, however, is much more than simply tracking growth. The first — and often overlooked — step is validating what you have. This should be done before taking that next step in planning for the future.

In many organizations, both physical and virtual servers fall under some type of enduring naming convention that probably hasn’t been refreshed to keep up with today’s growing virtual environment. This can generate confusion over a VM’s name, ownership and function.

Fortunately, you have options. The use of the description field will help; if you’re using VMware, you also have the ability to tag a VM. The description and attributes can be placed on a VM during or after its creation to help generate a searchable list — depending on which tags are selected. For example, you can tag certain VMs as “Citrix” or “Accounting” to signify a group or application.

Another key piece of information to add is the month and year of the VM’s creation. This allows the virtual administrator to run reports on which VMs are in use by a specific department or application. It helps determine usage for possible chargeback, and it can even uncover abandoned or excessive VMs from an old project or deployment.
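
The payoff of consistent tags and creation dates is that reports like the following become trivial. The inventory below is hypothetical sample data; in practice it would come from the virtualization platform's API or an export of the environment, and the 180-day idle threshold is only an example.

```python
# Sketch of the reporting that consistent tags and creation dates enable.
# The inventory is hypothetical sample data; in practice it would come from
# the virtualization platform's API or an export of the environment.
from datetime import date

INVENTORY = [
    {"name": "acct-db-01",  "tags": ["Accounting"], "created": date(2014, 3, 1), "last_power_on": date(2015, 1, 10)},
    {"name": "ctx-app-02",  "tags": ["Citrix"],     "created": date(2013, 6, 1), "last_power_on": date(2013, 9, 30)},
    {"name": "acct-web-03", "tags": ["Accounting"], "created": date(2015, 1, 5), "last_power_on": date(2015, 2, 1)},
]

def vms_by_tag(inventory, tag):
    """All VMs owned by a department or application group, e.g., for chargeback."""
    return [vm["name"] for vm in inventory if tag in vm["tags"]]

def candidate_abandoned(inventory, today, idle_days=180):
    """VMs that haven't been powered on recently -- possible leftovers to investigate."""
    return [vm["name"] for vm in inventory
            if (today - vm["last_power_on"]).days > idle_days]

if __name__ == "__main__":
    print("Accounting VMs:", vms_by_tag(INVENTORY, "Accounting"))
    print("Possibly abandoned:", candidate_abandoned(INVENTORY, today=date(2015, 2, 15)))
```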

Identifying and tagging machines from the start is somewhat easy. In an established environment, tagging existing VMs is not a quick or simple task. It is essential, though, in figuring out what you have.

Knowing which VMs belong to which groups allows an administrator to find out if those VMs are needed and if the resource allocation is appropriate.

Admins should look at the different ownership groups to identify trends in the requests. This will help establish a baseline for when and how many resources a particular group requests. You might find, for instance, that accounting added four new production servers in each of the past three years. Establishing trends of who is using what and how many permanent VMs are being added, along with what amount of temporary growth you need, helps an organization predict its needs.
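
As a simple sketch of turning that history into a baseline, the snippet below averages each group's past yearly requests, using the accounting example above; real planning would also factor in known projects and temporary growth.

```python
# Turns a group's VM request history into a simple planning baseline, using
# the accounting example from the text. Averaging past years is only one
# (deliberately simple) way to set a baseline.
HISTORY = {"Accounting": [4, 4, 4], "Engineering": [10, 12, 15]}  # new VMs per year

def baseline_next_year(history):
    return {group: round(sum(years) / len(years)) for group, years in history.items()}

if __name__ == "__main__":
    print(baseline_next_year(HISTORY))   # {'Accounting': 4, 'Engineering': 12}
```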

Being able to look at VM performance in groups will also help to establish baselines for performance and capacity. Groups that are low resource users can be scaled back to a pre-determined VM resource level that is a more appropriate fit. VMs that are constrained on resources can be expanded as well, and VMs that show no activity can be investigated to see if they are still needed.

Seeing what you had before and what you have today will help you prepare for what you’ll need tomorrow. While no tool or technique can fully predict the future, you can get a pretty clear idea by accurately knowing what you have had.

Remember that virtualization management and capacity planning isn’t about finding the best tool that can provide insight into an environment. It’s about understanding what you have in your environment. There is not a quick or easy shortcut to that finish line, but the effort pays off.

Do high cores per CPU work for virtual servers?

Deciding how many cores per CPU to have in your virtual servers can be tricky, but if you take these factors into consideration, you’re bound to choose the right number.

The seemingly simple issue of deciding how many cores to have in each CPU in your virtual servers has a number of complex dimensions. First, CPUs tend to be underutilized in virtual environments. Secondly, memory size and speed have a bigger impact than CPU performance. And finally, virtual servers tend to be I/O-bound.

A low number of cores per CPU makes for the best server farms, and so, on the surface, small might seem beautiful. The problem is the other dimensions of server design. These all have economic implications, which, in the end, might be the determining factors in choosing a configuration.

Memory options

Dynamic RAM (DRAM) is expensive, and the latest, densest dual in-line memory modules (DIMMs) tend to carry a high price premium over mainstream DIMMs. Cheaper but more plentiful DIMMs might be a better option, and we now also have Optane or NAND non-volatile dual in-line memory modules (NVDIMMs) that give an effective memory expansion into the terabyte range.

Using NVDIMM and cheaper DRAM sticks means that the number of DIMM slots needs to increase. This implies a doubling of memory bandwidth in the system, and together with the capacity boost, we can load more instances onto that server.

Adding fast, non-volatile memory express (NVMe) solid-state drives (SSDs) to the server will dramatically boost I/O rate per instance. This used to be an extremely expensive proposition, but NVMe has entered the consumer market and prices are generally lower for the technology.

NVMe reduces OS overhead for I/O considerably, bringing back extra CPU and memory cycles in the process. Likewise, remote direct memory access (RDMA), which is starting to become ubiquitous in hyper-converged infrastructure (HCI) systems and will be a standard Ethernet feature in a couple of years, reduces overhead and latency between clustered virtual servers.

Taken together, the memory and I/O performance gains let us load up virtual servers with many more instances, likely to the point that the CPUs are more than loaded up. This points to more CPU cores per CPU to keep the server operating in balance.

Instance configurations

At this point in the discussion, it’s worth looking at what an instance actually is. There is, of course, no “right” size. Instances come in a variety of configurations, allowing a matchup to application use cases. They range from 1-vcore to 1-pcore to multiple virtual cores per physical core and can even reach 1 vcore per CPU. Memory and I/O allocations are also independent variables.

Trends in instances are to allow for larger DRAM and more I/O, coupled with a lower vcore ratio — more CPU per instance. If you expect to service this class of instance in the next five years, a bigger server engine, with higher cores per CPU, probably makes sense.

Containers and microservices

This relatively new approach to virtualization looks to supplant hypervisor-based instances. Typically, a server hosting containers needs less DRAM than a server running a hypervisor due to memory segment sharing, so a container host might see three to five times the instance count. This increase in instance count implies yet more CPU cores.

If we add in the move to microservices software architecture, where storage and networking service functions are converted to small, containerized modules and applications are also partitioned into microservices elements, the container count per server will jump again, perhaps significantly. Microservices approaches mean more state-swapping in the CPU. More performance and parallelism are needed, so again, more cores per CPU helps keep the balance.
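
A back-of-the-envelope sketch shows why that density argues for more cores. The oversubscription ratio, instance size and container multiplier below are illustrative assumptions (the three-to-five-times figure comes from the discussion above).

```python
# Back-of-the-envelope sketch of why rising instance density argues for more
# cores per CPU. The ratios are illustrative: a 4:1 vCPU-to-core
# oversubscription for hypervisor instances and the article's 3x-5x container
# multiplier.
def hypervisor_instances(sockets, cores_per_cpu, vcpus_per_core=4, vcpus_per_instance=2):
    return sockets * cores_per_cpu * vcpus_per_core // vcpus_per_instance

def container_instances(sockets, cores_per_cpu, container_multiplier=4, **kw):
    return hypervisor_instances(sockets, cores_per_cpu, **kw) * container_multiplier

if __name__ == "__main__":
    for cores in (8, 16, 32):
        print(cores, "cores/CPU ->",
              hypervisor_instances(2, cores), "VM instances,",
              container_instances(2, cores), "container instances")
```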

Innovation in server architectures

All of this ignores the rumblings at the leading edge of server architectures. We’ve heard talk for around three years about Hybrid Memory Cube (HMC) modules that bring the CPU and a segment of DRAM into a tightly coupled module. This boosts DRAM speed dramatically, both from the better electrical interface and from an architecture that calls for many parallel channels to the DRAM.

HMC-like platforms are being touted by major vendors, with early versions limiting DRAM size to around 32 GB. This is enough to form a large intermediate cache between the L3 cache and main DRAM, boosting effective memory performance to the point that more cores can be supported. Again, the conclusion is that more cores make sense.

Economic implications of adding more cores

Generally, fewer, more powerful servers are better economically than lots of small ones. They are cheaper to run and can effectively support 25 Gigabit Ethernet and 50 GbE links with RDMA. The underlying infrastructure of power supplies is better amortized, while running costs — power, cooling and admin support — come out cheaper, too.

In all these discussions, though, there is a sweet spot for the component choices. For example, cost per core might actually drop as cores per CPU goes up, but the top three or four CPUs are likely new and will have a 50% to 100% premium. The same applies to memory and SSDs.
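
The sweet spot is easy to see with a quick calculation like the one below. The prices are made-up placeholders rather than real list prices, but they show how cost per core can fall as core counts rise and then climb again once a top-bin premium kicks in.

```python
# Illustrates the cost-per-core trade-off: a mid-range, higher-core part can
# cost less per core than a smaller CPU, while a top-bin part carrying a
# 50%-100% premium can cost more per core again. Prices are made-up
# placeholders, not real list prices.
CPUS = {
    "16-core mainstream": {"cores": 16, "price": 2000},
    "24-core mainstream": {"cores": 24, "price": 2600},
    "32-core top bin":    {"cores": 32, "price": 2600 * 1.8},  # ~80% premium
}

if __name__ == "__main__":
    for name, cpu in CPUS.items():
        print(f"{name}: ${cpu['price'] / cpu['cores']:.0f} per core")
```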

If you avoid the leading-edge products, it would seem that higher cores per CPU and generally enriched servers make better sense than small servers for all except — maybe — popcorn-sized web servers.

Looking to the future

The future roadmaps for server architectures are a bit hazier than usual, due to debates about moving to a memory-centric framework, such as Gen-Z. Even so, a core count of up to 32 cores per CPU will be seen in 2018, and we can expect further growth if we expand HMC memory sizes to the terabyte level. Some vendors hint it will probably be a mix of DRAM and NAND in the module.

With high-core-count virtual servers and the HCI architecture, we’ll see the footprint of the server storage farm shrinking dramatically for a given workload.

One caveat: Pay attention to network design and traffic flow. We are still pushing the load limits of LANs and WANs, even with the latest speeds and feeds. Encryption and compression are fast becoming required features for LANs, and this adds yet more CPU load, which means more cores.

Next Steps

Optimize your vCPU resources

Manage vCPU distribution with affinity and anti-affinity features

Overcome vCPU performance problems
