What you need to know about hyperconverged infrastructure

Hyperconvergence, or hyperconverged infrastructure (HCI), is a data center architecture that embraces the principles, costs, and benefits of cloud computing. The third generation in a series of converged technologies to hit the market, HCI consolidates compute, storage, networking, hypervisor, data protection, replication, disaster recovery (DR), global management, and other enterprise functionality onto commodity x86 server hardware to help organizations simplify IT complexity, deliver digital transformation, and drive agility, innovation, and business value.

The promise of these benefits is driving rapid HCI adoption. Sales of hyperconverged systems reached $1.5 billion in Q2 2018, accounting for over 41 percent of the total converged systems market. And analysts expect growth to quicken: Stratistics MRC predicts the global HCI market will grow at a CAGR of 42 percent from 2016 to 2023, reaching $17 billion in revenue by 2023.

This post will help you decide if hyperconvergence is a fit for your company.

What can hyperconvergence do?

Today’s business leaders struggle to manage data centers with disparate legacy systems, distributed data, shrinking IT staff, constant demand for new services, and increased pressure to move to the cloud. HCI can help your organization tackle these challenges and reduce risk, boost productivity, and enhance flexibility, scalability, and availability.

Hyperconverged infrastructure is an easy way to modernize technology. Because of its non-disruptive implementation, you can phase in new architectures and phase out the old elements as needed with LEGO-like efficiency. HCI also delivers a central point of administration, allowing you to manage hypervisor, storage, backup, replication, and DR tasks from a single pane of glass.

Because an HCI vendor will become your single point of contact for solution acquisition, maintenance, renewals, and support, you’ll be able to eliminate complex hardware compatibility requirements, convoluted relationship management, and unnecessary finger pointing.

HCI’s software and virtual machine (VM) focus, shared resources, and easy automation will allow you to achieve a lower total cost of ownership (TCO) than traditional infrastructure. You’ll spend less on procurement, operations, and upgrades over the solution’s lifetime, yielding direct and indirect benefits to your business, such as minimized training and integration services, reduced downtime from outages, and the elimination of rip-and-replace upgrades.

2 things to keep in mind before adopting HCI

The robustness, scalability, and flexibility across HCI vendor platforms vary. You’ll want to choose an HCI solution that will allow you to satisfy new business requirements and meet growing workload demands without being wasteful. For example, if you only need more storage capacity or performance, you’ll want the ability to add it without having to grow your compute resources, which typically requires additional application licensing fees.

HCI has the potential to propel your business along its hybrid cloud journey, but you’ll want to choose a hyperconverged solution that eliminates vendor lock-in and delivers seamless workload and operational mobility. You can do this by avoiding proprietary systems and verifying the HCI solution’s automation, monitoring, orchestration, management, and other ecosystem tools work flawlessly across boundaries.

3 HCI architectural models to choose from

Customers adopting HCI can select from three architectural models: VM-based storage, hypervisor-based storage, or independent storage. Here’s how they differ:

  • The VM-based storage model installs the hypervisor on bare metal and places the storage in a VM. These HCI solutions achieve the fastest time to market and are offered at the lowest entry point. They offer platform compatibility and support for multiple hypervisors. Management tools like graphical user interfaces (GUIs), command line interfaces (CLIs), and application programming interfaces (APIs) help differentiate the solutions within this category.
  • In the hypervisor-based storage model, the hypervisor is installed on bare metal and the storage is integrated into the kernel of the hypervisor. The intellectual property (IP) of these HCI solutions lies in their deep hypervisor integration, advanced feature set, and integration with other software-defined components. Solutions in this category can reduce risk and deliver bare metal performance because the storage layer is not exposed as a VM, which would consume significant server resources.
  • The independent storage model installs the hypervisor and the storage operating system (OS) on separate bare metal node types. These HCI solutions provide the greatest flexibility and scale, allowing you to avoid management, hypervisor, and use case lock-in. If you adopt solutions in this category, you will gain access to enterprise SAN capabilities, achieve predictable performance for pools of compute and storage capacity, and reduce per-core licensing fees.

Need help evaluating HCI solutions?

Organizations of all sizes are relying on hyperconvergence for their mission-critical business applications like Microsoft SQL Server, Oracle, and SAP; test and development environments; cloud-native workloads that leverage containers and microservices; edge computing for industrial Internet of Things (IoT), medical equipment systems, ships, and oil rigs; and big data analytics processing.

But determining whether HCI is the right technology for your business can be difficult, and selecting the best HCI solution for your company’s unique requirements can be even more challenging. To avoid making an embarrassing, costly, or detrimental mistake, start by conducting a proper needs assessment.

How this health system upgraded its storage and backups while saving money

Change isn’t always a good thing. However, if you’ve been using the same legacy storage and backup and recovery platforms for the better part of 20 years, change is exactly what you need. The key question is: a change to what?

This was the challenge facing a Louisiana health system’s new IT director, who was tasked with finding a solution to an organizational problem that had gotten out of hand under previous leadership. To put it simply, they were running out of data center space, backing up data took days, and their maintenance renewal fees were growing unwieldy.

The organization turned to SHI, which had become a trusted advisor over the years, to help solve this dilemma.

Examining the extent of the problem: Financial, performance, and spatial

When the IT director began to take a closer look at what had become an extensive operational expense, he saw that the cost of maintaining the company’s legacy storage and backup and recovery platforms went beyond the financial.

Updates interrupted production, and even brought down some subsystems, hamstringing the organization’s virtual desktop environment.

These performance issues limited nurses’ and doctors’ ability to provide patient care. They had difficulty getting into the electronic medical record (EMR) system after storage updates, slowing access to patient information.

On the backup and recovery side, there were incidents where the company would go to restore something and find that, for one reason or another, it hadn’t been backed up. Sometimes, it would take two to three days to back up the entire system, immediately making those backups out of date.

The organization was also running out of space. The number of racks required to house its storage solutions was creeping up on the physical limitations of the data center. The sheer footprint, along with how many systems the organization had to deploy throughout the environment in multiple centers, was no longer feasible.

Saving space, speeding backups

Tired of dealing with an overly complex and expensive environment, the company turned to SHI for advice on what other solutions the market had to offer.

The organization had the idea to move to an all-flash environment and wanted to know if that was affordable. However, given the company’s history, SHI wanted to go beyond outfitting the company with a solution that would just take care of performance problems.

SHI’s goal was to make sure the organization wasn’t constantly dealing with maintenance costs or needing to continuously buy more storage. It was time to get out of the traditional SAN model.

SHI helped the company settle on Pure Storage. This would give the organization predictable maintenance costs, new storage controllers every three years, and the added benefit of Pure Storage’s “guarantee” model: if you size the purchase correctly at the outset for a given set of workloads, Pure guarantees a certain amount of compression and storage capacity for those workloads.

In terms of backup and recovery, the company was looking for both an enterprise solution and the ability to store data within the platform while hosting additional secondary workloads. SHI recognized that Cohesity, a multipurpose solution that does a great job hosting file services on the same platform as backup and recovery, would solve both needs. The organization agreed.

Navigating the transition

SHI didn’t just recommend solutions for the company; it provided support throughout the entire transition process.

The best part is that the maintenance costs the organization saved with the switch more than covered the investment in the new Pure Storage and Cohesity solutions. With both operational and capital expenditures down, the organization is poised to save money over the next several years.

Cisco HyperFlex 3.0: 6 stand-out features you need to know about

Enterprises know the benefits of hyperconverged infrastructure in cutting through the complexity of their massive environments, including proving grounds and sandboxes for development and testing. But even smaller organizations that don’t have developers, or lack someone with storage experience, can benefit.

As IT gravitates closer to hybrid, multi-cloud systems, HyperFlex 3.0 has expanded capabilities that bring that future within reach for more organizations.

Here are six of the most stand-out new features of HyperFlex 3.0:


  1. Microsoft Hyper-V support. For all the Microsoft shops out there who want virtualization and a hyperconverged environment, HyperFlex has added hypervisor support for Microsoft Hyper-V, so you can now get the same flexibility that VMware users have on your Microsoft systems.
  2. Container support. While they’re not yet as widespread as hypervisors, containers are the Next Big Thing, and they upend the model of a hypervisor running virtual machines, each with its own installation of an operating system. By virtualizing the OS so that applications in development think they’re running on their own dedicated hardware, you can develop and deploy applications efficiently while reducing the number of OSes that have to be installed, configured, and patched.
  3. Multi-cloud support. Using Cisco CloudCenter, you can build infrastructure for your applications once, deploy the applications locally on HyperFlex while building, testing, and getting them up to speed, and then re-deploy them in your live environment. If you need more horsepower for a seasonal surge – for example, a retailer processing payments during the holiday season – you can deploy the same app to AWS, Azure, and other public clouds. You can also do all your development in the cloud but then bring the application in house when you have live data to keep it more secure. It gives you the flexibility to design an app once and then deploy wherever is best for your organization.
  4. Enhanced scale. You can create a system as small as three nodes or with as many as 64. While enterprises with larger environments can take advantage of all that storage and performance, even organizations with smaller environments can deploy just a few nodes with the potential to scale storage incrementally by adding capacity drives. They can also scale compute power by adding lower-cost, compute-only nodes. And because you can stretch clusters across multiple sites, HyperFlex 3.0’s disaster recovery features are more viable for business critical applications.
  5. Validated designs and apps. If you have questions about whether certain applications will work on HyperFlex 3.0, Cisco has already done a lot of the validating for you, putting popular applications for Oracle, SAP, Microsoft, and Splunk through the gauntlet. You can see the validated designs and solution guides on the Cisco website.
  6. Built from the ground up. HyperFlex 3.0 was built to be a hyperconverged, software-defined storage solution with a file system that is distributed and accessed across all nodes as a group, rather than localized, for more consistent performance. By waiting for other hyperconverged players to bring their solutions to market first, Cisco was able to build HyperFlex 3.0 from the ground up with the best aspects of end-to-end architecture.
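
The sizing flexibility described in feature 4 can be sketched with some back-of-the-envelope arithmetic. The node counts, drive sizes, and replication factor below are hypothetical illustrations, not HyperFlex specifications; consult the vendor’s sizing tools for real figures.

```python
# Hypothetical HCI sizing sketch. All node counts, drive sizes, and the
# replication factor are invented for illustration only.

def usable_capacity_tb(nodes: int, drives_per_node: int,
                       drive_tb: float, replication_factor: int = 3) -> float:
    """Raw capacity divided by the replication factor.

    Distributed HCI file systems typically keep N copies of each block,
    so raw capacity shrinks by that factor before any other overhead.
    """
    raw = nodes * drives_per_node * drive_tb
    return raw / replication_factor

# A small three-node starting cluster...
small = usable_capacity_tb(nodes=3, drives_per_node=6, drive_tb=1.9)
# ...and the same cluster after adding one capacity node.
grown = usable_capacity_tb(nodes=4, drives_per_node=6, drive_tb=1.9)
print(f"3 nodes: {small:.1f} TB usable, 4 nodes: {grown:.1f} TB usable")
```

The point of the exercise is the shape of the curve, not the numbers: adding a node grows usable capacity linearly, which is why incremental scaling matters for smaller environments.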

Whether you’re a corporate enterprise, a school district, or a government agency, HyperFlex 3.0 has introduced new ways to simplify your data center infrastructure, offering flexibility without the high capital cost of equipment.

HyperFlex 3.0 will be available for order next week. For more information on the updates or to see whether it might be a good fit for your organization, contact your account executive.

2021 Outlook: Tackling Cloud Transformation Choices

Weighing complex decisions on cloud adoption and how to make the most of it is a discussion more CIOs will face this new year — and in the future.

An ever-growing number of enterprises plan to or are already exploring the seemingly boundless potential offered by the cloud. Yet this segment of digital transformation remains something of an open frontier waiting to be settled.

There is no question that cloud adoption continues to build momentum as organizations consider how they might best benefit from migrating part or all of their compute needs — there is still plenty of room for expansion on this front. From the perspective of overall technology spending, current levels of cloud investment can be surprisingly small, but growing. Andy Jassy, CEO of Amazon Web Services, has said that spending on cloud on a global scale represents just 4% of the overall IT market. Further, surveys by Gartner show that only 10% of IT budgets at midsize enterprises are dedicated to cloud.

Adoption is expected to continue to grow, but it may take significant time before most of the world goes hybrid cloud or fully cloud native. For example, Gartner predicts 60% of workloads at midsize enterprises will remain on-prem through 2023.

A sense of inevitability surrounds the cloud in some ways with some organizations altering or accelerating their strategies in response to the COVID-19 pandemic and the changes that may linger long after. Enterprises learned the cloud can present ways to adapt to the unexpected, such as scaling up resources to accommodate surges in demand.

Reaping the benefits of the cloud does require organizations to not only plan for but also follow through on their transformation strategies, with culture changes among IT teams and the C-suite. Jassy told viewers of the AWS re:Invent virtual conference that organizations must build up muscle to accelerate their speed of change when embracing the cloud. “Speed is not preordained. Speed is a choice,” he said. “You’ve got to set up a culture that has urgency and wants to experiment. You can’t flip a switch and suddenly get speed.”

The stories that follow offer a snapshot of InformationWeek’s coverage of cloud and decisions that CIOs and other IT leaders face as they navigate adoption and migration strategies. This guide represents just a small portion of the wealth of information available through InformationWeek on this and other transformation topics.

Looking at the Cloud in 2021: Growth and Changes

CIOs will have a host of cloud options to choose from in 2021 as the cloud business evolves, according to a new Forrester Research report.

Ways to Help CIOs and CFOs Calculate Cloud Costs and ROI

More tools are available to give enterprise leadership greater clarity on the expenses and opportunities that come from cloud migration.

What Must Enterprises Learn to Increase ROI from the Cloud?

Survey by Accenture shows some organizations have yet to realize the most value from their cloud strategies.

10 Ways to Transition Traditional IT Talent to Cloud Talent

While many IT professionals love learning new things, IT leaders and their organizations must do several things to facilitate a smooth transition to cloud.

Where Cloud Spending Might Grow in 2021 and Post-Pandemic

A study by Gartner points to organizations continuing and evolving IT plans that ramped up fast to move to the cloud in response to COVID-19.

Why Distributed Cloud Is in Your Future

Most companies have a hybrid cloud strategy but IT departments are struggling with it. Distributed cloud addresses some of the issues.

Aegis Sciences CIO on Scaling IT to Meet COVID Testing Demand

Toxicology lab finds flexible way to scale up its resources that also cut costs while increasing the pace of testing.

Faced with a need to ramp up its testing capacity fast amid the pandemic, Aegis Sciences crafted a plan to build up its IT infrastructure and platforms from within as well as by tapping third-party resources. The toxicology lab already provided services monitoring for drug interactions and other forensic lab sciences. The immediacy and scope of testing for COVID-19 came not long after Aegis began its own recovery from a devastating, severe weather event in 2020 that affected its community in Nashville, Tennessee.

CIO Tim Ryan spoke with InformationWeek about the strategy Aegis put into action to recover from the disruption of an actual tornado and then adapt its IT infrastructure to scale up from 3,500 daily tests for COVID to 60,000 tests per day to fulfill a grant from the National Institutes of Health.

What was the IT strategy for Aegis Sciences before COVID-19 struck?

Pre-pandemic, we had two labs. Our main lab, which is our toxicology lab, and, across the parking lot, our administrative building, which housed at the time a very small biopharma lab. Our tox lab generally would get anywhere from 3,800 to 4,500 samples a day. That was our primary business. Our biopharma lab did a very small amount of business and really wasn’t a factor in our operations.

We had a tornado in Nashville that happened last March, which caused a lot of disruption to our business. As we were coming out of the tornado situation, the pandemic hit full throttle and national lockdowns started to occur. We decided in March to see if we could get into the COVID testing.

On the tox side of the business, we got hit significantly by the lockdowns. Doctors’ offices weren’t open. Behavioral health clinics shut down. It had a significant impact.

The first thing we had to do was see if we could get the COVID test up and running for 3,500 specimens. We had to be able to report to each state’s department of health. Whether you’re doing 3,500 or 100,000 tests, you still have to have connectivity to those states. We needed to determine how we were going to connect to the states from a reporting perspective. You had fax, CSV files, and HL7 (Health Level 7) electronic resulting files. That was really dependent on how the states could handle it. Unfortunately, there is no one standard for all of the states. That creates complexities.
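
The multi-format reporting problem Ryan describes can be sketched in a few lines. This is an illustrative fragment only: the field layouts are heavily simplified, the patient values are invented, and a real HL7 v2 result message or state CSV feed has many more required fields.

```python
# Illustrative sketch of one result reported in two state formats.
# Field layouts are simplified and all values are invented.

import csv
import io

result = {
    "patient_id": "12345",
    "last_name": "DOE",
    "first_name": "JANE",
    "test_code": "94500-6",   # a LOINC code used for SARS-CoV-2 PCR
    "result": "Negative",
}

# One state might accept a CSV row...
buf = io.StringIO()
csv.writer(buf).writerow([result["patient_id"], result["last_name"],
                          result["first_name"], result["test_code"],
                          result["result"]])
csv_row = buf.getvalue().strip()

# ...while another requires pipe-delimited HL7 v2 segments. This is a
# heavily abbreviated OBX fragment, not a complete, valid message.
obx = "|".join(["OBX", "1", "CE", result["test_code"], "", result["result"]])

print(csv_row)
print(obx)
```

Multiply this by fifty states, each with its own layout and delivery channel, and the complexity Ryan mentions becomes clear: the lab carries one serializer and one transport per jurisdiction.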

Another thing we had to deal with was how to get results out to patients. Being a toxicology lab, there’s not a big need to have a patient portal. We had to determine if we could build our own patient portal or whether that was something we needed to outsource. We ended up deciding that building the portal ourselves would be quicker and less expensive.

What considerations were made in terms of internal resources and staff with the skills to make this work?

We had a third party assigned as well. By the time we had a contract with a vendor to build it for us, we were well on our way with our proof of concept, and we were quickly able to complete the internal patient portal, at which point we no longer needed third-party assistance.

What further steps did you have to take?

We have a third-party laboratory information management system (LIMS). We also needed to determine internal capacity on that system. When the 3,500-capacity limit was discussed, we were confident we could absorb that. After we were successful, there was a big need in the country. As we were increasing the business side, we also needed to look at the IT side. As we scaled up, we were able to take advantage of what Pure Storage offers. One of the benefits of going with Pure is their Evergreen capabilities.

Our LIMS is an Oracle-based system. Previously we had storage that was dedicated just to the LIMS because of our Oracle licensing: we couldn’t put it on shared infrastructure without paying Oracle costs for that entire shared infrastructure. We worked with our LIMS vendor and ended up getting an application-based Oracle license, where we could use Oracle as long as it was just for that application, versus an enterprise-based license.

That enabled us to move our LIMS system to have a Pure backend and we didn’t have to worry about our licensing costs going forward if we continued to scale more in the future. That gave us a lot of flexibility. We saved multiple millions of dollars using that model.

We also ended up getting funding, $6.6 million, from NIH’s RADx (Rapid Acceleration of Diagnostics) program. That was to build out to 60,000 tests a day initially.

We had already instituted Microsoft Teams prior to the pandemic and got a rehearsal with the tornado as people were working from home. We ended up going enterprise-wide with our Teams application, so we didn’t have to bring IT resources into the building. We were fully deployed to 60,000 tests per day by the end of September.

As part of that $6.6 million grant, there was a significant portion that was IT related. That helped us fund buying additional Pure capabilities as well as other internal capabilities. We had some technical debt that would prevent us from moving forward. We were able to mitigate that to help us grow our capacity. NIH was very happy with the results, and we’ve since agreed to go to 110,000 tests per day of capacity. We received another $6 million in NIH funding to help us grow not only the IT systems but also our laboratory.

Why the Financial Services Industry is Embracing the Cloud

Financial services firms, such as banks and insurance providers, are rapidly heading into the cloud, lured by the promise of multiple monetary, innovation, and performance benefits.

Capital One surprised business and technology analysts late last year by announcing that it had shuttered its physical data centers and moved all of its operations into an Amazon Web Services (AWS) public cloud. In doing so, Capital One became the first major bank to completely transition to an all-cloud IT environment. It’s highly unlikely that it will be the last to do so.

Promises of lower costs and enhanced scalability are drawing a growing number of financial services firms into the cloud. “The ability to spin up resources elastically and run it at scale allows financial services firms to build better digital offerings and customer engagement, as well as keep up with the pace of an increasing customer base,” said Kelley Mak, a principal at venture capital firm Work-Bench. “Other benefits of cloud adoption include productivity, attracting talent, and security,” he noted.

After several years of sitting on the sidelines, even the most conservative financial services organizations are finally beginning to recognize the cloud’s benefits. “They are more accepting of the cloud, have seen its successful deployment in the financial services industry, and are becoming more adept at leveraging it as well as managing risk,” observed Jason Malo, a Gartner research director. Banks are not stodgy about technology, but they do need to be diligent and shown the value of a proven technology, he added.

Financial institutions rely on a tremendous, almost unimaginable amount of data. “Historically, that required on-premises data centers that were unfeasible for smaller community banks, lenders, or credit unions to compete with,” said Jim Pendergast, a senior vice president at business financing lender altLINE. “This limited competition and services [to] only the biggest banking brands with the resources to support [an] on-premises data infrastructure.” The cloud promises smaller firms a far more level playing field. “Smaller financial institutions can rival the big dogs in terms of service offerings and data security, plus avoid those astronomical old data center, equipment, and IT costs,” he noted.

Use cases

Financial services firms are beginning to tap into the cloud to explore innovative new business services and practices. Connecting with third-party apps, for instance, promises to open the door to new customers and additional revenue streams. Pendergast pointed to ridesharing services, which rely on connecting to passengers’ debit cards or peer-to-peer payment apps to complete transactions. “Banks need the cloud to offer these connections and allow users to use fintech apps,” he stated.

Complex technologies, such as data warehouses and data marketplaces, are moving to the cloud, observed Jay Nair, senior vice president of financial services at business and IT consulting firm Infosys. “This is enabling organizations to create a consistent data delivery strategy in a short time frame,” he explained. “In the core domains, like mortgage, clients are increasingly relying on the cloud for document management and AI solutions for mortgage origination as well as servicing.”

Pendergast noted that the cloud also offers firms a cost-effective way to test new apps and online services, as well as lowering costs by eliminating the need for expensive on-premises equipment that becomes outdated after only a few years. “On the flip side, you’ll see banks being able to enter and compete in new markets, boosting their revenue and scaling up at rates previously unimaginable,” he said.

Financial services firms need fast answers to key business questions. “Before cloud computing, a firm might [run] a nightly job to answer daily questions, [yet] the answers may not be ready for the next day’s start-of-business,” observed Ed Fine, a consulting data scientist at tech training firm DevelopIntelligence. “This scenario still happens more often than you might guess, and the delay in getting the needed information can have financial ramifications,” he added.

Where Cloud Adoption is Still Needed

COVID-19 is a wake-up call for leaders to embrace cloud technology. However, some industries that would benefit from cloud adoption still resist.

In the past 20 years, cloud computing has transformed into a widely used, enterprise-driving piece of technology. Today, more than 91% of all businesses use either a public, private, or hybrid cloud solution — and they continue to embrace cloud migration to improve elasticity, efficiency, and innovation.

Most industries recognize cloud as a necessary tool for managing workforces — especially in a COVID-19 environment, when everything is remote or digital. Despite cloud’s value, two out of three organizations are failing to capture the full value that cloud can bring to their businesses, and one out of four organizations experience unexpected complications during the cloud migration process, according to Accenture’s 2020 cloud survey.


As businesses continue to integrate cloud into their workflow, these new challenges lead to growing skepticism about the migration process. In healthcare — an industry known for slow adoption of emerging technologies — cloud computing is no different. Below are the primary challenges that healthcare businesses face with cloud adoption:

  • Infrastructure as a bottleneck: The healthcare industry has groomed a workforce of “server huggers,” building legacy infrastructure meant to remain unchanged for a generation. But the technology we use today wouldn’t have been recognizable two decades ago. As a result, outdated infrastructure can cause more harm than good by interrupting workflow and negatively impacting patient safety.
  • Lack of cloud skills within the organization: Healthcare organizations simply don’t have the talent to use cloud as a tool. The industry faces a growing shortage of qualified health IT staff, frequently because health systems prefer not to hire professionals who lack extensive healthcare experience.
  • Complexity of business and operational change: Healthcare leaders frequently view new technology as risky, and improvements are often incremental at best. The industry has become plagued with the mindset “if it’s not broken, don’t fix it.”

While COVID-19 upended the healthcare industry, it also provided businesses with a once-in-a-lifetime opportunity to reimagine operations. Adoption is not easy, and it requires top-down engagement to push the whole organization forward. To successfully build cloud into business operations, businesses will need to do three things:

  1. Adopt new business models. Businesses need to launch aggressive top-down goals that are supported by senior executives.
  2. Give CIOs a seat at the table. Empower CIOs to make the critical business decisions that drive company processes.
  3. Develop new skills. Arm workforces with the skills needed to integrate new cloud technologies and processes.

By harnessing the power of the cloud, businesses are better positioned to reach their goals in this challenging environment, build stronger technology infrastructure, and create greater resilience for whatever may come in the future.

Cloud has proven its centrality to resilient, sustainable enterprise operations and future competitive advantage. If you’re not substantially on the cloud, you can’t hope to unlock the capabilities a modern organization requires — greater flexibility, more agility and new opportunities for innovation to help you disrupt your industry. Enterprises that continue to delay a shift to cloud at scale aren’t just incurring an opportunity cost, they’re risking their very survival.

What Happens If a Cloud Provider Shuts You Out?

When AWS bounced Parler from its servers, it raised questions about continuity of service other companies may need to consider.

Migrating apps and other core resources to the cloud stands at the heart of many transformation strategies, but if the big three service providers pull the plug on access, where does that leave enterprises?

Parler’s recent banishment from AWS and re-emergence via the domain registrar Epik is a very specific case, but it does raise issues other organizations might need to consider — especially in a market dominated by three hyperscale cloud providers. If one of those cloud providers permanently terminates services and other major providers in the United States refuse to take a customer on, what can companies do? Experts from CloudCheckr and Aiven offer some perspective on possible steps organizations might take under such circumstances.


Jeff Valentine, CTO of CloudCheckr, says while it is unlikely for most companies to face the exact situation as Parler, there are broader implications that organizations might want to consider. “There’s probably a really small percentage of people that will ever go through this,” he says. “But it should be thought about way before this happens.”

There are other reasons, such as sudden outages or the shutdown of a cloud provider, for organizations to create plans to salvage their code and get back online quickly, Valentine says.

Heikki Nousiainen, CTO at Aiven, also says the threat of getting cut off by all three major cloud providers is very low for most other businesses — yet companies may want to maintain the ability to move code around for disaster recovery needs. “They are rare, but we sometimes see these big outages touch Google, AWS, or Azure in one or more regions,” he says. Companies with very time-sensitive online business needs, for example, may want to maintain the ability to roll over to a backup elsewhere, Nousiainen says.

He recommends exploring true multi-cloud options where companies can select providers freely without being locked in, and also going with open source technology, because that lets the same set of services run in different clouds. Some of these options can come at a bit of a premium, though Nousiainen says the overall benefits may be worth it. “There are costs associated, but typically when that investment goes into preparing infrastructure as code it also helps for many other problems such as disaster recovery.”
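The infrastructure-as-code approach Nousiainen describes can be sketched in a few commands. This is a hypothetical dry-run example — the tool choice (Terraform) and the variable-file names are assumptions, not anything Aiven prescribes:

```shell
#!/bin/sh
# Dry-run sketch of "one stack definition, multiple clouds."
# Set DRY_RUN="" to actually execute the commands.
set -eu
DRY_RUN=echo

# The stack lives as declarative code in version control...
$DRY_RUN terraform init

# ...and per-provider variable files target different clouds, so
# disaster recovery becomes "apply the same code somewhere else."
$DRY_RUN terraform apply -var-file=aws.tfvars
$DRY_RUN terraform apply -var-file=gcp.tfvars
```

Because the definition itself is provider-portable, rehearsing a failover is a matter of applying the same repository against a second account rather than rebuilding infrastructure by hand.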

The design of an organization’s applications, Valentine says, can play a significant part in how companies proceed if cloud services are abruptly lost. “Most folks have applications that started on premise and then moved to the cloud,” he says. “That lift and shift strategy . . . has some drawbacks.”

It could be difficult to move the application again, Valentine says, because accommodations were already made to migrate to the cloud in the first place. If companies take the time to rearchitect applications, perhaps using a container model with Kubernetes, they can become more portable, he says.
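The portability Valentine describes comes from the fact that a container image and a Kubernetes manifest are cloud-agnostic. A minimal sketch, in which the registry name, image tag, and cluster contexts are all illustrative assumptions:

```shell
#!/bin/sh
# Dry-run sketch: package once, deploy to any conformant Kubernetes cluster.
# Set DRY_RUN="" to actually execute the commands.
set -eu
DRY_RUN=echo

# Build and publish a single container image for the application...
$DRY_RUN docker build -t registry.example.com/acme/app:1.0 .
$DRY_RUN docker push registry.example.com/acme/app:1.0

# ...then the same manifest deploys unchanged to clusters on different
# providers (contexts here might point at EKS, GKE, AKS, or on-prem).
$DRY_RUN kubectl --context prod-cluster apply -f deployment.yaml
$DRY_RUN kubectl --context dr-cluster apply -f deployment.yaml
```

The design choice being illustrated: once the provider-specific pieces are pushed down into the cluster layer, switching clouds is a redeploy rather than a rearchitecture.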

There can be other pitfalls moving to the cloud, Valentine says, where organizations get locked in with a vendor because of the technology choices they made. To avoid such lock-in worries, organizations might choose technology such as open source that can work with many different providers to create a vendor-neutral platform, he says. “There’s a cost to that. Everything takes twice as long to build so it costs twice as much, so is it worth it?”

If a company were to face the imminent loss of its cloud services, Valentine says the only option is to take all of the code and implement any overdue changes in the moment. “There’s no way around it,” he says.

Despite the possibility of losing access to cloud resources due to calamity or the cessation of a service contract, Valentine doubts organizations would suddenly revert to their old on-prem ways as a long-term alternative to operating in the cloud. “There’s no case for moving cloud to premise permanently,” he says. “I can’t imagine companies reversing course. Careers have been built on this digital transformation.”

Even so, Valentine says about 20% of all apps are expected to remain on premises permanently, even though digital transformation is still in its early days as an industry trend. “We’re probably 10% of the way through this journey,” he says.

Changes that could make the cloud landscape more fluid may come from Google’s Anthos hybrid cloud platform, he says. AWS is also talking up run-anywhere technologies in its cloud, on the edge, or in another provider’s cloud. “You end up having cloud vendors themselves acknowledging that cloud apps need to be more portable than they have been in the past,” Valentine says.

Other providers outside of AWS, the Google Cloud Platform, and Microsoft Azure are also coming into their own, he says, at layers higher than infrastructure. “Snowflake is the best example,” Valentine says, referring to the cloud-based data platform. “Instead of coding your own lake on infrastructure as a service using AWS, Azure, or GCP, you can buy the platform from Snowflake.”

If a company does find itself in a situation where it knows cloud services will be lost imminently, Valentine says they should consider these steps:

  • Start a database backup of the relational database systems. “Those are likely to take time and you need a recent backup to restore somewhere else,” he says.
  • Download object files to a local system. “You gotta get it somewhere, whether it’s temporarily into Dropbox, you just gotta get it somewhere else,” Valentine says.
  • Assess the application code. If the organization has a continuous integration, continuous deployment pipeline of source code it owns but is in the cloud, this must be captured to be redeployed elsewhere, he says.
  • Look at ways to change DNS settings. “It’s going to go into a blackhole when they shut you off,” Valentine says. If the organization can transfer to another registrar, he says, at least the company can point its users to an alternate landing page for the interim. “You can eventually redirect to somewhere else but you have to own your domain to do that, so you have to change registrars.”
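Valentine’s checklist can be sketched as a sequence of evacuation commands. This is a hypothetical dry-run outline assuming a PostgreSQL database, an S3 bucket, and a hosted Git repository — every name and endpoint here is an illustrative placeholder:

```shell
#!/bin/sh
# Dry-run sketch of an emergency cloud-exit checklist.
# Set DRY_RUN="" to actually execute the commands.
set -eu
DRY_RUN=echo

# 1. Start a backup of the relational database (slowest step, start first).
$DRY_RUN pg_dump "postgres://app@db.example.com/appdb" -f app_backup.sql

# 2. Pull object storage down to local disk (or any interim location).
$DRY_RUN aws s3 sync s3://example-app-assets ./assets-backup

# 3. Capture the application source so the CI/CD pipeline can be
#    redeployed elsewhere; --mirror keeps all branches and tags.
$DRY_RUN git clone --mirror https://git.example.com/org/app.git

# 4. Lower the DNS record TTL so a later registrar or record change
#    propagates quickly, and point users at an interim landing page.
$DRY_RUN aws route53 change-resource-record-sets --hosted-zone-id ZONE_ID --change-batch file://interim-dns.json
```

The ordering matters: the database dump and object download are bandwidth-bound and should begin immediately, while the DNS change only takes effect once the organization controls its domain at a registrar outside the provider being exited.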