What to Know When Migrating DevOps to Microservice Architectures

If your DevOps team is planning to migrate from traditional server architectures to microservices, there are distinct changes that IT leaders must keep in mind.

Both tools and technical capabilities will have to be adjusted before a DevOps team can deploy and manage microservices efficiently. Let’s look at why some IT leaders are behind the curve when it comes to the DevOps microservices movement and what steps must be taken to transition the team to microservice technologies in a timely and cost-effective manner.


Image: profit_image - stock.adobe.com


Lack of understanding at the management level

Technologies and technology skills are constantly changing. Thus, CIOs and IT managers are well aware that tools and skillsets must evolve to take advantage of the latest evolutions in enterprise IT. In some cases, however, the evolutionary changes for certain technologies aren’t nearly as clear cut as for others. One of the areas of IT that CIOs regularly struggle with is DevOps. Beyond understanding that DevOps is transforming the processes and methods that enterprises use to rapidly develop, test, deploy and manage applications using agile processes, few additional details are fully grasped by IT leadership. So when discussions turn to how best to move DevOps from developing software on traditional server-based architectures to microservices, many are left wondering how the shift benefits the overall health of the organization.

Reasons why microservices are appealing to DevOps teams

IT leaders must understand precisely why software development using a microservice architecture has become so appealing to DevOps teams. Without getting overly technical, microservice architectures eliminate much of the inflexible, unnecessary and time-consuming work that is common when developing software on the monolithic architectures of old. A monolithic application is fully self-contained: its components share compute resources, and the application must be updated, modified and scaled as a single program. That is a process that adds significant risk whenever changes to the software need to be made.

Microservices, on the other hand, are created with modularity and scalability in mind. In contrast to traditional development methods using monolithic architectures, individual programmatic tasks are configured and executed separately. Put another way, a traditionally developed monolithic application can be broken into multiple services that work together to accomplish the same functionality. A single microservice can be modified, expanded, contracted or moved without the risk of negatively impacting any other microservice. Thus, a microservices architecture lets developers build applications faster while operations technicians manage and scale apps with far less effort.
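To make the contrast concrete, here is a minimal, purely illustrative Python sketch. In a real deployment each "service" would be a separately deployed process reached over the network; here they are plain functions, and every name and number is invented for the example.

```python
# Hypothetical checkout flow split into independent services.
# Each function stands in for a separately deployable microservice.

def price_service(items):
    """Stand-alone pricing logic; can be changed or scaled on its own."""
    return sum(items.values())

def tax_service(subtotal, rate=0.08):
    """Separate tax calculation; modifying it never touches pricing."""
    return round(subtotal * rate, 2)

def checkout(items):
    """A thin composition layer; in production this would call each
    service over the network rather than invoke local functions."""
    subtotal = price_service(items)
    return subtotal + tax_service(subtotal)

print(checkout({"book": 10.0, "pen": 2.5}))  # 13.5
```

The point of the split is that a change to `tax_service` (say, a new rate rule) can be tested, deployed and scaled without touching or redeploying the pricing logic, which is exactly the risk reduction described above.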

New microservice platforms and tools

As expected with any technology advancement, a DevOps shift to microservice architectures will require several new platforms and tools. The most obvious difference between microservices and traditional software applications lies in the host operating system the software runs on. While Microsoft Windows Server and full-blown Linux operating systems can execute microservices running within containers, most teams opt for a smaller, purpose-built OS that strips out many of the unnecessary features of traditional server operating systems and provides only the minimum functionality required to secure and execute microservices.

Running on the host OS is a separate platform that is responsible for the creation of containers. A container platform can be loosely compared to a traditional hypervisor virtualization platform that creates multiple virtual machines on a single piece of hardware. Containers differ in that a hypervisor virtualizes a complete virtual server — hardware and all — while a container only isolates workloads within a single host OS. Because of this, containers offer several performance and efficiency gains compared to hypervisors. This is extremely important for microservices, as each service is encapsulated within its own container.

Once microservice workloads are deployed into individual containers, a container orchestration platform is recommended so that each individual service can be called and executed when needed. Container orchestration platforms can also manage compute and storage resources for each containerized workload. Orchestration tools automate many of these underlying processes, which keeps applications running smoothly for end users.
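At its core, an orchestrator runs a reconciliation loop: compare the desired state to what is actually running and issue corrections. The toy Python below sketches only that idea; it is not any real platform's API, and the service names and replica counts are invented.

```python
# Toy model of an orchestrator's reconciliation loop. The desired
# replica counts and service names are illustrative assumptions.

desired = {"auth": 3, "billing": 2}

def reconcile(observed):
    """Return the start/stop actions needed to match the desired state."""
    actions = []
    for service, want in desired.items():
        have = observed.get(service, 0)
        if have < want:
            actions.append(("start", service, want - have))
        elif have > want:
            actions.append(("stop", service, have - want))
    return actions

print(reconcile({"auth": 1, "billing": 3}))
# [('start', 'auth', 2), ('stop', 'billing', 1)]
```

A real orchestration platform runs this kind of loop continuously, which is how crashed containers get restarted and scaled-down services get drained without an operator issuing commands by hand.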

Finally, microservices and their corresponding container and container orchestration tools require new ways to monitor the overall health of the environment. Just as server virtualization architectures required a new set of tools to monitor each hypervisor at a granular level, monitoring microservices and containers requires new tools as well. Because a single monolithic application can be broken down into hundreds or thousands of individual container workloads, monitoring the health of each container can seem daunting. Luckily, there are dozens of commercial and open-source container monitoring tools available. DevOps teams simply need to select and implement the one that works best for their infrastructure environment.
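Whatever tool is chosen, the core of container health monitoring reduces to checking per-container metrics against thresholds. The hedged Python sketch below invents its own stats format and limits purely for illustration; it does not reflect any particular monitoring product's API.

```python
# Toy health check over per-container resource stats. The dict layout
# and the threshold values are illustrative assumptions only.

def unhealthy(stats, cpu_limit=0.90, mem_limit=0.80):
    """Return names of containers exceeding either resource limit."""
    return [name for name, s in sorted(stats.items())
            if s["cpu"] > cpu_limit or s["mem"] > mem_limit]

sample = {
    "web-1": {"cpu": 0.35, "mem": 0.50},
    "web-2": {"cpu": 0.95, "mem": 0.40},  # CPU over limit
    "cache": {"cpu": 0.10, "mem": 0.85},  # memory over limit
}
print(unhealthy(sample))  # ['cache', 'web-2']
```

Commercial and open-source tools layer alerting, dashboards and historical trends on top of exactly this kind of per-container threshold check, applied across thousands of workloads.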

Skills that must be added

New technologies are only useful if your IT staff understands how to integrate them into existing production environments. Migrating from traditional DevOps software development architectures to microservices requires major changes to the tools and platforms in use. Microservices will likely serve as the foundation for modern enterprise applications for the foreseeable future. Therefore, it’s vitally important that both developers and IT operations staff get the training necessary to successfully build and manage host operating systems, container platforms, orchestration/scheduler tools and container performance monitoring tools. Depending on the technologies chosen, training that covers both concepts and tools will go a long way toward a successful DevOps microservices migration.

Cybersecurity, Modernization Top Priorities for Federal CIOs

Cybersecurity, modernization, H-1B visas, COVID-19 data and analytics. These are a few of the items on the agenda for the Biden administration.

Now that the Trump Administration is gone and the Biden Administration is leading the executive branch of the US federal government, will IT organizations within the government notice a difference? What changes can they expect? Will those technology changes make a difference to the technology community beyond the federal government?

One thing is certain — you can expect some significant changes. That’s according to Gartner Senior Research Director Michael Brown who published a research note on what’s expected from the new administration and spoke to InformationWeek in an interview.

Image: Vacclav - stock.adobe.com

“The swing to the previous administration and the swing to this one are the two largest inflections I’ve seen in my lifetime,” Brown said, adding that he has served in every administration from Carter to Trump, most recently having served as CIO of US Immigration and Customs Enforcement (ICE) before leaving there for Gartner in Sept. 2019.

The Biden administration is hitting the ground running with several executive orders signed in the first few days. Based on those, the administration’s first 100 days plan, and the proposed COVID-19 stimulus package, Brown said federal government IT can expect shifts in immigration policy, climate change, funding for state and local COVID-19 programs with a particular focus on the impact on minority communities, healthcare policy, and police accountability.

The impact of these shifts will be felt in terms of funding as the dial is turned up or turned down for particular agencies, based on the new administration’s priorities. The IT budget for these agencies will be impacted accordingly, Brown said.

One significant focus not covered by the first 100 days plan but indicated in the proposed stimulus package is a response to something more recent — the SolarWinds hack, which has impacted both government and commercial IT organizations.

In response, the new administration is putting a new focus on cybersecurity, adding provisions that cover this area to the COVID-19 stimulus package.

While it needs to go through Congress, the American Rescue Plan from the administration calls for a total of more than $10 billion for cybersecurity and IT modernization efforts, plus some other IT-related areas.

“In addition to the COVID-19 crisis, we also face a crisis when it comes to the nation’s cybersecurity,” a brief of the plan says. “The recent cybersecurity breaches of federal government data systems underscore the importance and urgency of strengthening US cybersecurity capabilities. President-elect Biden is calling on Congress to launch the most ambitious effort ever to modernize and secure federal IT and networks.”

Even if it doesn’t remain in the stimulus package that Congress ultimately passes, the Biden administration’s inclusion of funding for cybersecurity highlights just what a priority this area is for the administration going forward.

The provision in the stimulus package calls for a modernization of federal IT to protect against future cyber attacks. That includes a $9 billion investment to help the US launch major new IT and cybersecurity shared services at the Cybersecurity and Infrastructure Security Agency (CISA) and the General Services Administration, and complete modernization projects at federal agencies. It calls for a change to the fund’s reimbursement structure to enable more innovative and impactful projects.

Further, it provides for $200 million for additional hiring of hundreds of experts to support the federal CISO and the US Digital Service.

It also calls for investing $300 million to build shared, secure services to drive transformational projects without the need of reimbursement from agencies.

Further, it calls for improving security monitoring and response activities with an additional $690 million for CISA, funds that will also be used to support the piloting of new shared security and cloud services.

“In their stimulus package they are proposing putting $9 billion into that modernization fund, which would be a game changer for how modernization could occur across the federal government,” Brown said.

The funding for these programs is significant, according to Casey Coleman, a former CIO of the US General Services Administration (GSA), who is currently a senior VP for global government solutions at Salesforce. While the Technology Modernization Fund (TMF) has been in place for a few years now, in the past it was funded in the hundreds of millions of dollars, she said. Funding of $10 billion signals that tech modernization is a priority for the new administration.

How AI is transforming the data center

Data center growth is exploding. This growth is driven by the expansion of cloud providers, health care organizations, and financial service providers. Data-centric companies in retail, social media, and entertainment are harnessing the power of data to transform the customer experience. Smart cities are moving from vision to reality and generating massive amounts of data in the process.

Traditionally, data centers are designed around the confluence of large data sets, cheap electricity, and inexpensive land. This combination is driving about 80 percent of the world’s internet traffic through Ashburn, Virginia, the data center capital of the world. Dallas, New York, and Seattle are other growing data center hubs.

But the traditional data center approach is evolving, and data science and AI are now influencing the design and development of the modern data center. Artificial intelligence (AI) is driving new efficiencies that transform everything from the location of data centers to their silicon architecture in order to realize new applications.

Data center modernization being assisted by AI

Every industry is leveraging data in some respect to gain new insights and advance their field.

Health care and life sciences companies are examining genomics, biometrics, immunotherapies, brain initiatives, and so on to better predict ailments and improve therapies.

The transportation industry is using data to identify where accidents are likely to occur, eliminate rush-hour bottlenecks, and improve the safety of public transportation.

Law enforcement is combing through data to anticipate threats and improve public safety based on past incidents.

Now, more and more data centers are adopting AI as a way to modernize their operations. With AI, data centers can aggregate and analyze data quickly and generate productive outputs, which operators can use to manage density, reduce power consumption, and increase performance. Data center operators utilize machine learning and AI in new and ingenious ways, ultimately driving efficiencies up and costs down.

Data science and AI build on the virtualized infrastructure that has enabled data center operators to add more workloads to the same physical hardware. Data centers moving to hyperscale improve the density associated with compute, networking, and storage.

Ultimately, using AI, data center operators can optimize architectures for specific workloads. Whole buildings can focus on one workload, whether for health care, genomics, education, or weather. Two concepts are influencing that design.

How data science affects data center design

Data science is the first driver affecting data center design. Since data science is all about collecting larger and larger sets of data for analysis, designers and operators need to account for this when blueprinting data centers.

Data analytics capabilities work to align these large data sets with the appropriate amount of compute, power, and storage. By understanding the data lifecycle, designers and operators gain the ability to store more in data centers. As deployment of IoT devices connected with 5G and advanced networks increases, they will drive data to and from the edge, where it’s analyzed and delivered back to the data center.

Data center management and operations is the second driver. Google’s acquisition of DeepMind is a quintessential example of how AI enhances data center operations and management – after having some fun creating games and apps, Google put DeepMind’s AI to work monitoring servers’ air-conditioning units to prevent overheating.

The system predicts how much energy the servers will expend for a specific function, and then tailors the air-conditioning usage to that demand – this made Google’s cooling units 40 percent more effective and slashed total electricity consumption by 15 percent.
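The predict-then-actuate idea described above can be sketched very roughly in Python. This is a deliberately naive model for illustration only; the forecasting method, load units and thresholds are all invented, and Google's actual system is a far more sophisticated learned model.

```python
# Rough sketch of predictive cooling control: forecast upcoming load
# from recent samples, then scale cooling to match. All numbers and
# the forecasting rule are purely illustrative assumptions.

def predict_load(history):
    """Naive forecast: the average of the last three load samples."""
    recent = history[-3:]
    return sum(recent) / len(recent)

def cooling_setting(predicted_load, max_load=100.0):
    """Map a predicted load to a 0..1 cooling duty cycle."""
    return min(1.0, max(0.0, predicted_load / max_load))

print(cooling_setting(predict_load([40, 50, 60])))  # 0.5
```

The efficiency win comes from running cooling at the level the predicted load actually needs instead of a fixed worst-case setting.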

That capacity and efficiency are leading to more optimized workloads. The same thing is happening at the level of integrated circuits, where data science helps accelerate performance to drive efficiencies.

Data centers are on the move

The trend of dedicating a whole data center to one workload is growing, but just as influential is the growth of mobile data centers.

One reason for this is that data centers are highly sensitive to the costs of electricity and real estate, and places with cheap land and power are becoming scarce.

Organizations are also investing in mobile data centers connected to 5G networks to improve disaster response times, increase safety measures, and avoid downtime.

Internet of Things (IoT) development is another big factor, driven by the need for edge capability. IoT doesn’t just demand that data centers be compact and mobile, it also requires this of the AI and data science itself. Now, machine learning algorithms and cognitive computing algorithms are compactly designed so they fit into mobile devices.

Companies are aiming to drive down the size and the computational and power requirements of these devices. As data science continues to drive data center growth, focus on mobile centers will only increase.

Where will AI take us?

Some might get hung up on the legal, ethical, and societal implications of AI – after all, data science is examining vast amounts of sometimes sensitive data. It’s always important to remember that the technology isn’t the goal; it’s the enabler.

Data science and AI transform the data from the exhaust we all leave behind into the fuel we use for the next great advancements in fields as diverse as medicine, cybersecurity, transportation, energy, climate, and more.

Weighing Doubts of Transformation in the Face of the Future

Deloitte’s 2021 outlook posits enterprises should double down on digital transformation and reskilling. What about those not ready to commit to change?

In its recently released 2021 technology industry outlook, Deloitte lays out recommendations for organizations to be more competitive in the rapidly evolving landscape — yet there may be holdouts who question taking on such moves.

The three macro suggestions the analysis presents are redoubling digital transformation plans, emphasizing retraining workers for the remote world, and reconsidering how manufacturing gets done. Paul Silverglate, Deloitte’s vice chairman and US technology sector leader, spoke to InformationWeek about what organizations might gain if they follow such recommendations and the challenges they may face if they push back against change.

Image: metamorworks - stock.Adobe.com

Some folks say they do not want to pursue digital transformation or believe their current tech skills are enough regardless of industry trends. What potentially lies ahead for them?

You don’t have to [change], but you will be left behind. Seventy-four percent of CEOs believe that their talent force and organization need to be a digitally transformed organization, yet they feel like only 17% of their talent is capable and ready to do that. That gap is glaring. That’s coming from the tops of organizations and businesses. The first mover advantage has kind of passed already. Now we’re getting into the phase of cloud migration and the concept of everything-as-a-service. Digital transformation is easier to attain. You don’t have to be the first mover or early adopter.

The companies that help you live, work, and play inside your home were pretty resilient during the COVID-19 pandemic. Tech, media, and fitness companies like NordicTrack and Peloton that helped you stay inside your house, they were the ones that needed to transform digitally immediately to deal with the significant increase in demand along with significant supply chain challenges.

Now we are seeing other industries that saw a bit of a pause during COVID — consumer, travel, entertainment, energy — those businesses are seeing or expecting this uptick in the summer travel period, the pent-up demand of Americans. Interest rates are very low, and they haven’t been able to spend [as much] money for the last 12 to 18 months by the time the summer comes around.

Those companies are getting nervous about how they will deal with this increase in demand, and what they’re looking to do is transform digitally, and do it quickly, through the everything-as-a-service model.

Now the question becomes how do these companies’ internal workforces get trained up in order to support this change in focusing on things digitally and this increase in demand that’s coming.

Cost drives some of the reluctance to change. Where are we in terms of the costs associated with digital transformation? Is it a manageable cost now? Does it still come at a premium to make it happen? Is the cost becoming more digestible for organizations?

It is becoming more manageable from a cost perspective and from a talent perspective. It is evolving. Hyperscalers, the big cloud companies, recognize the benefit of digital transformation, and each of them has very significant cash incentives they are willing to offer larger organizations to help fund their digital transformation when they move their products to a cloud platform.

That is a very tactical, direct source of funding. If you’re willing to sit down with advisors, the hyperscalers, and the application companies you can put together a digital transformation journey that’s doable within your organization. That is happening now. There are companies on the other side of that already and many, many companies looking into that.

How long that funding will last and if that will continue — that will continue as long as it’s a value proposition to the hyperscalers. There is probably an urgency in getting together that mashup with the hyperscalers, an advisor, infrastructure companies like HP, and application companies to pull that together. As long as it makes money for the hyperscalers, there’ll be money there. Then you’ll need to do it more on your own.

What does the road ahead look like for organizations that think there may be a way for them to just not go through with transformation? What kind of landscape could they face in the years to come?

Talent is fungible but limited. There will be a draw for talent to self-select to companies that are state of the art, not necessarily leading edge, versus companies that move slower. Being able to have people in-house who can help run these programs — it gets more complex when you have to use multiple partners and suppliers. That is a skillset that needs to be built. There’s a scarcity to that talent. If you don’t do this earlier, you may not have the talent to execute on that when the time comes.

The supply chain is getting more complex, meaning these products are complex and that the different things that have to come together to make the product operational come from all over the world. The regulatory environment is increasingly complex so transparency and visibility deep into your supply network, which now will have a lot more partners, really can’t be done with a current, analog environment where you’re doing batch processing on a monthly basis, weekly basis, or end of the day basis.

You really have to have real-time data that is synced to your direct suppliers and other suppliers in your ecosystem so that you can understand if one supplier is not able to support the demand that’s coming, you can toggle to another supplier. That requires real-time data. With the introduction of 5G, particularly in factories and businesses, being able to share information and have real-time data directly from your suppliers and manufacturers is going to become a thing.

Then it will be a process of who has the information quicker to make the order faster and get the allocation earlier. Waiting for your reporting analytics, your demand planning to come at the end of a week or month, you are already going to miss the boat. With the proliferation of 5G, that is where that is headed. Real-time data inside the factory that can be shared in a secure and private way with their ecosystem.

What seems to be the popular route to addressing the talent question that organizations face? Hiring externally for the skills they want? Reskilling in-house? A combination of both?

If you can do this internally with your talent, it’s obviously a stickier and better, cheaper route. It’s going to be a combination of all of those. There are hiring campaigns. Particularly within tech, there’s lots of open positions out there. Organizations like mine have lots of open headcount for talent in this area. Bringing people in and training them up is important.

There’s a rule of thumb about talent within organizations. A third of your talent is raring to go with regards to transformation, a third can get on the journey, and a third can be very resistant to that. If you believe that two-thirds of your talent is ready to go on the journey, they’re dying for that internal training, they know the organization, and they’re well-connected within the organization.

Okta Outlines Growth Plan for Serving the Enterprise

Discussion with the NY Enterprise Tech Meetup pulls the curtain back on a startup evolving into a public company that works with larger players.

At the first 2021 virtual session of the New York Enterprise Tech Meetup, Todd McKinnon, co-founder and CEO of Okta, spoke about his company’s journey scaling up from its first customer to learning how to serve the enterprise space. Okta is a secure identity management provider that works with more than 9,400 customers such as JetBlue, Slack Technologies, and T-Mobile.

McKinnon spoke this week in an online fireside chat with Jessica Lin, co-founder and general partner at venture capital firm Work-Bench, which hosts the meetup. Lin described McKinnon as a tech-oriented founder who came from Salesforce where he served as senior vice president of engineering before co-founding Okta. She asked him how he got the ball rolling to attract customers to Okta though he had limited experience with enterprise sales.

Image: sasun Bughdaryan - stock.Adobe.com


“It was a deliberate plan to start a company where that background would give me an advantage, not just in the ability to build the product but in selling to the first 10 customers,” McKinnon said. Attracting those early customers, he said, takes a healthy amount of evangelism.

“You’re trying to convince the customers to do something that’s totally different, totally radical.” That novel element is necessary for startups looking to serve the space, McKinnon said, otherwise companies would turn to incumbents such as IBM or Oracle.

Okta’s focus on security in the cloud, coupled with his experience with technology, offered some credibility, yet the company still started small, with its very first deal at just $400. McKinnon also said Okta went through other growing pains most startups face, such as a pivot from a previous idea for systems reliability monitoring services for cloud apps.

Moving fast to get to market with the first customer of Okta’s current incarnation as an identity management provider meant the product was still very fresh and raw. “It barely worked,” McKinnon said jokingly.

That was just six months after Okta’s seed round, he said, and the startup took on feedback about its product. “The hardest thing about the early days is … you’ve got to believe in it even when you don’t believe,” McKinnon said.

Lin noted that Okta led off with small- to medium-sized customers but caught the attention of larger enterprises feeling the pain point the company sought to address. McKinnon said Okta used its small size at the time to show big companies it would essentially “run through walls for them” in order to meet clients’ needs.

He said that approach proved more effective than trying to emulate much larger rivals. “That resonates with buyers at these companies,” McKinnon said, compared to vendors at more established rivals approaching it as a basic, nonchalant transaction.

Fielding a question on trying to work with tight-lipped financial enterprises that are reluctant to talk security with new vendors, McKinnon said Okta was initially seen as a player in employee productivity rather than security. “Partly that’s because I think we were scared to have the security conversation,” he said, which might lead to the customer doubting the security of the cloud.

That changed with the ubiquity of the cloud, McKinnon said, and the need in the market for better authentication services. “Where we’re at today, probably 65% of our deals are led by the chief security officer,” he said.

McKinnon suggested early-stage companies that want to connect with enterprise customers leverage what personal expertise they have in-house to gain credibility.

Okta made the push into enterprise clientele about when it reached the 50 to 100 customer threshold, he said. The company did some custom projects for a few clients, including connecting to on-premises systems, McKinnon said, but such bespoke work was not Okta’s focus. It still took some time for the company to find its footing selling its vision to others. “I can think of a bunch of deals we walked away from because we didn’t have the roadmap,” he said.

McKinnon said, in hindsight, he might have taken more time early on to lay out Okta’s vision and roadmap for its customers, but the company now has a clear focus. “We’ve really evolved into this platform message, which is we’re going to be the one identity thing for all of your use cases,” he said.

Gartner on Drivers and Deterrents to Cloud Adoption

Conference session shows midsize enterprises weigh costs and uncertainty about the cloud against potential time savings and efficiency.

At the virtual Gartner 2020 IT Infrastructure, Operations & Cloud Strategies Conference held this week, Mike Cisek, vice president analyst with the midsize enterprise research team at Gartner, examined some of the key drivers and deterrents that influence cloud adoption among midsize enterprises.

Based on Gartner’s surveys, currently only 10% of IT budgets in midsize enterprises is dedicated to cloud. Cisek said midsize enterprises tend to have a rich legacy environment and often are not yet diving deep into cloud resources. Constraints such organizations face can include staffing limitations and a lack of cloud architecture and cloud engineering teams. He said such companies might look at cloud with a broad definition to include infrastructure, platform and software, and want to know how best to make cloud decisions. “This is about migrating legacy or deploying new services to cloud,” he said.

Image: yingyaipumi - stock.Adobe.com

Companies look to cloud adoption for prescriptive purposes that must show some sort of demonstrable benefit, Cisek said, which could be app modernization, workload migration, or deployment of a new service. This is because of the finite budgets and small pool of resources available to such organizations. Midsize enterprises likely do not have dedicated operational teams that work separately from cloud matters, he said. “Operational folks are also doing the modernization and transformation tasks; very different than you’d see in large enterprise.”

Size and scale can affect architectural decisions for midsize enterprises, Cisek said. Those decisions tend to be based around the cost of migration, which typically determines where workloads are placed. For midsize enterprises, Gartner predicts 60% of their workloads will remain on-prem through 2023. Cisek said the remainder of that 60-40 split may sit with cloud and could include edge. “We’re talking about small-scale, highly virtualized environments, single hypervisor solutions,” he said.

Uncertainty about cloud can be a deterrent factor, Cisek said, among organizations that lack dedicated security resources to accommodate it. Their concerns can include worries of exposure that might be created through a cloud-based environment after years of progress with their existing security posture.

Cost is also a factor in cloud adoption, he said, along with revenue uncertainty brought on by the pandemic. “Migration costs are high,” Cisek said, pointing out that companies are not going to modernize just for the sake of modernization.

There are signs that some costs are coming down, but not fast enough at scale, he said. If tools improve to help midmarket IT leaders more accurately forecast what cloud migration models are going to look like, Cisek said, this may speed things up. He also said vendors might eventually eliminate on-prem as an option regardless. “They may be put into a situation where they simply can’t afford to maintain existing architectures,” Cisek said.

Three drivers stood out in a survey Gartner conducted, he said, as reasons why midsize enterprises look to cloud adoption:

  1. The elimination of time spent on hardware upkeep and maintenance ranked first among survey responses. Cisek said many organizations that listed that driver for cloud adoption often had dated application architecture, older infrastructure environments with disparate life cycles, and not much in documentation on the system. “For the most part, it’s consuming a lot of time just to keep existing production workloads alive,” he said.
  2. Disaster recovery was the next top motivation for cloud adoption, according to the survey. While the intent of disaster recovery may be fundamentally understood, Cisek said conventional approaches to it have not been cost effective for many midsize enterprises. Alternatives are available now that fit such organizations’ needs, he said, from vendors, colocation providers, or hyperscalers. “There’s a lot of solutions out there to solve the problem, which is a welcome change,” Cisek said.
  3. Gains in business and IT operations made up the third, top driver for cloud adoption, he said, based on survey responses. There is a rise in data-driven culture, Cisek said, where midmarket IT leaders must get data into the hands of people who can make decisions that affect business outcomes.

A need exists among midmarket IT leaders to show what technology can do for their organizations, he said. Time saved was the leading key performance indicator for measuring return on investment in a separate Gartner survey of CIOs conducted in 2019, Cisek said, which speaks to trends playing out with cloud adoption and IT leaders. “They’re being asked to look for opportunities to increase business operations and efficiency as well as IT operations and efficiency,” he said.

CIOs: The New Corporate Rock Stars

Here are five practical ways technology leaders can meet emerging obligations.

Timing is everything, as the expression goes, and CIOs are once again becoming the “rock stars” of the business world. When chief information officers last took center stage to this extent, Napster had just made its debut and Star Wars fans were flocking to movie theaters to catch Episode I: The Phantom Menace. That was back in 1999, when CIOs were asked to step up and save enterprises from Y2K.

Today, they’re once again being called on to lead their businesses with a different type of bug in the air.

In the early days of the COVID-19 pandemic, CIOs played a key role in the C-suite leading the corporate response. They drove their businesses forward in the rapid adoption of cloud, analytics, security and artificial intelligence, and as a result, countless companies were able to quickly pivot to distributed workforce models, scale up e-commerce operations and shore up supply chains in a matter of days.

Image: portokalis - stock.adobe.com


The growing importance of the CIO won’t change once the crisis is over. From here on out, we can expect the CIO function to become the engine for business transformation in companies across industries. As this happens, CIOs will need to adapt to ensure they can deliver fully on their new obligations — and they need to do so quickly. With vaccines on the horizon, it hopefully won’t be long until the recovery can start in earnest and businesses will once again look to their CIOs for leadership. There are five things CIOs need to focus on to ensure they’re ready when the call comes. I call them the five ‘Rs’.

1. Resilience

We’ve long known that digital transformation can help with business revival and growth. What’s changed is that all businesses now need to put digital transformation into practice as a matter of urgency. This is something the CIO community is aware of: Recent Accenture research suggests that 74% of CIOs believe their companies will need to rethink their processes and operating models to be more resilient.

Moving forward, CIOs will need to accelerate their organization’s technology strategies to respond to customer demand — which has never been less predictable — and safeguard the company’s future. It’s pretty clear from looking at the companies that have done well during the pandemic from a resilience perspective that the cloud will be core to this effort.

2. Restructuring

If CIOs are to deliver fully on the needs of the business, then the ways in which they work will need to change. Our research shows 77% of CIOs expect significant shifts in work design, culture, and mindset. Specifically, much better collaboration between the business and IT is required to ensure that the latter aligns with the needs of the former.

Here, leading CIOs will adopt collaborative, cloud-based virtual spaces that are resilient to the core. We can also expect to see a much greater use of agile work environments where, led by the CIO, IT can work hand-in-hand with the business on rapid software deployments to meet new business needs as they emerge.

3. Reinvention

Accenture research found nearly three-quarters (72%) of CIOs feel that their company will fundamentally change the way it engages and interacts with customers. To ensure the business is fit for the next phase, CIOs will need to focus on enabling digital and as-a-service products, real-time channel integration capabilities, and next-generation digital customer experiences.

Cloud platforms are a key part of this reinvention. The CIO’s role will be to find the right combination of these platforms, install them, and then build best practices from a business process standpoint. So, nothing short of re-engineering the business.

4. Reskilling

Our research found that 61% of CIOs expect to grow the percentage of their IT workforce dedicated to innovation to help rebound from the pandemic successfully. This is a step-change from the past and the CIO will need to reskill the workforce and enable them to be task focused. In the future, CIOs and their IT teams will spend much less time on systems and infrastructure maintenance, coding and other operational support tasks, and much more on innovating inside the company with technology as the driver.

5. Reduction

The final R is one that CIOs are already familiar with: the need to drive cost efficiencies. What’s different now is that CIOs will look to reduce costs or spend more efficiently while simultaneously driving innovation and business transformation. This will be made possible through the efficiencies to be found in cloud-first operational models and through the reskilling efforts referred to above.

As the new rock stars of the business, CIOs are increasingly being called into the boardroom to offer their advice and expertise. The question they’re being asked: “How can we power a transformation of our business?” The five ‘Rs’ offer the answer, and I for one can’t wait to see CIOs elevate their role further in the years ahead.

Google Cloud’s Penny Avril on Preparing for the Unexpected

Cloud resources must be prepared to adapt to tumultuous events, from sudden surges in online shopping to the repercussions of the pandemic.

Keeping operations as disruption-free as possible is a necessity for enterprises, though that might seem more complicated with resources running in the cloud. Disasters, wild fluctuations in market demand, and the general upheaval of adjusting to the pandemic could raise questions about the ability of cloud native databases to adapt. No one wants the transformation strategy to implode in the middle of a storm.

In an interview with InformationWeek, Google Cloud’s senior director of databases, Penny Avril, spoke about making sure that unforeseen events do not cause disarray with resources that have been entrusted to the cloud.

How do you prepare for the unpredictable? How do you make plans for unknown issues that may come up?

“In dealing with unpredictability, there are two key aspects here. One is being agile in terms of scale and capacity. It’s not just renting out machines. It’s being able to use those machines without experiencing any downtime. The other aspect really is the speed of developing new applications. One thought is how do we adapt the application we’ve got to greatly increase volume of traffic. The second is possibly the need for new applications.

Image: peshkov - stock.Adobe.com


“We’ve seen a lot of this this year with COVID, in some cases like the New York State Department of Labor, which was managing unemployment claims. They saw a massive increase in traffic. I think they went from 350,000 sign-ons in a week to 6 million in the first week. Their existing database, which was an on-prem mainframe, just couldn’t handle it. They put one of our cloud native databases in front of it. They deployed this in a matter of days and were able to handle that volume.

“The other aspect, developing new apps. With COVID, which is obviously on everyone’s mind, we saw a number of COVID-related apps, such as the City of Madrid, which deployed in a number of weeks an app where their residents could track symptoms. That was more about speed of developing rather than being able to cope with increases in terms of volume and velocity of data.

Are there types of scenarios that you test for in advance? Do you map out potential issues to prepare a response? What thought processes and strategizing do you go through?

“We want unpredictable times to be a non-event. That is our end goal. Sometimes we know there are going to be changes in volume of traffic — Black Friday and Cyber Monday being obvious ones. The best news is it’s a non-event. How we plan for that is really in how we design these databases. They’re really designed for unlimited scale and scale that can be turned on, both up and down, without any interruption to the application.

“That is really our core design principle. One of the strengths that Google has here is these cloud native databases, such as Spanner, Bigtable, and Firestore. They were battle-tested by Google services themselves: YouTube, Gmail, the list goes on. They had to have true unlimited online scale. Spanner is a unique database in that it is the only fully managed relational database that can scale horizontally without any limits. We prepared at the design phase of these databases.

Are there lessons that have been learned as you’ve adapted to the events of this year and other situations?

“One thing brought to our attention this year was the increase in customers wanting to move their databases to the cloud to take advantage of unlimited scale with no downtime. We’re seeing what I would call more mainstream or enterprise customers move. They are very familiar with old guard technology; they use a lot of existing ecosystem tools. This is one thing that’s really come home to us. It’s almost like Google has solved the difficult problems here.

Penny Avril, Google Cloud


“We have these databases that have unlimited, globally distributed scale. We almost just need to make them slightly more accessible for users. We’ve done that in a couple of ways. The big way is to work with developer tools, such as ORM (object-relational mapping) tools, so people have an easier time migrating existing apps to these databases or developing new apps against them, easing the onboarding.

How is the velocity of data changing for 2021 and beyond?

“We’ve seen a couple of big trends. One trend is in the old on-prem world, customers had monolithic applications that not only had problems scaling but problems in terms of developing new features against these large monolithic apps. We’re seeing a move to microservices and Kubernetes. In terms of data volumes, at any one point in time at Google Cloud, we’re seeing over half a million Kubernetes pods connecting to Cloud SQL.”

5 Use Cases for the Next Great Frontier in IT Services

IT leaders should take note of the industry dynamics and these emerging use cases in order to get ahead of their peers in the next few years.

CIOs and IT leaders are looking to transform their business and enter the digital era, driven by technology advancement and cost containment.

The 2021 Gartner CIO Agenda Survey indicates that artificial intelligence (AI) and machine learning (ML), data and analytics (D&A), cloud/edge and the Internet of things (IoT) will all serve as game changer technologies in the coming year. However, CIOs today consider each technology discretely rather than find the glue between them to serve complex use cases.

Image: Argus - stock.adobe.com

This siloed approach will not bring the same value as when they are viewed as complementary to one another. By 2025, Gartner anticipates half of CIOs and technology buyers will utilize a nexus of cloud, edge, 5G, IoT and D&A technologies for their digital business needs.

Simply put, when combined in a meaningful manner, these technologies will drive the next great frontier in IT services.

Several use cases drive the need for using these technologies in tandem with one another, especially as CIOs accelerate digital transformation across their business and IT operations. These use cases may seem relatively novel today, but by 2025, each of them will become mainstream and shape IT spending, especially in the ongoing pandemic era.

The five use cases that illustrate the power of the nexus of cloud, edge, 5G, AI, IoT, and D&A are the following:

Smart factory

Gartner defines a smart factory — sometimes referred to as the Fourth Industrial Revolution or Industry 4.0 — as a highly digitalized and connected production facility that relies on smart manufacturing. The requirements in building a smart factory are operational efficiency, workplace health and safety, and collaboration with the product design unit at their manufacturing site.

The nexus of technologies comes into play among next-generation smart factories, where services that modernize the manufacturing process and analyze the technical, economic, worker and environmental perspectives will be required. The addition of wireless technologies in smart factories allows for an automated alert system that identifies critical issues and faults that need immediate attention. AI can also be leveraged to build immersive augmented reality experiences, such as computer vision to monitor worker safety at the factory site.


Drones and mobile robots

Drones and mobile robots have become common tools in numerous industrial, commercial and consumer settings. These devices are being used for site inspection, mapping, security, asset and inventory tracking, data collection, and more.

The requirements for building a drone include ensuring real-time access to control and telemetry, along with video/payload data from the drone fleet over a secure, reliable interface.

In order to support drones, technology service providers should build a ready-to-use, web-based ground control station application for real-time telemetry, control and video streaming with cloud connectivity over 5G networks. Additionally, they must offer AI platforms that provide a fully automated workflow for preparing datasets, training models and deploying trained models for inferencing. D&A will make way for the processing of large amounts of telemetry data, too.

Connected and autonomous vehicles

Automakers are increasingly incorporating cellular data connections into vehicles to unlock new opportunities that leverage edge compute, IoT, and cloud-based data storage and analytics. This includes vehicle-to-everything communication, which is the ability for vehicles to communicate directly with nearby vehicles, often using 5G or Wi-Fi technology, as well as driverless vehicles that can operate without human intervention.

The nexus of technology will be present via cloud platforms that enable scalable, secured, high-quality digital capabilities, including navigation and fleet management services for vehicles, among others. Likewise, services that allow for rapid data sharing and management will be crucial to many of the capabilities found in autonomous vehicles through 2025, reinforcing the importance of the increased speed that commercial 5G networks will allow for.

Remote healthcare

Remote healthcare provides patients in remote areas the ability to receive medical care from physicians and specialists at leading hospitals in distant metropolitan areas. During COVID-19, the use of remote healthcare is even more acute in order to support real-time doctor interaction and immersive video collaboration for telemedicine.

Strong 5G connectivity and cloud-based video conferencing are essential enablers of remote healthcare, but IoT and AI/ML have a role to play here as well. IoT technologies must be used to monitor endpoint medical devices, such as tablets and other critical hospital equipment, in order to catch impending defects and guarantee zero downtime. AI/ML can be leveraged to look for patterns in historical patient medical data to make real-time decisions on medical procedures and protocol.

Smart retail

Smart retail is a term used to describe a set of smart technologies that are designed to give the consumer a greater, faster, safer and smarter experience when shopping. With the proliferation of smartphones and e-commerce, the retail industry has turned to smart retail in order to bring an experience to the buyer that is content-rich, near-instant, real-time and dynamic with the ability to cross-sell and upsell.

Hyperscale cloud providers and telco providers are developing new solutions such as smartphone apps that track in-store customer movement, which is used to send targeted information about new product arrivals and discounts. They even offer maps of large department stores. Additionally, applying D&A to information collected from IoT sensors throughout stores will create a personalized buying experience for the consumer.

Taken together, this nexus of technologies will better meet customer needs, provide more insights for CIOs, and lower enterprise costs for end-to-end delivery. IT leaders should take note of the industry dynamics and these emerging use cases in order to get ahead of their peers in the next few years.

Andy Jassy: Speed is Not Preordained; It’s a Choice

AWS CEO talks cloud, the changing landscape of enterprise IT, and the need to be maniacal and relentless to drive reinvention.

In his keynote to kick off this month’s virtual AWS re:Invent conference, CEO Andy Jassy discussed tangible shifts AWS saw in the enterprise IT landscape and decisions organizations might need to make for successful transformation. Citing AWS’s $46 billion revenue run rate, he said the company continues to experience accelerated growth rates. “That growth is significantly driven by the growth of cloud computing in the infrastructure technology space,” Jassy said.

Despite such gains, AWS is part of a much broader IT global segment, he said, where spending on cloud is just 4% of the overall IT market. Jassy said AWS believes the vast majority of computing will move to the cloud in the next 10 to 20 years. “It means there’s a lot of growth ahead of us,” he said.

That growth might build from moves organizations made out of necessity in 2020. The onset of the pandemic compelled most companies to try and save money, Jassy said, which included rethinking their plans. Many enterprises, he said, went from just talking about migration to forming real plans. “When you look back on the history of the cloud, it will turn out that the pandemic accelerated cloud adoption by several years,” Jassy said.

Image: Courtesy of AWS

Andy Jassy, CEO, AWS

Such changes in thinking can speak to the overall need to reimagine and survive, he said. Looking at companies listed among the Fortune 500 in 1970, only 83 of those organizations remain ranked, Jassy said. Of the companies named in 2000, just half remain on the list. “It is really hard to build a business that lasts successfully for many years,” he said. “To do it, you’re going to have to reinvent yourself. Often you’re going to have to reinvent yourself multiple times.”

Part of reinvention is building up muscle within the organization to increase the speed of change, Jassy said, regardless of how huge the company might be. There are a number of leaders at enterprises, he said, who resigned themselves to move slowly because of the nature of their culture and massive size. “Speed is not preordained. Speed is a choice,” Jassy said. “You’ve got to set up a culture that has urgency and wants to experiment. You can’t flip a switch and suddenly get speed.”

There can be a tendency, he said, for companies to pursue reinvention only at desperate times, when they may be on the verge of collapse. Waiting until such a point to explore change can mean the results will be a crap shoot, Jassy said. “You want to be reinventing when you’re healthy,” he said. “You want to be reinventing all the time.”

Successful reinvention can come from an organization’s reinvention culture, Jassy said. It can also come from knowing what technology is available. He said leadership to invent and reinvent is essential, citing the strides made by companies such as Airbnb, Peloton, and Stripe in evolving their respective market segments. “Huge amounts of invention have gone into reimagining these spaces,” Jassy said. “If you’re going to be a leader that reinvents, you have got to be maniacal and relentless and tenacious about getting to the truth.”

That includes knowing what competitors are doing in the market, he said. It is also vital to know how customers regard the product, Jassy said. Knowing what works and what does not can run into internal barriers, he said, if there are individuals who keep information hidden. “You will always have a lot of people inside the company who try to obfuscate that data from you.” They might be motivated by self-preservation, he said, or believe that restricting information is a beneficial move.

It takes courage, Jassy said, to force an organization to change despite such reluctance. He cited Netflix cannibalizing its own DVD rentals in favor of its streaming service as an example of evolving with market momentum. Likewise, Amazon shifted in the late 1990s from an owned-inventory retail business to also offer third-party sellers’ products the way eBay and Half.com did, he said. “We did it because we know you cannot fight gravity,” he said, acknowledging that some market forces cannot be stopped. “You’re much better off cannibalizing yourself than having someone do it to you.”

Talent obviously can be a significant factor in how and when organizations embrace transformation, Jassy said, with new blood often leading efforts to push for invention and reinvention. This can stem from incumbents within the company being reluctant to tear down systems and processes they built and then get trained to work with new technology and resources. He suggested encouraging fresh thinking to drive changes that speak to tangible needs that can help the organization evolve. “You want builders and talent that’s hungry to invent,” Jassy said, as long as they try to solve problems for customers rather than simply chase technology they believe is cool.