Leap Frog from Mainframe to the Cloud

Related Topics: Cloud Engineering, Digital Strategy, Insurance

by Charles Sybert

More than 70% of Fortune 500 companies continue to run business-critical applications on mainframes. Today, the digital age’s insatiable demand for real-time data, interactive user experiences, and innovative products weighs heavily on the insurance technologist’s mind. Surveying an application landscape of legacy systems, ranging from mainframes to client-server applications to cobbled-together data highways, only deepens that concern and compounds the challenge of meeting these demands.

Technologists are looking toward migrating the mainframes to the cloud, a term often traced to a 1996 business plan drafted at Compaq’s offices in an office park outside Houston. From those humble beginnings, the technology has evolved into cutting-edge applications that take advantage of near-limitless computing, storage, AI, and ML capabilities. Not only does the cloud provide the capability to harness the latest technology frameworks; IT costs could drop by as much as 50% by taking advantage of cloud application architectures instead of running the mainframes locally.

Moving the Mainframe to the Cloud

The mainframe’s lineage traces to the late 1930s, when Howard Aiken of Harvard University conceived the Harvard Mark I, but mainframes became business-focused machines in the 1960s with the widespread use of COBOL. The computing power of the mainframe is astounding, but it can become very expensive. In general terms, 100 MIPS can cost up to $100,000 a year on an owned mainframe, while the same 100 MIPS in an Azure environment can cost up to $10,000 annually. Moving to a cloud-based platform makes financial sense and enables access to modern methodologies and DevOps tools like GitHub, along with the latest deployment platforms like Kubernetes, Docker, and Red Hat OpenShift.
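As a back-of-the-envelope illustration (a sketch using the rough per-MIPS figures above, not a pricing tool; real costs vary widely by workload and vendor), the savings scale linearly with capacity:

```python
# Illustrative figures only, derived from the article's example:
# ~$100,000 per 100 MIPS per year owned, ~$10,000 per 100 MIPS in the cloud.
OWNED_COST_PER_MIPS = 1_000   # assumption: $100,000 / 100 MIPS
CLOUD_COST_PER_MIPS = 100     # assumption: $10,000 / 100 MIPS

def annual_savings(mips: int) -> int:
    """Estimated yearly savings from moving `mips` of capacity to the cloud."""
    return mips * (OWNED_COST_PER_MIPS - CLOUD_COST_PER_MIPS)

print(annual_savings(100))  # the article's 100 MIPS example: 90000
```

At the article’s figures, even a modest 100 MIPS workload would save roughly $90,000 a year before migration costs are factored in.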

Journey Options

The migration journey is unique for each company, but the common patterns include:

  1. Lift and Shift – first remediate any significant defects and poorly performing code modules, then import the code into a cloud-based infrastructure
  2. Re-Host – instead of a cloud infrastructure, migrate the code to a data center specializing in mainframe hosting and build the integration points between the center and your infrastructure
  3. Re-Platform – keep the same code base but upgrade the operating system to take advantage of more modern mainframe technology
  4. Re-Factor – using semi-automated tools and techniques, migrate the legacy code to a modern programming language, removing the need for legacy system support
  5. Update – optimize the codebase to better suit the business processes

Some companies will start with a lift and shift to get off the mainframe equipment and then execute a re-factor approach once they are stable, while others will begin with the re-factoring process and plan to modernize the legacy capabilities eventually.
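The choice of starting pattern can be sketched as a simple decision rule. The inputs and ordering below are illustrative assumptions for discussion, not a formal selection methodology:

```python
# Hypothetical first-step chooser, loosely following the journey
# options above. All rules here are illustrative assumptions.
def first_migration_step(staying_on_mainframe_hardware: bool,
                         ready_to_leave_legacy_language: bool,
                         high_defect_backlog: bool) -> str:
    if staying_on_mainframe_hardware:
        return "Re-Platform"      # upgrade to modern mainframe technology
    if ready_to_leave_legacy_language:
        return "Re-Factor"        # translate code to a modern language
    if high_defect_backlog:
        return "Lift and Shift"   # remediate defects, then move as-is
    return "Re-Host"              # move to a specialist hosting data center
```

A real roadmap would weigh many more factors (cost, risk tolerance, staffing), but the point stands: the journey is chosen per company, not one-size-fits-all.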

Benefits

Migrating to the cloud mitigates risks and provides additional benefits such as:

Application Outage Risk – over time, the code base has evolved to support complex business requirements and data structures, which can make the code difficult to support; any change risks causing a production outage

Agility and Scalability – being able to respond to new demands for data and innovation using continuous integration and continuous delivery (CI/CD) on a near-limitless resource infrastructure

Efficiency – automation of simple business tasks and systematic follow-ups offloads work from an already overburdened staff

Data Access – real-time processing and access to data to feed business-critical analytics and AI

Closing the Skill Gap – as the resources who worked on the code near retirement, newer resources have neither the training nor the desire to learn an older language

Change Impact

The mainframe migration presents unique challenges that must be addressed.

Testing – the level of testing must be adjusted to the migration journey. If the lift-and-shift method is used, a simple end-to-end regression test with business validation of the core functionality will be adequate. If the code is refactored or otherwise changed, a more thorough, detailed testing effort is required to revalidate the capabilities. It is essential to focus attention on the upstream and downstream systems’ data consumption during testing: with new integrations to the cloud, unintended data manipulation is possible during transmission. In addition, stress testing and performance testing are required to ensure the volume of data is transmitted and processed within business service levels.
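One common way to run the regression validation described above is a parallel run: execute the same batch on both the legacy and migrated systems and compare outputs record by record. The sketch below assumes both systems can export the batch as pipe-delimited text keyed by an ID in the first field; the file layout is an illustrative assumption:

```python
def load_records(path: str) -> dict:
    """Map record key (first pipe-delimited field) -> full record line."""
    records = {}
    with open(path, encoding="utf-8") as fh:
        for line in fh:
            line = line.rstrip("\n")
            if line:
                records[line.split("|", 1)[0]] = line
    return records

def compare_runs(legacy_path: str, cloud_path: str) -> list:
    """Return human-readable differences between the two batch runs."""
    legacy, cloud = load_records(legacy_path), load_records(cloud_path)
    diffs = []
    for key in sorted(set(legacy) | set(cloud)):
        if key not in cloud:
            diffs.append(f"{key}: missing from cloud output")
        elif key not in legacy:
            diffs.append(f"{key}: extra in cloud output")
        elif legacy[key] != cloud[key]:
            diffs.append(f"{key}: field values differ")
    return diffs
```

An empty diff list over several representative business cycles is strong evidence the migrated system preserves the legacy behavior; any differences become concrete items for the business validation team.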

Business Changes – with the re-factor or update techniques, the user experience will have changed. Communication and training must be developed to inform users of the change and the expected behavior. This will require change management, as most mainframe users have memorized the patterns and can quickly perform transactions without looking at the screen.

Downstream Impacts – with code refactoring and other improvements, the quality of the data, the business transactions being performed, and other business processes may have changed. Downstream systems will need to adjust: as data definitions change and business processes move upstream, the data correction or manipulation those systems perform may no longer be required.

Making It a Reality

Before migrating the mainframe to the cloud, a company-specific perspective needs to be built, answering important questions such as:

  • What is the level of investment we can sustain to move the code?
  • Does the code meet our business goals for the next 5 years?
  • What level of projected maintenance will it require?
  • What additional support systems will be required?

Depending on the return on investment and alignment with strategic goals, moving the code to the cloud may make sense. On the other hand, it may be wiser to replace the aged mainframe technology with commercial off-the-shelf (COTS) software that is already cloud-supported and offers modern technology capabilities.

RCG has developed a proven roadmap methodology to make technology investment decisions considering business objectives, existing technology stacks, trends in technology, and strategic goals. We would like to show you how this can bring clarity to the difficult decision of where to go next.
