
Operations Optimization Helps to Drive Digital Transformation

The efficiency and effectiveness of your operations can be the difference between failing and scaling your business. But how do you know if your current operation is performing optimally?

Focused Operations Optimization


Have you ever used the navigation app Waze? It’s a useful tool, but if you don’t have the settings right, it can feel like you are driving 100 miles to reach a destination that is only 25 miles away, taking every side road and turn to get there. When that happens, I pull over, take a breath, check the settings, and review how I want to reach the destination. While this modern method of navigation has its quirks, it sure beats feeling like you are wandering and will never arrive.

The journey to Digital Transformation bears some similarities. Successfully reaching that destination requires a keen focus on creating greater competitive power, targeting Customer Engagement, Workforce Enablement, and Operations Optimization. All of these require speed, agility, and organizational changes to make significant impacts.

Sometimes, however, the transformation effort itself needs transformation. Large efforts often reach an inflection point at which they aren’t scaling fast enough, or they begin to go a bit off the rails. In these cases, optimizing operations is essential to the success of the digital transformation. Most large firms have highly talented teams engaged in their transformation efforts, but when those teams bump up against unforeseen problems, pausing to regroup and refocus, just as on a road trip, can be beneficial.

This scenario occurs frequently in transformation efforts. The following three case studies illustrate how operations optimization is foundational to a successful digital transformation:

  1. Case Study 1: Optimizing Modern App Development
  2. Case Study 2: Scaling Source Data for New and Innovative Analytics Capabilities
  3. Case Study 3: Recovering from a Data Derailment

Optimizing Modern App Development

RCG was recently engaged to help one of our clients in the travel and leisure industry dramatically improve their quality assurance operations on a large digital transformation initiative.

This digital transformation project grew very quickly, ramping up to more than 1,200 people in less than 18 months. The transformation included more than 40 Scrum teams building web and mobile applications and a slew of microservices to integrate with the existing enterprise systems.

This transformation project is also being used to drive process changes into the client’s organization. While the client was successful in building their target applications and getting them into the marketplace quickly, they recognized that more rigor was required to sustain the pace of innovation and delivery over the lifetime of the applications.

At the client’s request, RCG performed a detailed review of the existing quality assurance and testing organization and prepared a set of recommendations that we are currently implementing. The result will be a move from a chaotic release-driven process to a more predictable and scalable release process that delivers high-quality code every time.

Step 1: Stop the bleeding

It is critical to minimize the impact of any organizational change on work in process. Schedules and commitments may have been made months or years in advance and can be difficult to change, and it is impractical to assume that everything can be fixed in time for the next release. After all, many organizational dysfunctions take years to evolve; changing them overnight is not realistic. Instead, it is important to evaluate the risks present in the current processes and then work to mitigate the worst of them to the extent possible. In many cases, simply performing a detailed risk assessment is an eye-opening experience.

Imagine, for example, discovering that your payment gateway testing processes cover only 10% of the possible test cases. The range of potential impacts can be significant: missing payments, applying the wrong payment amount, not handling payment processing errors correctly. The list can be daunting. But simply knowing there is an issue in payment gateway testing allows the organization to triage, focusing on the worst-case scenarios first and gauging the level of risk it is willing to accept until the larger problem can be resolved. Knowing where the testing gaps are can also justify fortifying other areas against unforeseen problems, e.g., adding extra checks and balances to manual processes.
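To make that kind of triage concrete, the gap analysis can be reduced to a simple risk-ranking exercise. The sketch below is a minimal illustration in Python; the scenarios, likelihoods, and impact scores are assumptions for the example, not data from the engagement.

```python
# Minimal sketch: rank untested payment-gateway scenarios by risk so the
# worst-case gaps get attention first. All scenarios and scores here are
# illustrative assumptions, not actual client data.

test_gaps = [
    {"scenario": "duplicate payment submitted twice", "likelihood": 0.30, "impact": 9},
    {"scenario": "wrong payment amount applied",      "likelihood": 0.15, "impact": 10},
    {"scenario": "gateway timeout never retried",     "likelihood": 0.40, "impact": 6},
    {"scenario": "declined card recorded as paid",    "likelihood": 0.05, "impact": 10},
]

# Risk score = likelihood x impact; triage from the highest score down.
for gap in sorted(test_gaps, key=lambda g: g["likelihood"] * g["impact"], reverse=True):
    print(f'{gap["likelihood"] * gap["impact"]:5.2f}  {gap["scenario"]}')
```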

A big part of this first step is communicating the unpleasant truth to management: their baby is ugly and will continue to be so for the immediate future. The most important goal of this first step is to understand what you don’t know and to prioritize the investments needed to fix the problems and move toward a predictable release process.

Step #1 Key Deliverables:

  • Developing a plan for improving the health of the quality assurance process based on a detailed analysis of the organization and its touchpoints
  • Identifying areas of highest risk and communicating plans for mitigating them
  • Implementing an immediate system of checks and balances that “gates” the current processes to provide an extra set of eyes until the longer-term fixes are in place

Step 2: Restructure the organization

In many cases, the quality assurance organization has grown organically based on the people and the tactical work being done day to day. This approach works while the organization is in “start-up mode” but tends to implode when costs begin to spiral out of control due to poor code quality or inconsistent quality assurance practices. During our in-depth assessment of the organization, we focused on understanding the long-term product roadmap versus the daily tactical activities. This is important for a couple of reasons. First, if the product owners have advanced features planned that will require testing capabilities that don’t yet exist, it provides an opportunity to get engaged early. Second, it provides advance notice of the need for additional training or capacity.

In our case, restructuring also included developing an understanding of the skills, cost, and physical location of EVERY member of the quality assurance organization. This is essential when there is an offshore component to the quality team; being in the same city is not the same as being in the same building or room.

Our goal was to co-locate the quality assurance resources as close to the development teams as possible to facilitate direct conversations and learning. We also interviewed EVERY member of the quality assurance team to compare their skills against the organization’s needs. In many cases, there was a gap between existing skills and the stated goal of increasing test automation coverage.

Another concern that we identified and evaluated was that each product group had essentially created its own quality group and processes to meet its specific needs, without consideration for integration into the larger quality assurance organization. Consolidating the multiple sets of quality processes and building the right organization to replace those ad hoc groups was a key driver of the organizational changes.

Step #2 Key Deliverables:

  • Evaluation of existing quality assurance organization that includes skill sets and locations
  • Proposed organization structure that builds standards and processes into shared services while maintaining team-level autonomy for agile development

Step 3: Modernize Standard Processes

Building out a shared services team that can develop and enforce standard processes across the various scrum teams is essential to improving overall product quality. Many organizations believe that agile development means everything must be done on the fly. In truth, agile requires much more planning and organization to enable teams to develop on a cadence and release on demand. Development scrum teams do not have the time to create things like automation frameworks, test case analysis processes, or test data management strategies that work beyond their immediate needs. Modern application architectures also require thought about performance and security testing, accessibility testing, localization, and continuous integration pipelines. A solid shared services capability is crucial in enabling development teams to focus on development.
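As one concrete illustration of what a shared service can take off a scrum team’s plate, the sketch below shows a centrally owned test-data fixture, assuming pytest. The booking records and field names are hypothetical; the point is that one team maintains the fixture and every scrum team reuses it.

```python
# Minimal sketch of a centrally owned test-data fixture, assuming pytest.
# The booking records and fields are hypothetical; one shared services team
# maintains this module and every scrum team's suite imports it.
import copy

import pytest

_BASELINE_BOOKINGS = [
    {"booking_id": "B-1001", "status": "confirmed", "amount": 199.00},
    {"booking_id": "B-1002", "status": "cancelled", "amount": 0.00},
]

@pytest.fixture
def bookings():
    # Give each test its own copy so one test cannot corrupt another's data.
    return copy.deepcopy(_BASELINE_BOOKINGS)

def test_cancelled_bookings_have_zero_amount(bookings):
    for booking in bookings:
        if booking["status"] == "cancelled":
            assert booking["amount"] == 0.00
```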

Step #3 Key Deliverable:

  • A shared services organization that can support test data management, test automation, and QA/QE best practices

Step 4: Start your journey with quality

One of the most important steps to improving the overall code quality of an organization is to get people to recognize that code quality is not something that can be tested into a product. It must be designed in, beginning with the creation of the user stories. The quality assurance organization must be able to educate the product and engineering groups on what it takes to design requirements that make it easier to create test cases that test the system properly.

Too often, the product folks come up with an idea they like and write a raft of user stories, but fail to consider how the epic-level end-to-end testing will be accomplished. The quality assurance organization should be responsible for establishing process gateways that begin during the product definition phase. Unless a story is testable, it is NOT ready to be built. The acceptance criteria for a story should also include guidance on test data requirements.

Here are some of the important gateways that the quality assurance organization must be empowered to establish in support of the enterprise goal of improving code quality (a minimal sketch of the first gateway follows the list):

  • Approval of the creation of the user stories. This ensures that the stories meet specific levels of completeness PRIOR to sending them to development for implementation. Typical criteria for this gateway include proper acceptance criteria, proper scoping of the stories, and ensuring that the story is testable.
  • Verify that unit testing is done before accepting code into QA for testing
  • Verify code reviews have been performed
  • Verify the documentation matches the actual as-built configuration

Building quality products is more than testing what was built; it also requires validating the entire process leading up to the first line of code being created.

Step #4 Key Deliverables:

  • Creation of quality gateways to provide guard rails for product management and development to work within
  • Enforcement of the gateways until the new behavior becomes ingrained and self-governing
  • Continual monitoring of team performance and code quality to identify areas of improvement

Step 5: Change the attitude about the Quality Organization

In many organizations, the quality assurance organization is viewed as the “business prevention unit,” a group that has nothing better to do than slow things down. Most of the time that perspective is earned because QA is never involved in the upfront discussions about the products being built. They are handed a black box at the end of a long, secretive process and told to test it to make sure it works; in many cases, QA becomes a forensic exercise. When something slips through the QA process, it is not just a failure of QA; it is a breakdown of the entire system.

Step #5 Key Deliverable:

  • Develop a plan that includes quality in every product development and delivery discussion (the quality assurance organization should become a partner to the entire organization)

Results: App Development Optimization with Modern Quality Assurance and Software Testing

As a result of our quality assurance remediation efforts, the following improvements were implemented to optimize the app development operations:

  • Eliminated status reports that took a group of people hours to create every week (reports that no one read) and replaced them with updated dashboards in Jira that contain accurate and up-to-date information that people can use at their leisure
  • Reviewed and replaced thousands of test cases with fewer test cases that provide better coverage
  • Replaced nearly a dozen different test automation frameworks with a unified testing framework that incorporates the best ideas from the others and standardizes best practices for automation across the program
  • Elevated the quality assurance organization to an equal partnership with product development. The results have significantly reduced the number of defects leaking through to production by improving the processes that occur PRIOR to code reaching quality assurance
  • Initiated an active outreach program to the rest of the organization. The current quality assurance director has met nearly everyone in the IT organization and has created an open-door policy that has significantly improved intergroup communications

Scaling Source Data for New and Innovative Analytics Capabilities

A global supplier to the financial services industry was expanding both data infrastructure and IoT infrastructure to support a new connected services strategy.

In support of this strategy and the volume of data expected, the Enterprise Data and Analytics team was tasked with providing a scalable and flexible data solution built on cloud infrastructure architecture. The goal was to enable the processing of real-time streaming IoT and service data, along with the ability to store and process historical service data spanning several years.

Short, Near, and Long-Term Milestones

RCG provided thought leadership around cloud migration strategy and architecture. We started with a batch and streaming pilot that moved small data sets to Azure against 30-day, 60-day, and 90-day milestones. RCG was brought in during the last week of the first 30-day initiative to accomplish the following (a streaming sketch in Python follows the list):

  • Batch ingestion from Teradata to Azure ADLS Gen 2 using Azure Data Factory
  • Streaming ingestion from the on-premises Message Queue (SOUP) to HDInsight Kafka on Azure
  • Setting up basic architectural guidelines
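To give a flavor of the streaming leg, as promised above, the following minimal sketch publishes JSON events to a Kafka topic and reads them back using the open-source kafka-python client. The broker address, topic name, and message fields are placeholder assumptions, not the client’s HDInsight endpoints.

```python
# Minimal sketch of the streaming path, assuming the kafka-python package.
# Broker address and topic are placeholders, not the client's endpoints.
import json

from kafka import KafkaConsumer, KafkaProducer

BROKERS = ["kafka-broker.example.com:9092"]  # placeholder broker
TOPIC = "service-events"                     # placeholder topic

producer = KafkaProducer(
    bootstrap_servers=BROKERS,
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)
producer.send(TOPIC, {"device_id": "device-0042", "status": "ok"})
producer.flush()

consumer = KafkaConsumer(
    TOPIC,
    bootstrap_servers=BROKERS,
    auto_offset_reset="earliest",
    consumer_timeout_ms=5000,  # stop iterating once no new messages arrive
    value_deserializer=lambda b: json.loads(b.decode("utf-8")),
)
for message in consumer:
    print(message.value)  # e.g., route a sample of messages on to ADLS Gen 2
```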

Clearing Roadblocks

The initial two-week effort included clearing multiple roadblocks and achieving key goals such as:

  • Designing network topology and architecture for on-premises-to-Azure communication
  • Designing and allocating subnets and CIDR ranges for batch, streaming, and analytics clusters
  • Setting up subscriptions to track chargebacks and costs for the infrastructure team
  • Installing HDInsight Kafka and batch clusters
  • Opening firewall ports so that on-premises Kafka producer and consumer VMs could produce Kafka messages
  • Developing producer and consumer code to subscribe to SOUP, publish to Kafka, and sample consumed messages to ADLS Gen 2 and Cosmos DB (see the sketch after this list)
  • Configuring the Kerberos client on on-premises producer VMs, setting up routes to the Azure ADDS KDC from the on-premises cluster, and obtaining Kerberos tickets from on-premises hosts to Azure ADDS
  • Creating connectivity from the on-premises Teradata server to Data Factory, and installing the Data Factory integration runtime to pull batch data into ADLS Gen 2
  • Creating data pipelines for the efficient delivery of data to business sandboxes
  • Automating and templatizing infrastructure for business sandbox creation
  • Automating Power BI report generation against data in ADLS Gen 2
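One of the items above, sampling consumed messages into ADLS Gen 2, can be illustrated with Microsoft’s azure-storage-file-datalake SDK. This is a minimal sketch under assumed placeholder names for the account URL, filesystem, and file path; it is not the client’s actual configuration.

```python
# Minimal sketch: land a sample of consumed messages in ADLS Gen 2, assuming
# the azure-storage-file-datalake and azure-identity packages. The account
# URL, filesystem name, and file path are placeholders.
import json

from azure.identity import DefaultAzureCredential
from azure.storage.filedatalake import DataLakeServiceClient

service = DataLakeServiceClient(
    account_url="https://exampleaccount.dfs.core.windows.net",  # placeholder
    credential=DefaultAzureCredential(),
)
filesystem = service.get_file_system_client("raw")  # placeholder filesystem

sampled_messages = [{"device_id": "device-0042", "status": "ok"}]  # from Kafka
payload = "\n".join(json.dumps(m) for m in sampled_messages)

file_client = filesystem.get_file_client("service-events/sample-0001.jsonl")
file_client.upload_data(payload, overwrite=True)
```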

Results Matter

The very point of Digital Transformation, of course, is the realization of many sought-after real-world business benefits. In this customer case, the RCG team was able to deliver a number of those benefits, including:

  • Streamlined data delivery: The services team was able to deliver data to business users in a much more streamlined and reliable manner
  • Analytics sandbox capability: The customer can now work with the entire enterprise data set, using their own cloud sandboxes for advanced analytics and specific use cases
  • New internal and external customer application onboarding: The customer can now create and launch embedded Power BI-based reporting applications much faster, tailored to a business need, from the huge data set created in ADLS Gen 2
  • Faster time to market: IT typically takes months or even years to deliver data to the business. With the cloud migration and automated data delivery to the cloud, the customer can create data solutions much faster
  • Efficiency: The customer has found increased efficiency in requests for new data availability, analytics tools and visualizations

Recovering from a Data Derailment

A large supplier to the food and beverage industry was focused on modernizing their logistics operations.

The first step was the consolidation of data from multiple sources into a common platform so that it could be leveraged within their advanced analytics environment.

During the build-out process, it became clear that the provider they initially selected to complete the build-out was struggling to deliver within the necessary timelines. RCG was called in to assess the project status, along with the architecture and key elements of operating the new data lake environment. Our consulting and engineering team utilized RCG’s proprietary Operational Efficiency framework to get this project back on track and ensure operational efficiency well into the future.

5 Key Components of Optimized, Efficient Data Lake Operations

The process of optimizing data lake operations to achieve peak efficiency requires the completion of five essential steps (a minimal intake-and-routing sketch in Python follows the list):

  1. Establishing a formal intake process
    • Identification of new data sources
      • Log request (source details, access details, acceptable timeframes, destination details)
      • Assess business needs
      • Document business access requirements and security approaches
    • Prioritizing Activities
      • Determine priority (vs. other requests)
  2. Determining ingestion pattern
    • Identification of the ingestion pattern, based on:
      • Source system type – ERP, CRM, Operational, etc.
      • Database/repository type – Oracle, DB2, etc.
      • Table type – structure, keys (primary, foreign)
      • Data types – type, special handling, cleansing/harmonization rules
      • Data classifications – public, private, restricted, confidential, etc.
      • Usage profiles – Applications, Reports, Analytics, Data Science, Direct to users
  3. Developing efficient ingestion processes
    • Determine existing or new workflow
      • A workflow controls the overall processing of data from source to “usable” data lake table
    • Develop new or edit existing workflow
      • Create new workflow processes
      • Run tests of workflow
      • Validate process
      • Operationalize process
  4. Monitoring scheduled workflows
    • Establish or task a support team to monitor workflows during each shift
      • Monitor workflows scheduled to start or end during the shift
      • Report on the status of workflows
      • Hand off exceptions or issues to the next shift for follow-up
  5. Enabling business data usage / self-service capabilities
    • Facilitate access to new data sources
    • Support data catalog, business glossary, and search capabilities
    • Facilitate self-service analytics

While the customer found it necessary to pause for a short period to reevaluate and re-tool the project, they were able to get back on track quickly. They now have the right architecture, components, and processes in place to monetize this investment.

Operations Optimization is Key to Successful Digital Transformation

While Digital Transformation is all about speed, agility, and innovation, it must be architected correctly at the start, or a pause must be taken to reassess.

This process enables scaling and operating efficiently, and it allows the organization to move on to the next element of the transformation with peace of mind that the current operation is performing optimally.

A Foundation for Digital Transformation

Although we don’t want to practice digital incrementalism, digital transformation does require a solid foundation of agility, quality, and speed. And as each of the customer experiences above illustrates, operations optimization is key to successfully navigating the Digital Transformation journey.

A pause to refocus, regroup, and refresh can make all the difference in successfully completing a road trip. Similarly, pausing in the journey to Digital Transformation to ensure the optimization of operations can make all the difference in the successful completion of that journey.
