Why Financial Services Must Develop Deeper Customer Insights and Connections
Know Your Customer
When you watch the classic holiday film It’s a Wonderful Life, you get a very real sense of customer intimacy. The Bailey Building and Loan was a small company that served a tiny local market. The owners of that mortgage company shared the small community of Bedford Falls with their customers, regularly interacting with them outside of their business relationship. Though both the town and the little mortgage company were fictional, the relationship between financial institution and clientele, as portrayed in the movie, was quite realistic for the time.
But times change. As the decades have passed, the financial services industry has grown far more sophisticated and technologically advanced. And along the way, one of the most important elements of the financial services business has been lost: knowing your customer.
The Great Irony of IT Advancement
Advancements in this field have enabled financial organizations to grow much faster than previously possible. Financial institutions have been able to expand their territory, increase their product offerings, and boost productivity, all by breaking customer relationships down into discrete processes.
To support this “productivity gain,” numerous IT systems were developed. And in support of those many disparate systems, the customer relationship morphed from a whole into many fractions, with each fraction housed in a siloed system focused on a single task within the overall relationship.
The advancement of information technology has enabled the gathering and storage of more customer data than previously possible. Simultaneously, and rather ironically, the siloed storage and usage of this data has also resulted in financial companies knowing less about their customer, as an entity, than ever before.
Technology Provides a Solution
Until very recently, accessing siloed customer data to create a whole and comprehensive view of individual customers has been difficult and expensive. Data storage was costly, programming expensive, and, over time, the quality of the data in those separate silos inevitably degraded. But a fractured, less-than-whole view of the customer is no longer acceptable. The intense competitiveness of the financial services market, coupled with increased regulation, has created mounting pressure for financial services companies to more intimately understand their customers. And technology now provides a means of doing so.
Recent technological advancements have enabled importing siloed customer data onto a common platform, facilitating a reconstructed, whole view of the client — and very cost effectively.
But traditional, long-established financial services companies must act quickly to rebuild the quality of their customer relationships. Emerging digital natives are leveraging this same technology, building their platforms from day one to facilitate a view of the whole customer relationship, to enable fast identification of fraud, and to simplify the management of internal and external compliance requirements. These competitors, combined with changing consumer expectations, pose a burgeoning and substantial threat to traditional financial services business models.
Initial Benefits of an Enhanced Customer Relationship
For most companies — based upon our experience in helping financial services companies cultivate their customer relationships — the initial benefits typically include the following:
- Reevaluating Credit: Using traditional and non-traditional sources of information to identify behavioral patterns and more accurately determine creditworthiness
- Speeding Up the Loan Process: Leveraging more efficient use of internal and external data, coupled with algorithms and AI, to reduce the time to loan approval
- Enhancing Compliance: Using technology to automate auditing, ensure compliance, and produce required documentation more accurately and quickly
- Identifying Cross-Sell/Upsell Opportunities: Combining a more intimate knowledge of your customer with technology to trigger outreach for new revenue opportunities
How Do You Start?
The optimal first step is to create a digital road map for the future, identifying the current and future needs of the organization based on internal end user input and external research on industry trends and threats.
Ideally, this first step will lead to a discussion of the architecture needed to support both near-term and future needs. While it may be tempting to seek a single software solution that can simply be plugged in to solve the problem, attempting to do so will only exacerbate the issue. Instead, rely upon the series of interviews and strategy sessions (sparked by step one) to align Business and IT strategies, revealing a clear and cost-effective path to the desired business outcomes.
Constructing a Unified View of the Customer
The first implementation step is re-unifying the data to obtain a single view of your customer. This goal can be achieved by implementing Master Data Management. The key activities and deliverables of this effort should be as follows:
- Analysis of the current customer data environment and business needs to identify gaps and opportunities
- Definition of the future state environment
- Identification of value delivery initiatives, desired business outcomes, and associated effort estimates
- Program schedule and resource requirements
- Detailed, actionable road map and high-level initiative implementation plans
- High-level architecture diagrams depicting the evolution from the current state to the desired target state
- List of initiatives required to transition from the current state to the desired target state
- Gantt chart that depicts the sequence and duration of the identified initiatives
In this initial step, an overall architecture is developed utilizing technology, tools, and frameworks based on information gathered during road map development.
Big Data Infrastructure
The technological breakthroughs at the core of this ability to gain greater insight into customers have been collectively called Big Data. This technology continues to evolve, and it is important for the technologist to understand its evolutionary path.
At its core this technology enables:
- Massive storage capability (Data Lakes) at economical costs
- Different types of data, organized in different ways
- Faster processing times
- Ability to apply mathematical models (algorithms) as well as Machine Learning and Artificial Intelligence to improve decision-making and formulate recommended actions (a toy sketch follows this list)
- Data quality management
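
To make the fourth capability concrete, here is a toy sketch of applying an algorithm to historical data to support a decision, in the spirit of the credit reevaluation benefit described earlier. It uses scikit-learn, and the features, labels, and data are illustrative assumptions, not a production credit model.

```python
# A toy sketch: fit a simple model on historical outcomes, then score a new
# applicant. scikit-learn and all feature/label choices are assumptions.
from sklearn.linear_model import LogisticRegression  # pip install scikit-learn

# Features: [monthly_income_k, utilization_pct, late_payments]; label 1 = repaid
X = [[6.0, 30, 0], [3.5, 85, 4], [9.0, 10, 0], [2.0, 95, 6]]
y = [1, 0, 1, 0]

model = LogisticRegression().fit(X, y)

# Probability of repayment for a new applicant feeds the credit decision.
print(model.predict_proba([[5.0, 40, 1]])[0][1])
```

In practice the same pattern scales to millions of records on the platform, with the model retrained as new behavioral data lands in the lake.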
The most common distribution for big data platforms over the past five years has been Hadoop — although several emerging open source alternatives are based primarily on cloud platforms.
Our experience with Big Data engineering, coupled with our understanding of your road map, will help form our recommendation of the distribution system that will be most appropriate for your organization. Once the distribution system has been selected, the next step is to select an infrastructure approach. There are essentially three approaches that can be used, each with its own pros and cons:
- IaaS – Infrastructure as a Service: A cloud-based service where the capital investment is made by others, but your organization is responsible for many of the operational efforts
- PaaS – Platform as a Service: Also cloud-based and requiring no capital investment, but more of a full-service model, with the provider handling most operational efforts
- On-Premise (or a traditional owned data center approach): You own the equipment, license the software, and are responsible for all operational aspects of the solution
Filling & Managing the Lake
Once you have selected your Big Data infrastructure, there are several other considerations for bringing data from disparate internal systems — as well as from possible external sources — and securing the environment. Each of these additional considerations is discussed in the following subsections.
Data Ingestion
Integrating data from Source Systems is a key starting point in creating a successful Big Data & Advanced Analytics platform. Ingestion approaches can be categorized by how quickly data must move from source to destination (a minimal real-time sketch follows this list):
- Batch data ingestion: Used if the data must be ingested within hours
- Near real-time data ingestion: Used if the data must be ingested within minutes and less than an hour
- Real-time data ingestion: Used if the data must be ingested within seconds or less
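
As an illustration of the real-time approach, the following is a minimal sketch using kafka-python to publish source-system events for the lake to consume. The broker address, topic name, and record shape are illustrative assumptions.

```python
# A minimal sketch of real-time ingestion with kafka-python.
import json
from kafka import KafkaProducer  # pip install kafka-python

producer = KafkaProducer(
    bootstrap_servers="localhost:9092",  # assumed broker address
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)

def ingest_transaction(txn: dict) -> None:
    """Publish one source-system event onto a hypothetical topic."""
    producer.send("customer-transactions", txn)

ingest_transaction({"customer_id": "C123", "amount": 42.50, "type": "debit"})
producer.flush()  # block until the event is handed to the broker
```

Batch ingestion, by contrast, would typically land files in the lake on a schedule rather than event by event.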
Data Processing and ETL/ELT
Once data is ingested into the Data Hub, it must be curated into additional layers of the Data Hub by performing transforms and loads. The data transforms should be modular and reusable so that business rules can be externalized and the flows can be generically orchestrated and reused (a sketch of this pattern follows the list).
- Batch processing: Used if the data can be processed within hours
- Near real-time processing: Used if the data should be processed within minutes and less than an hour
- Real-time stream processing: Used if the data must be processed within seconds or less
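
The following is a minimal PySpark sketch of a modular batch transform with an externalized business rule. PySpark itself, the lake paths, the column names, and the rule are all illustrative assumptions.

```python
# A minimal batch transform: raw layer in, curated layer out.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("curate-customers").getOrCreate()

# Business rule kept outside the transform logic so it can change
# without redeploying code (would normally come from a config store).
RULES = {"min_balance_for_premium": 10_000}

raw = spark.read.parquet("/lake/raw/customers")  # hypothetical raw layer

curated = (
    raw.withColumn("full_name", F.concat_ws(" ", "first_name", "last_name"))
       .withColumn(
           "segment",
           F.when(F.col("balance") >= RULES["min_balance_for_premium"], "premium")
            .otherwise("standard"),
       )
)

curated.write.mode("overwrite").parquet("/lake/curated/customers")
```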
Data Visualization, Analytics, and Dashboards
Hadoop requires an analytics and data visualization tool that can connect directly to HDFS as well as to SQL-on-Hadoop technologies. A key consideration is identifying a tool that enables user self-service while allowing users to interact with data visualizations; the right selection will deliver speed-of-thought performance on Hadoop. It is also important to combine and integrate data from multiple sources so that it appears as a single source, and to correlate real-time data with historical data.
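
As one example of that SQL-on-Hadoop connectivity, the following sketch queries Hive via PyHive and loads the result into pandas for a dashboard tool to consume. The host, tables, and columns are illustrative assumptions.

```python
# A minimal sketch of feeding a dashboard from a SQL-on-Hadoop engine.
import pandas as pd
from pyhive import hive  # pip install pyhive

conn = hive.Connection(host="hive.example.internal", port=10000)  # assumed

# Correlate recent (streamed) activity with historical balances in one view.
df = pd.read_sql(
    """
    SELECT c.customer_id, c.segment, t.txn_count_24h, c.lifetime_balance
    FROM curated_customers c
    JOIN recent_activity t ON c.customer_id = t.customer_id
    """,
    conn,
)
print(df.head())  # hand off to the visualization tool of choice
```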
Mastering Data Tools and Frameworks
Mastering customer or product data involves matching, merging, and deduping records at scale (millions); addressing data quality, integrity, and consistency challenges; and enabling data lineage and governance. Mastering can be accomplished using custom-built APIs or third-party algorithms.
RCG recommends utilizing Precisely data sets for name and address validation, in combination with a custom match, merge, and dedupe algorithm utilizing Trifacta to create visual rules around data that can be externalized / templatized.
Trifacta is a best-of-breed commercial data prep and data wrangling tool that significantly accelerates data profiling and business transformation rules definition. It enables user self-service and can be leveraged for data quality and data transformation. Custom algorithms (in this case Match, Merge, Dedupe) can be defined as “Recipes” and operationalized through Talend.
Precisely provides industry-leading global name and address data sets, while Trifacta excels in visually sampling and preparing data and creating rules (“Wrangling”) associated with the data. These rules can be generalized and tweaked during golden record creation and curation.
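
To make the match, merge, and dedupe concept concrete, here is a toy sketch using only the Python standard library. It is not how Trifacta or Precisely work internally; the fields, similarity measure, and threshold are illustrative assumptions.

```python
# A toy match-merge-dedupe pass that collapses near-duplicate customer
# records into golden records while preserving source lineage.
from difflib import SequenceMatcher

def similar(a: str, b: str) -> float:
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def merge_records(records: list[dict], threshold: float = 0.8) -> list[dict]:
    golden: list[dict] = []
    for rec in records:
        for g in golden:
            if (similar(rec["name"], g["name"]) >= threshold
                    and similar(rec["address"], g["address"]) >= threshold):
                g["source_ids"].append(rec["id"])  # merge: keep lineage
                break
        else:
            golden.append({**rec, "source_ids": [rec["id"]]})
    return golden

dupes = [
    {"id": 1, "name": "George Bailey", "address": "320 Sycamore St"},
    {"id": 2, "name": "Geo. Bailey",   "address": "320 Sycamore Street"},
]
# The two records above collapse into one golden record.
print(merge_records(dupes))
```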
ACID Compliance Layer (Hot Layer)
A key requirement of customer master data is to utilize an operational data store that is ACID compliant. The ACID compliant layer should be flexible enough for “Party” data modelling and to enable flexible, generalized, and dynamic relationships.
Some key considerations in choosing the technology stack for the “Hot Layer” include:
- Real-time updates to the customer master (Create, Update, Delete)
- Near real-time reads of those updates (Read)
- Enriched relationship management supporting all types of relations
- Support for a hierarchical model encompassing:
  - Customer
  - Party
  - Entitlements
  - Organizations
  - Products and Services
- Improved householding logic and other relationship analytics
- Access and maintenance of golden customer records
- Better data standardization/cleansing
- Better match, merge, and dedupe
Most of these requirements are fulfilled by graph technology. RCG recommends using a native graph technology like Neo4j for this purpose.
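
As a hedged illustration, the following sketch models a simple Party-to-Account relationship in Neo4j using the official Python driver. The connection details, labels, and relationship types are illustrative assumptions.

```python
# A minimal "Party" data-modelling sketch against Neo4j.
from neo4j import GraphDatabase  # pip install neo4j

driver = GraphDatabase.driver("bolt://localhost:7687",
                              auth=("neo4j", "secret"))  # assumed endpoint

def link_party_to_account(tx, party_id: str, account_id: str):
    # MERGE is idempotent: re-running an update does not duplicate nodes,
    # which supports the real-time Create/Update requirement above.
    tx.run(
        """
        MERGE (p:Party {id: $party_id})
        MERGE (a:Account {id: $account_id})
        MERGE (p)-[:HOLDS]->(a)
        """,
        party_id=party_id, account_id=account_id,
    )

with driver.session() as session:
    session.execute_write(link_party_to_account, "P-001", "A-042")

    # Householding-style traversal: all accounts reachable from one party.
    result = session.run(
        "MATCH (p:Party {id: $id})-[:HOLDS]->(a:Account) RETURN a.id",
        id="P-001",
    )
    print([record["a.id"] for record in result])

driver.close()
```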
Go Back to the Future with Your Customer Relationships
Thanks to advancing technology, it's now possible to create and maintain customer relationships that are just as intimate and dynamic as those George Bailey might have maintained with any of his customers. In essence, modern technology enables a return to the company-customer relationship that many of us recall fondly — if only through delightful fictional portrayals of those relationships.
Financial services companies that work to enable a warmer, more intimate customer relationship stand to profit mightily from an impressive range of benefits. And those that fail to do so will find themselves increasingly vulnerable to upstart competitors that, from their very beginning, leverage the intimate customer relationships that modern technology enables. Achieving these valuable, rich customer relationships through a modern data platform — a platform that RCG has labelled as Banking Customer 360° — involves several key steps:
- Knowing existing customer data
- Understanding business drivers related to customer interactions
- Understanding business processes impacting customer data
- Ultimately, planning an implementation road map that incrementally delivers business value while utilizing modern data platforms like Hadoop, graph databases, Kafka, microservices, and others
Is this the way that George Bailey would have fostered his deep and meaningful customer relationships?
Certainly not.
But then, he didn’t have access to the wonderful, modern technologies that would enable him to expand the depth and breadth of his customer relationships to a virtually unlimited scale, extending far beyond the geographic bounds of Bedford Falls.
You do.