With the emergence of new generative AI technologies like the large language models (LLMs) behind OpenAI’s ChatGPT, Google’s Bard, Meta’s LLaMa, and Bloomberg’s BloombergGPT, awareness, interest, and adoption of AI use cases across industries are at an all-time high. But in highly regulated industries where these technologies may be prohibited, the focus is less on off-the-shelf generative AI and more on the relationship between an institution’s data and how AI can transform its business.
With AI, financial institutions and insurance companies now have the ability to automate or augment complex decision-making processes, deliver highly personalized client experiences, create individualized customer education materials, and match the appropriate financial and investment products to each customer’s needs. It’s the most revolutionary technological development in at least a generation.
But it’s also fraught with risk. Institutions must design AI systems that are not only transparent, reliable, fair, and accountable, but also comply with privacy and security requirements, as well as align with human values and norms. This June, for example, the European Parliament approved a draft of the AI Act, the world’s first comprehensive regulatory framework for AI, which categorizes AI applications into “banned practices,” “high-risk systems,” and “other AI systems,” with stringent assessment requirements for “high-risk” AI systems. Under the terms of the AI Act, “high-risk” AI systems require a compulsory self-assessment by providers, with certain critical applications (like AI used in medical devices) also subject to review under existing EU regulations.
Given the complexity of the datasets used to train AI systems, and factoring in the known tendency of generative AI systems to invent non-factual information, this is no small task.
There’s also the risk of various forms of data leakage, including intellectual property (IP) as well as personally identifiable information (PII), especially with commercial AI solutions. This puts the onus on institutions to implement robust data encryption standards, process sensitive data locally, automate auditing, and negotiate clear ownership clauses in their service agreements. But these measures alone may not be sufficient to protect proprietary information.
The AI Moment in Context
All extant AI solutions are “narrow” in the sense that they cannot approximate or surpass the cognitive capabilities of human beings: they’re unable to reason, reflect, or imagine, and they aren’t capable of genuine emotional understanding. That said, generative AI and LLMs appear to do all of these things, producing original, “creative” outputs by learning from input data. ChatGPT, Bard, LLaMa, and BloombergGPT all rely on a relatively new neural network architecture, called the transformer, which uses an attention mechanism to weight the relationships between different parts of a sentence or sequence, capturing context across the whole input.
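To make that weighting mechanism a little more concrete, here is a minimal sketch of scaled dot-product attention, the core transformer operation, in plain Python. The vectors below are toy numbers for illustration, not anything drawn from a real model:

```python
import math

def softmax(scores):
    """Normalize raw scores into weights that sum to 1."""
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def attention(query, keys, values):
    """Scaled dot-product attention for a single query vector.

    The output is a weighted blend of the value vectors, where each
    weight reflects how strongly the query matches the corresponding
    key. This is how transformers capture relationships and context
    across different parts of a sequence.
    """
    d = len(query)
    scores = [sum(q * k for q, k in zip(query, key)) / math.sqrt(d) for key in keys]
    weights = softmax(scores)
    return [sum(w * v[i] for w, v in zip(weights, values)) for i in range(len(values[0]))]

# Toy example: the query matches the first key most strongly,
# so the output leans toward the first value vector.
out = attention(query=[1.0, 0.0],
                keys=[[1.0, 0.0], [0.0, 1.0]],
                values=[[10.0, 0.0], [0.0, 10.0]])
```

In a real transformer this runs for every token against every other token, with learned projections producing the queries, keys, and values, but the blending logic is the same.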
The reality of LLMs and other “narrow” AI technologies is that none of them is turn-key. Financial institutions implementing AI must grapple with the challenge of reshaping their core business processes and application workflows, along with the difficulty of transforming their corporate cultures.
Perhaps the biggest challenge of all is that AI solutions—with their complex, opaque models, and their appetite for large, diverse, high-quality datasets—tend to complicate the oversight, management, and assurance processes integral to data management and governance. The way to manage this is to embed data integration, data quality monitoring, and other capabilities into the data platform itself. This streamlines those processes and frees financial firms to focus on operationalizing AI solutions while promoting access to data, maintaining data quality, and ensuring compliance.
The Danger of Black-Box AI Solutions
We believe the best, most pragmatic solution for AI in financial services and insurance is what we call “Trusted AI.” But before more is said about what this is, let’s walk through some of the issues that a financial institution needs to take into account when it considers a commercial AI service.
First, there’s the challenge of protecting one’s business-critical IP—e.g., proprietary data, business strategies, methodologies, etc. Storing or processing this information in an external AI service could inadvertently leak or expose these critical assets.
Second, there’s the problem of safeguarding PII, transaction records and other types of sensitive or confidential data. Even when backed by robust security measures, an external AI service is a tempting, outsized target for potential security breaches: each integration point, data transfer, or externally exposed API becomes a target for malicious actors.
Third, there’s the “black-box” element: viz., the design and behavior of a commercial AI service’s algorithms are usually proprietary, not to mention intentionally obscured. This lack of transparency makes it difficult for financial institutions to thoroughly vet and validate the AI service’s outputs against regulatory standards.
Fourth, AI-powered automation is most transformative when it’s embedded throughout an institution’s business processes and workflows. Because AI is so tightly interpenetrated with core processes, standardizing on a commercial AI service could lead to vendor lock-in, stifling innovation, placing significant power in the hands of a single vendor, constraining the institution’s ability to negotiate terms and prices—and ceding control over future decision-making.
Introducing “Trusted AI”
Enter “Trusted AI.” Trusted AI is the ethos behind enterprise AI across the organization, including generative AI and LLM capabilities. Models are trained on a financial institution’s secure data and deployed and run internally, on the institution’s own infrastructure, or externally, in virtual private cloud (VPC) infrastructure, in the case of non-sensitive workloads. This ensures greater control and flexibility, safeguards the integrity of proprietary assets like IP, and provides enhanced protection for sensitive data, all while enforcing the rigorous security and compliance standards unique to the financial sector. And because an open-source AI model’s code is public, its inputs and outputs are understandable and explainable, ensuring transparency.
While it’s true that commercial providers currently dominate the AI space, the history of open-source software suggests this dominance will diminish—in this case, quite rapidly. Open-source AI isn’t just catching up to OpenAI, Google, Meta, and Microsoft: mere months after ChatGPT’s debut, open-source AI models are nearly as capable, in addition to being more customizable, affordable, and transparent. Just like the open-source operating system, database, and machine learning (ML) technologies that preceded them, open-source AI models are narrowing the gap with proprietary alternatives at a remarkable pace.
There’s one more thing. The foundation of Trusted AI is a hybrid data platform that is able to present a unified view of the data that’s distributed across a financial institution’s on-premises and multi-cloud environments. This platform uses AI and automation to abstract the complexity of data access, movement, integration, and analysis. By embedding intelligence at the data platform level, it becomes possible to accelerate the pace at which financial institutions can operationalize AI solutions.
The combination of built-in data management and governance capabilities provides a solid foundation for firms to embed Trusted AI across their operations. In this blog series, we’ll dive into the advantages of Trusted AI and the broader ramifications of AI adoption, exploring how financial institutions can bootstrap and evolve their AI strategies, from initial steps to what mature AI adoption looks like.
Let’s kick things off with a proposed Maturity Model for AI in Financial Services:
An AI Maturity Model for Financial Services
1- Foundational AI Integration
At this foundational stage, financial institutions begin by prioritizing open-source AI tools, understanding that commercial and cloud solutions can expose them to risks. The foundation of this stage is a hybrid data platform that’s capable of seamlessly integrating data across the institution’s landscape, while automating or accelerating common tasks.
- Deploy a hybrid data platform. Leverage open-source technologies on a hybrid data platform that automates or accelerates tasks like data ingestion, transformation, and schema design, ensuring that sensitive data and IP remain secure wherever the data is located.
- Automate basic processes. Start with the low-hanging fruit, using open-source ML/AI to automate basic tasks like transaction classification, basic fraud detection, daily reconciliation processes, and first-level customer support responses.
- Design chatbots and digital assistants. Leverage open-source LLMs to deploy 24/7 customer support bots built on open LLM frameworks.
- Train and upskill employees. Initiate basic AI training programs for staff. Develop workshops, e-learning modules, and hands-on sessions designed to familiarize employees with the fundamentals of AI and its applications within the finance sector.
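To make the transaction-classification step above concrete, here is a minimal sketch of a tiny naive-Bayes-style word model in plain Python. The labeled history and categories are hypothetical; a real deployment would train an open-source ML library (such as scikit-learn) on far more data:

```python
import math
from collections import Counter, defaultdict

# Hypothetical labeled transaction memos; purely illustrative.
HISTORY = [
    ("starbucks coffee", "dining"),
    ("cafe latte downtown", "dining"),
    ("shell gas station", "transport"),
    ("uber trip", "transport"),
    ("grocery market", "groceries"),
    ("whole foods market", "groceries"),
]

def train(history):
    """Count word frequencies per category (a naive-Bayes-style model)."""
    word_counts = defaultdict(Counter)
    cat_counts = Counter()
    for text, cat in history:
        cat_counts[cat] += 1
        word_counts[cat].update(text.split())
    return word_counts, cat_counts

def classify(text, word_counts, cat_counts):
    """Pick the category whose vocabulary best matches the memo."""
    best, best_score = None, -math.inf
    total = sum(cat_counts.values())
    for cat, n in cat_counts.items():
        score = math.log(n / total)  # category prior
        vocab = sum(word_counts[cat].values())
        for w in text.lower().split():
            # Add-one smoothing so unseen words don't zero out a category
            score += math.log((word_counts[cat][w] + 1) / (vocab + 1))
        if score > best_score:
            best, best_score = cat, score
    return best

word_counts, cat_counts = train(HISTORY)
```

Calling `classify("coffee shop", word_counts, cat_counts)` picks the category whose historical memos share the most words with the new one; the same pattern generalizes to any label set an institution defines.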
2- Intermediate AI Integration
At this level, financial institutions and insurance companies build on top of a foundational hybrid data platform to tap deeper into AI’s potential, focusing on enhancing the user experience, promoting data-driven decision-making, and implementing robust, layered cybersecurity defenses.
- Automate loan and credit decisioning. Go beyond traditional credit scoring, using AI to examine customer behaviors to predict creditworthiness and flag likely defaults. Models must be fair and free of bias to ensure that AI systems don’t inadvertently discriminate.
- Enhance the Customer Experience. Accelerate and/or automate routine processes: verify KYC, speed up loan or underwriting approvals, and ensure error-free account setups.
- Use AI to automate financial crime prevention. Create basic AI systems to detect potential fraudulent activities, monitor online financial activities, and discover system loopholes.
- Systematize governance. Leverage the hybrid data platform’s built-in capabilities to automatically monitor data quality levels and align with regulatory standards. Formalize rules, standards, and best practices that guide how data is to be managed and used.
- Create core feedback mechanisms. Establish initial channels for user and employee feedback to refine AI applications. For example, implement embedded feedback options in AI-driven apps, then analyze responses using open-source natural language processing (NLP) tools for continuous refinement.
- Facilitate communication between stakeholders. Enable reporting to internal teams about the statuses of AI projects. Create dashboards that highlight project milestones, challenges, and advancements, ensuring stakeholders stay informed and provide input.
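The fraud-detection work described above can start as simply as an outlier rule over an account’s own history. The sketch below flags a transaction whose amount deviates sharply from past behavior (a z-score test); the history, amounts, and threshold are illustrative, and production systems would use trained models over many more features:

```python
import statistics

def is_suspicious(history, amount, threshold=3.0):
    """Flag a transaction whose amount is an outlier relative to the
    account's own history. A simple z-score rule: how many standard
    deviations does this amount sit from the historical mean?"""
    mean = statistics.mean(history)
    stdev = statistics.pstdev(history)
    if stdev == 0:
        # No variation in history: anything different is anomalous.
        return amount != mean
    return abs(amount - mean) / stdev > threshold

# Hypothetical account history of daily card spend, in dollars.
history = [20.0, 25.0, 22.0, 19.0, 21.0, 23.0]
```

A $500 charge against this history scores hundreds of standard deviations out and gets flagged, while a $24 charge passes; real systems layer many such signals (merchant, geography, velocity) before alerting.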
3- Advanced AI Integration
At this stage of adoption, financial institutions and insurance companies engage more intensively with AI and its capabilities, extracting more valuable insights from data. The hybrid platform’s automation capabilities are crucial in this stage, allowing for more rapid adaptation and richer analytics.
- Push predictive analytics to optimize operations and enhance profitability. Leverage AI to analyze previously untapped data sources, such as social media sentiment, geo-location data, and customer feedback. Glean insights into customer behavior and market trends that also correspond to overlooked sales opportunities. Identify activities or factors that directly impact revenue and/or earnings, e.g., loan default rates or customer retention.
- Simplify regulatory compliance. Use NLP to analyze and break down regulatory documents, translating complex legal jargon into actionable tasks.
- AI-ify risk management. Leverage ML/AI to refine risk models, incorporating data from diverse sources, and predicting outcomes based on market sentiment, climate data, etc.
- Even more training and upskilling. Introduce advanced AI training and programs, including hands-on projects that simulate real-world financial scenarios, or mentorship programs hosted by AI experts. Offer opportunities for employees to specialize in specific AI domains, such as fraud detection or predictive analytics, tailored to the institution’s needs.
- Plan to scale for the future. Prepare for higher AI demands, assessing the state of the institution’s infrastructure capacity while taking into account future data processing needs.
- Formalize ethics and bias testing. Develop and implement automated tests to identify biases in AI models, ensuring that models align with ethical standards and fairness criteria. Third-party audits or reviews add credibility to claims of fairness and transparency.
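The automated bias tests mentioned above often begin with simple statistical checks. The sketch below computes a demographic-parity gap, the spread in approval rates across applicant groups, for hypothetical model outputs; the groups, outcomes, and tolerance are illustrative only:

```python
def approval_rate(decisions):
    """Fraction of decisions that were approvals (1 = approved)."""
    return sum(decisions) / len(decisions)

def demographic_parity_gap(decisions_by_group):
    """Largest difference in approval rates across groups.

    A common first-pass fairness check: a large gap suggests the
    model treats otherwise comparable groups very differently.
    """
    rates = [approval_rate(d) for d in decisions_by_group.values()]
    return max(rates) - min(rates)

# Hypothetical model outputs for two applicant groups.
outcomes = {
    "group_a": [1, 1, 0, 1, 1, 0, 1, 1],  # 75% approved
    "group_b": [1, 0, 0, 1, 0, 0, 1, 0],  # 37.5% approved
}
gap = demographic_parity_gap(outcomes)
```

An automated test would fail the build when the gap exceeds a chosen tolerance (say, 0.2), prompting review of the model and its training data; richer metrics (equalized odds, calibration) follow the same pattern.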
4- Transformative AI Integration
With a strong open-source foundation and a hybrid data platform fully operational, AI becomes deeply ingrained in an institution’s core processes. Robust security mechanisms, such as identity and access management (IAM) and role-based access control (RBAC), ensure that only authorized individuals can access sensitive AI models and data.
- Track market trends. Use advanced analytics to process vast data volumes, forecasting market trends, currency movements, stock performance, and investment timing.
- Step up to comprehensive cybersecurity. Invest in AI-powered intrusion detection systems (IDS) or security information and event management (SIEM) systems. Use these tools to continuously scan transactional data, user activities, system logs, etc., ensuring a rapid response to data breaches and building trust with stakeholders and customers.
- Transform the Customer Experience. Create highly personalized user experiences, using AI to analyze customer behavior (transaction histories, browsing patterns, and service inquiries) to offer tailored financial advice, product recommendations, and alerts, enhancing the user experience and deepening client engagement.
- Process Automation 2.0. Go beyond basic tasks, automating complex processes and workflows. By now, institutions should achieve significant gains in operational efficiency.
- Create integrated feedback mechanisms. Establish iterative loops with stakeholders for AI model refinement. By collecting and analyzing feedback, institutions can incrementally improve their AI systems, ensuring they remain accurate, relevant, and user-friendly.
- Supercharge communication. Regularly communicate AI strategies, milestones, and future goals not just to stakeholders, but to the organization as a whole.
5- Fully Mature AI Integration
At full maturity, financial institutions and insurance companies realize the full power of Trusted AI: built on a hybrid data platform, it accelerates AI operationalization and is embedded across all operations.
- Step up to advanced AI oversight. Benchmark against global best practices and ensure that AI ethics are deeply integrated into all AI initiatives, with robust mechanisms for ongoing review, stakeholder feedback, and rapid adaptation to new ethical challenges. Collaborate with external ethical boards to reinforce the commitment to ethical AI.
- Develop next-gen personalized financial products. Leverage AI to design dynamic financial solutions, like AI-optimized savings plans, predictive investment portfolios, and personalized insurance offerings that adjust in real-time to each customer’s financial situation.
- Practice real-time risk management. Use AI to assess risk in real-time, adjusting portfolios and investment strategies automatically based on global events, market fluctuations, etc.
- Automate wealth management. Offer advanced robo-advisory services, using AI solutions to optimize asset allocation, tax strategies, retirement planning, and other practices.
- Anticipate regulatory changes. Tap the power of AI to model the potential impact of regulatory changes, ensuring that you’re one step ahead in compliance.
- Explore cross-industry integration. Use AI to identify opportunities to partner with retail, real estate, health and other industries to develop and market integrated financial solutions.
- Identify opportunities for environmental, social, governance (ESG) initiatives. AI can assist in assessing and investing in sustainable projects, a growing trend in the finance sector.
Cloudera is the ideal hybrid data platform for financial institutions and insurance companies seeking to adopt or advance AI initiatives, thanks to its unique combination of robust data management capabilities and advanced analytics tools. With a proven track record in handling large-scale data infrastructures, Cloudera offers the reliability and security necessary for the sensitive and complex data environments in which financial institutions operate. The platform’s ability to seamlessly integrate and process diverse data sources, combined with its comprehensive suite of machine learning and AI tools, empowers institutions to harness generative AI for predictive modeling, risk assessment, fraud detection, and personalized customer experiences. With Cloudera, financial institutions can unlock valuable insights from their data while adhering to strict regulatory standards, gaining a competitive edge in the rapidly evolving landscape of AI-driven finance.
Find out more about CDP, modern data architectures and AI here.