How Gradient’s Finance LLM, Albatross, Takes Responsible AI Into Consideration

Jan 2, 2024

Gradient Team

With the rise of generative AI in financial services, financial institutions and technology providers must work together to navigate potential challenges and shape what responsible AI should look like to ensure that we are moving forward in the right direction.

Responsible AI in Financial Services

Within financial services, generative AI has been hailed as one of the industry’s most transformative technologies. For professionals in sectors like banking, investments, or insurance, the arrival of gen AI brings an abundance of opportunities and advantages - reshaping how businesses operate today. Yet, alongside this enthusiasm, there’s a palpable level of concern. This caution stems primarily from the sensitive nature of industries like financial services, where this type of technology must be carefully managed to address ethical and data risks.

At Gradient, we believe financial institutions and technology providers must work together to navigate these challenges - starting with a shared understanding of what responsible AI should look like. As part of our development process for Albatross, our proprietary state-of-the-art finance LLM, we took into consideration a range of concerns across the industry and anchored our model around core principles that would resonate with both our customers and regulators.

“In highly regulated industries like financial services, it’s our responsibility as technology providers to help shape the conversation around AI in a constructive manner. Collaboration amongst technology providers, regulators, and those within the industry enables mutual learning and collective problem-solving that will reduce friction for the challenges ahead.” - Chris Chang, Co-Founder and CEO of Gradient

Logic & Reasoning

In AI, you’ll often hear the term “explainability,” which simply refers to the ability to explain how a model or AI system arrives at its decisions or predictions. This is critical in industries like financial services, which rely heavily on supporting data and evidence to guide key day-to-day decisions. When it comes to gen AI, however, this can be challenging: LLMs can deliver definitive answers with unwarranted confidence, even when the model doesn’t actually know the answer to the question. This is often referred to as “hallucination”.

In our development process for Albatross, we conducted extensive training on top of our base model to further embed best practices in logical reasoning, increasing accuracy and reducing hallucination.


  • Reflection and Course Correction: Unlike LLMs that provide an immediate response to a question, Albatross has been trained to reflect on its answer and course-correct its response if needed.

  • Multi-hop Reasoning: Albatross uses multi-hop reasoning, which forces the model to rationalize its decision against multiple pieces of evidence (e.g., 5 passages) before providing a recommendation. However, to create a successful implementation, we needed to up-level the reasoning capabilities even further. To take our model beyond generalized knowledge, we incorporated a math and code data corpus, achieving state-of-the-art (SOTA) performance on complex tasks.

  • Citation and Traceability: Albatross leverages a variety of optimization techniques, including retrieval augmented generation (RAG) built on a highly tuned embeddings model designed specifically for financial services. This enhances the correlation between the data stored in the vector DB and the queries run against it, especially in a financial context. One of the most important benefits of RAG, however, is that it enables Albatross to provide proper citation and attribution, so you know exactly which passage, and where in the source text, a response came from. This is extremely important when addressing explainability (a short sketch of how these pieces fit together follows this list).
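To make these ideas concrete, here is a minimal sketch in Python of how retrieval, citation, and a reflection pass can be wired together. The `retrieve` and `llm` callables, the prompts, and the source-id format are illustrative placeholders chosen for this example - this is not Albatross’s actual pipeline.

```python
# Illustrative sketch: RAG with citations plus a reflection/course-correction pass.
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Passage:
    source_id: str  # e.g. "10-K_2023_p47", so answers can point back to the source
    text: str

def answer_with_citations(question: str,
                          retrieve: Callable[[str, int], List[Passage]],
                          llm: Callable[[str], str],
                          k: int = 5) -> str:
    """Retrieve evidence, answer with citations, then reflect and revise."""
    # Multi-hop style grounding: pull several passages so the model weighs
    # more than one piece of evidence before answering.
    passages = retrieve(question, k)
    context = "\n\n".join(f"[{p.source_id}] {p.text}" for p in passages)

    # First pass: answer only from the retrieved passages and cite each claim.
    draft = llm(
        "Answer the question using only the passages below, and cite the "
        "bracketed source id after every claim.\n\n"
        f"{context}\n\nQuestion: {question}"
    )

    # Reflection and course correction: check the draft against the evidence
    # and revise anything that is unsupported.
    revised = llm(
        "Review the draft answer against the passages. Correct any claim that "
        "is not supported by a cited passage; otherwise return the draft "
        f"unchanged.\n\n{context}\n\nQuestion: {question}\n\nDraft: {draft}"
    )
    return revised
```

In practice, the retriever would query the vector DB backed by the finance-tuned embeddings model, and the second call gives the model a chance to catch unsupported claims before a user ever sees them.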

Privacy

When it comes to large language models (LLMs), further training a model typically requires a vast amount of data so it can learn the skills needed to support your intended tasks. However, whenever data is involved, navigating data management can be challenging, given that the data is often proprietary or tied to an individual’s identity.

Even industry-expert LLMs like Albatross that are highly trained out-of-the-box can benefit immensely from private data provided by the organization using them. This allows Albatross to not only be a subject matter expert in all things finance, but also to understand the ins and outs of your organization. Without that data, the model’s overall efficacy may be compromised, and it won’t reach its full potential. This is no different from taking an investment manager working at Company X and moving them to Company Y. While an investment manager who is highly trained in all aspects of investing won’t lose the skills they possess, they will inevitably become less effective given the intricacies and differences in how the two companies operate. That’s why training an LLM with your organization’s private data helps the model understand how to best perform the intended tasks within your organization.
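As a rough illustration of what training on an organization’s private data can look like, here is a minimal parameter-efficient fine-tuning sketch using open-source tooling (Hugging Face transformers, peft, and datasets). The base model name, the data file, and the hyperparameters are hypothetical placeholders, not Gradient’s actual training recipe.

```python
# Illustrative only: fine-tuning an open-weights base model on a private corpus
# that never leaves the organization's own infrastructure.
from datasets import load_dataset
from peft import LoraConfig, get_peft_model
from transformers import (AutoModelForCausalLM, AutoTokenizer, Trainer,
                          TrainingArguments, DataCollatorForLanguageModeling)

BASE_MODEL = "meta-llama/Llama-2-7b-hf"  # hypothetical open-weights base

tokenizer = AutoTokenizer.from_pretrained(BASE_MODEL)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(BASE_MODEL)

# Wrap the base model with low-rank adapters so only a small set of weights is updated.
model = get_peft_model(model, LoraConfig(r=16, lora_alpha=32, task_type="CAUSAL_LM"))

# Private corpus stays on infrastructure the organization controls.
dataset = load_dataset("json", data_files="private_corpus.jsonl")["train"]
dataset = dataset.map(
    lambda ex: tokenizer(ex["text"], truncation=True, max_length=1024),
    remove_columns=dataset.column_names,
)

Trainer(
    model=model,
    args=TrainingArguments(output_dir="albatross-private-adapter",
                           per_device_train_batch_size=1,
                           num_train_epochs=1),
    train_dataset=dataset,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
).train()
```

Because only small adapter weights are trained and everything runs inside the organization’s own environment, the private corpus never has to leave its governance.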

With Gradient, data privacy is top of mind, which is why financial institutions using Albatross maintain full control over their data. Here’s how we place privacy back into the hands of financial institutions.


  • VPC or On-Premise: Organizations using Gradient can keep their data in the private environment of their choice. Gradient offers dedicated deployments in all major cloud providers and even on-premise, so the data stays with you every step of the way (a brief deployment sketch follows this list).

  • Transparency: Models like Albatross are built on state-of-the-art open source models that give organizations full visibility into the model itself (e.g. the model weights). Unlike closed-source LLMs, you’ll be able to peek under the hood, scrutinize the code, and understand the inner workings of the model, which supports explainability and auditability.

  • Ownership: Unlike other platforms where your data may be compromised or used to further train their own models without permission, Gradient gives organizations full ownership of their data and models in their entirety. This ensures that the data can only be accessed by the financial organization and is only used as intended. Even when the model is further trained on an organization’s private data, that data remains under the organization’s governance at all times.
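As a simple illustration of the deployment model described above, the sketch below loads an open-weights checkpoint from local disk inside your own VPC or on-premise environment, so neither prompts nor documents ever leave it. The model path and prompt are hypothetical.

```python
# Minimal sketch: running an open-weights model entirely inside your own
# environment (VPC or on-premise), with no external API calls.
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_DIR = "/models/albatross"  # hypothetical local path to downloaded weights

# local_files_only ensures nothing is fetched from outside the environment.
tokenizer = AutoTokenizer.from_pretrained(MODEL_DIR, local_files_only=True)
model = AutoModelForCausalLM.from_pretrained(MODEL_DIR, local_files_only=True)

prompt = "Summarize the credit risk section of the attached filing."
inputs = tokenizer(prompt, return_tensors="pt")
output = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```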

Security

In industries like financial services, security is crucial, especially when it comes to the use of generative AI, due to the sensitive nature of the data and transactions involved. Similar to privacy, financial organizations using Gradient benefit from deploying models like Albatross in dedicated environments such as a VPC (with any cloud provider of your choosing) or on-premise - each providing enhanced control and visibility over their own data. This offers a robust level of customization, allowing administrators to tailor security policy to their needs, since the security effort is essentially handled in-house. And because the data stays contained within the organization responsible for it rather than being exposed to another entity, risk drops dramatically. Last but not least, dedicated environments like VPCs typically include stronger authentication, API-enabled protection, additional layers of automation, and the ability to scale effectively.

Moving forward, gen AI is expected to continue evolving with unforeseen capabilities that could significantly impact organizations within financial services. However, the more you understand your model and AI system, the better equipped you’ll be to improve the security measures needed to protect your organization. We look forward to continuing to work alongside financial institutions to address these concerns and keep their data secure.

Regulation & Compliance

Financial services is a highly regulated industry, meaning gen AI must comply with existing regulations around data protection, privacy, and financial transactions. This includes international regulations like GDPR, as well as the financial regulatory frameworks specific to each jurisdiction. If AI is not developed and deployed responsibly, it could escalate existing societal challenges. These challenges won’t be solved by technology providers alone, however, and will require ongoing conversations with regulators to help define the guidelines.

As of today, Gradient has met the necessary requirements across a multitude of industry standards. When it comes to financial services, Gradient has achieved compliance with:


  • SOC 2 Compliant: Gradient’s platform is SOC 2 compliant, meaning its information systems have been evaluated for security, availability, processing integrity, confidentiality, and privacy.

  • GDPR Compliant: Gradient has met all requirements of the General Data Protection Regulation (GDPR), which regulates how organizations collect, handle, and protect the personal data of EU residents.


While new compliance and regulatory standards will continue to arise, our team at Gradient will make sure we keep meeting the highest standards in place and work with regulators to shape the future of AI.

© 2024 Gradient. All rights reserved.
