In the rapidly evolving landscape of advanced AI technologies, large language models (LLMs) have emerged as potent transformative tools. Their potential to impact industries, from healthcare to education, is vast, and they are especially primed to revolutionize Wholesale Banking & Capital Markets (WBCM). Amid the storm, one monolith stands tall: OpenAI’s GPT. Yet some in the WBCM sector remain skeptical.
The Achilles’ Heel of GPT
When OpenAI released its chat platform, it brought in over 1 million users within five days of launching and simultaneously drew the world’s attention to the latest generation of truly massive LLMs. But in the effort to make the barrier to entry extremely low, the model was tuned to answer a little too “creatively,” which can lead to factually incorrect answers. This has, understandably, created a great deal of confusion about what GPT, or any large, general-purpose generative AI, can do without producing errors and presenting inaccurate information as if it were fact.
The Power of LLMs
To understand the growing interest in LLMs within WBCM, one must first consider the limitations of traditional Natural Language Processing (NLP) in this industry. NLP engines, while adept at understanding and generating human language, often falter when processing context and handling ambiguity – two vital elements in the financial world.
LLMs excel here, offering a nuanced understanding of context, subtleties, and sentiment, making them invaluable in the WBCM sector. Their ability to manage unstructured data, predict market trends, detect fraudulent activities, and provide personalized customer experiences sets them apart from traditional NLP solutions.
Generative AI, which is AI designed specifically to generate information based on context by predicting the next best token (part of a word) in a stream of tokens, was initially thought to only be good at generating text. Even within that narrow use case, some maintain it really is only good at generating a first rough draft of the text that then needs to be edited and fact-checked by a human.
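To make “predicting the next best token” concrete, here is a minimal sketch: a toy bigram model that greedily picks the most probable continuation from a hand-built frequency table. The vocabulary and counts are invented for illustration; a real LLM predicts over tens of thousands of sub-word tokens with a deep neural network, but the interface, given the tokens so far, score the candidates for the next one, is the same.

```python
from collections import Counter

# Hand-built bigram counts standing in for a trained model's learned
# probabilities. The words and numbers here are purely illustrative.
BIGRAM_COUNTS = {
    "interest": Counter({"rates": 8, "rate": 3, "in": 1}),
    "rates": Counter({"rose": 5, "fell": 4, "remained": 2}),
}

def next_token(token: str) -> str:
    """Greedy decoding: return the most frequent continuation of `token`."""
    candidates = BIGRAM_COUNTS.get(token)
    if not candidates:
        return "<end>"
    return candidates.most_common(1)[0][0]

def generate(start: str, max_tokens: int = 5) -> list[str]:
    """Produce a stream of tokens by repeatedly predicting the next one."""
    tokens = [start]
    for _ in range(max_tokens):
        nxt = next_token(tokens[-1])
        if nxt == "<end>":
            break
        tokens.append(nxt)
    return tokens
```

Greedy decoding always takes the single most likely token; production chat models instead sample from the distribution, which is exactly the “creative” behavior that makes their output fluent but occasionally factually wrong.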
However, a use case that doesn’t get as much attention is the ability to reason via a “Chain of Thought”. First introduced in a paper in early 2022 using GPT-3, this technique has an LLM reason with itself by generating intermediate steps that bring it closer and closer to the desired output. The results are striking: a model with moderate performance can achieve state-of-the-art accuracy on benchmarks, in some cases even surpassing a fine-tuned GPT-3.
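The prompting side of this technique is simple enough to sketch. The helper below wraps a question with a worked few-shot exemplar and a “think step by step” cue, in the style of the 2022 chain-of-thought paper; the exemplar text and function name are invented for illustration, and the returned string would be sent to whatever LLM API you use.

```python
def chain_of_thought_prompt(question: str) -> str:
    """Wrap a question so the model reasons step by step before answering.

    The exemplar below is an invented few-shot example. Showing the model a
    worked chain of reasoning encourages it to emit its own intermediate
    steps, which is what lifts accuracy on multi-step problems.
    """
    exemplar = (
        "Q: A bond pays a 5% annual coupon on a $1,000 face value. "
        "What is the coupon payment?\n"
        "A: Let's think step by step. The coupon rate is 5%. "
        "5% of $1,000 is $50. The answer is $50.\n\n"
    )
    return exemplar + f"Q: {question}\nA: Let's think step by step."
```

The only change versus a plain prompt is the exemplar and the trailing cue, yet on arithmetic- and logic-heavy benchmarks that small change accounts for the accuracy gains described above.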
GPT, while influential, isn’t the be-all and end-all of LLMs. There are a number of openly available alternatives that come with generic pre-training, such as LLaMA or any of the open-source models hosted on HuggingFace, so it is possible to stand up your own LLM and work with it much as you would with GPT. These might be good options to explore if there are data privacy concerns around using GPT’s API, or if API call volume grows to the point where running your own model instance becomes more cost-effective.
There are also other LLMs that operate a bit differently and are useful for different problems. BERT, for example, offers a bi-directional understanding of text, providing richer context comprehension compared to GPT’s unidirectional processing. This generally makes it better at tasks such as sentiment analysis and named entity recognition. BERT’s maturity, having been available since 2018, also means a number of explainability methods have been developed for it that do not yet exist for GPT and GPT-like models, owing to their differing technical approaches.
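The bidirectional/unidirectional distinction can be sketched in a few lines. The function below (a conceptual toy, not any library’s API) returns which positions a token can attend to under each scheme; the example sentence is invented to show why seeing later words helps with entity tasks.

```python
def visible_context(position: int, length: int, bidirectional: bool) -> list[int]:
    """Indices a token at `position` may attend to in a sequence of `length`.

    A causal (GPT-style) decoder sees only earlier positions plus itself;
    a bidirectional (BERT-style) encoder sees the whole sequence, which is
    why it can use the words *after* an entity when classifying it.
    """
    if bidirectional:
        return list(range(length))
    return list(range(position + 1))

sentence = ["The", "bank", "raised", "rates"]
# Disambiguating "bank" (financial institution vs riverbank) benefits from
# the later words "raised rates":
causal = visible_context(1, len(sentence), bidirectional=False)
bert_like = visible_context(1, len(sentence), bidirectional=True)
```

Under causal attention, “bank” sees only “The bank”; under bidirectional attention it sees all four words, which is the structural reason BERT-style encoders tend to do better on sentiment and entity tasks.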
Learn more about how we deliver transformation in Wholesale Capital Markets, combining AI & Machine Learning with Business Process Excellence.
The Emergence of LangChain
Developed by the open-source community, LangChain is one of the most advanced Python libraries for interacting with the new generation of LLMs. It takes the idea of “Chain of Thought” prompting and packages it in an easy-to-use yet customizable library that handles a large portion of the API plumbing for you. It also brings in other LLM architectural patterns that have recently emerged, such as embedding documents for cosine-similarity search so that you can pull in the additional context the model needs to answer a question. This is a major advantage over a model such as BERT, where the only viable route to similar output is an extensive fine-tuning process, made even more time-consuming by the likely need to label custom training data. Because these models are general purpose and excel at summarization, they are not only able to take context data and synthesize a correct answer from it, they can even outperform models fine-tuned for that task.
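The retrieval pattern described above, embed your documents, then fetch the most similar one as context for the LLM, reduces to a cosine-similarity lookup. Here is a minimal sketch with three invented document titles and toy 3-dimensional vectors; real embedding models return vectors with hundreds or thousands of dimensions, but the retrieval logic is identical.

```python
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Cosine of the angle between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

# Toy "embeddings" for illustration only; in practice these come from an
# embedding model and are stored in a vector database.
documents = {
    "Q2 earnings report":      [0.9, 0.1, 0.0],
    "FX settlement procedure": [0.1, 0.8, 0.2],
    "Fraud alert playbook":    [0.0, 0.2, 0.9],
}

def retrieve(query_embedding: list[float]) -> str:
    """Return the stored document most similar to the query embedding."""
    return max(documents, key=lambda d: cosine_similarity(query_embedding, documents[d]))
```

The retrieved document text is then pasted into the prompt as context, which is how a general-purpose model can answer domain questions without any fine-tuning.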
For example, FinBERT is a model specifically tuned on financial services data to improve how it handles and understands financial jargon. While it was only trained on roughly 5,000 additional sentences and needs far more work to be generally viable, it still proves the concept. However, a general-purpose generative model with an embedded knowledge base and LangChain managing the queries will outperform it across a broader range of financial data. To make FinBERT operate at the same efficacy would likely take tens of thousands of additional manually created and curated training examples.
Looking to the Future: Autonomous Agents
Another new paradigm, still in its nascent stages but showing incredible promise and strong backing from the open-source community, is having an LLM create a plan for accomplishing a goal, then spin up additional instances (or agents) to complete narrow tasks that contribute to the whole. Libraries such as Auto-GPT help to formalize this paradigm, and integrations with popular software mean agents can take autonomous action that is reasoned and documented.
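The plan-then-delegate loop can be sketched as follows. This is a toy stand-in, not Auto-GPT’s actual API: in a real agent framework both the planner and the workers are LLM calls, whereas here the decomposition is a fixed, invented list so the control flow is visible and auditable.

```python
def plan(goal: str) -> list[str]:
    """Stand-in planner: decompose a goal into narrow sub-tasks.

    In a real agent framework this decomposition is itself produced by an
    LLM call; the fixed three-step output here is invented for illustration.
    """
    return [f"research: {goal}", f"draft: {goal}", f"review: {goal}"]

def run_agent(goal: str) -> list[str]:
    """Execute each sub-task with a worker 'agent' and log what was done."""
    log = []
    for task in plan(goal):
        # Each worker would be its own LLM instance with a narrow prompt;
        # recording every completed task is what makes the agent's actions
        # reasoned and documented rather than a black box.
        log.append(f"done -> {task}")
    return log
```

The log is the important design choice: because every sub-task and its outcome is recorded, an autonomous run can be reviewed after the fact, which matters in a regulated sector like WBCM.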
The Right LLM: A Matter of Value
Choosing the right LLM is about understanding the nuanced needs of the WBCM sector and deploying the right AI tools to address them. The real value lies less in the fine-tuning of the LLM and more in building a rich knowledge base to use with it. Using embeddings and similarity search to provide context for the LLM could yield better results than laborious fine-tuning.
The ultimate answer to the industry’s problems may lie in a solution that skillfully blends the best of NLP and LLM technologies, thus harnessing their respective strengths. In a sector where the stakes are high and the margin for error slim, the introduction of LLMs and novel approaches like LangChain signifies that the field is far from being monopolized.
There’s always room for innovative approaches and solutions. Decision-makers in the Wholesale Banking & Capital Markets sector are urged to embrace the potential of LLMs, remain open to GPT and its alternatives, and understand that the goal is not just to deploy AI but to create value with it. As the AI landscape continues to evolve, the question isn’t about replacing one technology with another, but about building a future that leverages the strengths of each, ensuring the sector continues to thrive in the age of AI.