Generative AI: Are you ready to start reimagining the future of financial services?
18 / 09 / 2023
Companies around the world are embracing Generative AI to drive transformational change within their organisations. Across financial institutions, we see a strong interest in leveraging this innovative technology. In this article, we interview two experts, Valentine Horstmann (Product Manager for Worldline Conversational Platform) and Charlotte Pasqual (R&D Project Manager at Worldline Labs), to explore the benefits of Generative AI and the challenges that need to be addressed. Charlotte provides insights into the ways in which AI is revolutionising the payment and banking industry, while Valentine focuses on its impact on customer service and business operations.
How would you describe the impact of Generative AI in the field of AI? Is it an evolution or a transformative shift?
Charlotte Pasqual: “I think we are witnessing an unprecedented acceleration. In fact, the revolution took place some years ago with the concept of ‘attention’ and the new architecture of deep neural networks called ‘transformers’. Large Language Models (LLMs) are the latest generation of this kind of model, and we will probably see an even more powerful one in the coming months or years.
The success of ChatGPT comes first from its very intuitive and accessible interface, which proved the almost unlimited potential of Generative AI. Within a few days, everyone could become an expert in conversing with AI to get the best generated answers on a vast range of topics, and all of this for free – as user questions were being used to create a new dataset.
And if we consider the other promising Generative AI models – those which can create all sorts of content from text prompts (images, videos, voice, and others) – we can see there is a revolution, with the emergence of a wide range of new tools and uses that are still to be explored.”
What are the potential opportunities and possibilities that Generative AI can unlock?
Valentine Horstmann: “The advantage of AI is that it can really help us in many ways. The idea is to identify friction points, and that's where task automation can bring more fluidity to processes and improve service quality.
On the business side, we're talking about knowledge, rationalisation, expertise, and efficiency. Access to accurate data has become ultra-efficient. There will be fewer mistakes, higher quality in writing, and better follow-up. AI tools will be able to strengthen the anticipation of customer behaviour and, for example, push the right offer to them at the right time.
On the customer side, it's all about hyper-personalisation. Exchanges, regardless of the channel, will be personalised, taking into account context, history, and emotions. NLP (Natural Language Processing) capabilities are not the only ones to evolve drastically thanks to AI. Speech synthesis is also on this trend. Today, we can generate voices that are so close to a human voice that they are practically indistinguishable. Voice assistants will become more natural and pleasant to listen to. This will make it easier to get the right answer immediately, and problem-solving will be both faster and of higher quality. Customer satisfaction should increase with self-service tools.”
Looking at the banking industry, how do you see AI influencing it, and how can banks leverage Generative AI to their advantage?
Charlotte Pasqual: “The field is evolving rapidly: we see new models, new actors, and new progress every week, if not every day. This makes the current market very unstable, but also very promising, as everyone wants AI and LLMs in their products and services!
For banks, we foresee several kinds of impact. First, on the customer side, with more automation and accessibility. Then, on the professional user side, a new set of assistants should appear, able to perform repetitive tasks, reduce human mistakes, and work faster. That is something we were already forecasting with the earlier generation of assistants. What is different now is that these tools will be able to take on more cognitive tasks too: because they can review and synthesise data at a significant rate and with better comprehensiveness, they could advise you on investment processes or business models, for instance; they could analyse companies’ performance, conduct competitive intelligence, fill in comparison tables, support marketing campaigns, and even reinforce KYC/AML processes or claim management. It is perhaps one of the first times that machines could be in a position to partially replace intellectual work, and people with these types of jobs and skills will have to adjust to this new reality.
Generative AI is not without its paradoxes, though: we could imagine that its exceptional creativity, for instance, would help us find new perspectives and foster innovative ideas, which could benefit many areas, such as addressing sustainability challenges. At the same time, its need for resources and its energy consumption is a serious obstacle: experience has shown that simpler tools are sometimes just as good in well-defined business use cases, with much lower energy consumption.”
AI has already started transforming customer service, but what specific changes can we expect in the near future?
Valentine Horstmann: “Generative AI will allow us to optimise the time-to-market of solutions, in terms of the responses provided to customers. Creating prompts instead of fixed answers is progress for both administrators and customers, who will receive a much more contextualised response.
I see three essential challenges for solutions that create customer-business interactions:
- Ensuring that the solution integrates anonymisation and security of the data used by the LLMs
- Framing conversations with prompts enriched by examples, called ‘few-shot learning’, to limit hallucinations and biases in the generated responses (a minimal sketch of this is given after this answer). Fine-tuning the models, if possible, can also be an excellent solution, but a more expensive one.
- Monitoring interactions to ensure relevance, safety, and the desired responses. It is essential to have access to exchanges to gauge both performance and customer feedback. The idea is to maintain control, improve models or prompts, and identify low-quality feedback through customer satisfaction.
If these three points are respected, you'll see a significant acceleration in providing human-like answers quickly and accurately, whilst respecting safety standards. Customer service would save time on processing responses, and the role of advisors would evolve into an expert role for complex resolutions. Advisors would use AI systems for support and play a crucial role in improving AI performance.”
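As an illustration of the second point above, here is a minimal sketch of what framing a conversation with few-shot examples can look like. The example exchanges, the system framing, and the build_prompt helper are illustrative assumptions rather than Worldline's implementation; in practice the assembled prompt would be sent to the chosen LLM endpoint after the anonymisation step described in the first point.

```python
# Minimal sketch of few-shot prompt framing for a customer-service assistant.
# The example exchanges and the framing text below are illustrative only.

FEW_SHOT_EXAMPLES = [
    {
        "question": "I was charged twice for the same order. What should I do?",
        "answer": (
            "I'm sorry about the duplicate charge. Please share the order "
            "reference and I will open a refund request; duplicates are "
            "usually reversed within a few business days."
        ),
    },
    {
        "question": "Can I pay in three instalments?",
        "answer": (
            "Instalment payment is available for eligible baskets. I can "
            "check whether your current basket qualifies if you give me "
            "its amount."
        ),
    },
]

SYSTEM_FRAME = (
    "You are a payment customer-service assistant. Follow the tone of the "
    "examples and answer only from the provided context. If the answer is "
    "not covered, say you will escalate to a human advisor instead of guessing."
)


def build_prompt(customer_question: str) -> str:
    """Assemble a few-shot prompt: framing, example exchanges, then the new question."""
    parts = [SYSTEM_FRAME, ""]
    for example in FEW_SHOT_EXAMPLES:
        parts.append(f"Customer: {example['question']}")
        parts.append(f"Assistant: {example['answer']}")
        parts.append("")
    parts.append(f"Customer: {customer_question}")
    parts.append("Assistant:")
    return "\n".join(parts)


if __name__ == "__main__":
    # The assembled prompt would be passed to the chosen LLM endpoint,
    # after anonymising any personal data it contains.
    print(build_prompt("My card was declined but the money left my account."))
```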
How do you think the use of AI will improve customer relationships? Are there any specific examples that come to mind illustrating this improvement?
Valentine Horstmann: “Automation plays a vital role in enhancing customer relationships by enabling self-service, providing immediate solutions, and increasing satisfaction across all channels and levels. Numerous features will be introduced and made available to both advisors and customers in the coming months.
The use of self-service will be enhanced thanks to the generative capability of providing better answers and being a true assistant, rather than just a simple chatbot or callbot.
For advisors, this means more value and precision in their responses, as well as an evolution in their role. As the true voice of the company, they will also be the users of AI systems. They will use assistants on multiple levels, with almost limitless possibilities. The speed of data processing is at the heart of the challenge, but it will provide essential transparency in exchanges: sentiment analysis, categorisation, prediction, response assistance, hyper-personalisation, automatic summarisation, and more. Each interaction can be handled with a level of detail that allows for accuracy from the first contact.
These opportunities are still underutilised because they require a lot of business finesse in deployment, but that will change in the coming months! And advisors will be at the forefront of testing and enabling this significant advancement in the field of customer relationships.”
What is for you the impact of AI on the payment industry?
Charlotte Pasqual: “As Valentine explained, the first applications of Generative AI today are around assistants and automated customer service, and this will change payment journeys as well. Generative AI can be used to improve the accessibility of digital services, thanks to innovative features like comparisons of the available payment options or order summarisation using customised vocabulary at the right level of detail. We can already deliver this kind of service, with some limitations – in particular, the hallucinations of the current models sometimes require complementary verification that the generated content is correct.
Generative AI creates new challenges for the core of payment services: security and the fight against fraud. Since the buzz around LLMs started, fraudsters have been using them for social engineering attacks at scale. It is therefore essential to implement advanced biometric authentication methods to strengthen security measures and protect payment transactions against unauthorised access and fraudulent activity.
The last kind of impact we can notice relates to digital services in general, and software development efficiency. The full AI generation of software source code is not ready yet, but AI is already a powerful help for skilled developers: analysing legacy code, providing suggestions, or offering smart autocompletion to get results faster.”
What are the key concerns associated with Generative AI technology?
Valentine Horstmann: “Generative AI is quite amazing, as it allows us to create content from almost any brief. Of course, there are limits, in particular societal, ethical, and security concerns. The EU AI Act has not been signed yet, but it is expected to be in 2023. It will provide the first legal framework for AI usage in Europe. This should bring everyone up to date on data security, traceability, environmental concerns, ethics, and the high-risk use cases that will be listed.
Major risks include the poisoning of training data, which can lead Generative AI to produce biased, inappropriate, or malicious responses. Providers are directly affected and must protect themselves against these types of attacks.
As for AI uses, numerous questions will continue to arise about intellectual property and responsibility. For instance, when AI generates content, who owns the creation: the person who writes the prompt or the technology that makes it possible? And if AI makes a decision for me that I do not endorse, who is responsible? The range of use cases is so broad that we must remain vigilant to avoid biases and other abuses that can stem directly from AI.
Moreover, the development of new models is booming, but this raises other questions about performance, environmental impact, and responsibility. Regarding performance: do I need so much data to generate content related to a business vertical? Is the data secure enough? The power consumption of these models is also incredibly high for data centres; there is huge room for improvement in optimising model power consumption.”
As with any innovative technology, AI implementation presents risks and challenges. From your perspective, what are the main challenges we should recognise?
Charlotte Pasqual: “The first question we must ask ourselves is: who is developing these models and what datasets are they using? A Stanford University study [1] showed that, in 2022, most of the new Generative AI engines were released by industrial players, and that the major actors of academic research cannot keep up with the current pace of publications. A significant part of these engines also came from the American market, and we count only a few European initiatives.
This raises concerns about the world view that is supported by these models, and about the cognitive biases that are associated with them. OpenAI explained that they used a principle of ‘reinforcement learning from human feedback’ to reduce this issue and make their chatbot’s answers less toxic, but we don’t really have information about the rules the human experts actually applied. There are some exceptions, such as the European model Bloom, whose scientific publication details how specific biases in the training dataset were handled.
We already mentioned the impacts from the point of view of sustainability, and the few available estimates would make you dizzy – 190 000 kWh were needed to train the GPT-3 model, for instance [2]. And this is only the training step; what about inference? Until now we thought it was quite insignificant compared to training, but with the rising number of users it could reach astronomical heights. These are technical challenges to be addressed in the coming months.
The last risks I would like to highlight here affect the users of such interfaces, because they are based on the most natural medium of communication: human language. Research has shown that we have a natural tendency to anthropomorphise conversational bots, and given the text quality of Generative AI systems, the risk of us underestimating their influence on our judgements or actions is very real. It could lead to nudges and manipulations, as the researcher Laurence Devillers explained a few years back. We must therefore inform our customers when they are exchanging with an automated conversational bot so that they can be mindful of it.”
Conclusion
The ability to adapt quickly and leverage the advancements in Generative AI is becoming a key success factor for staying competitive in a constantly evolving environment. The realm of possibilities is vast, and we are at the beginning of a new era where artificial intelligence and emerging technologies will radically transform how we interact, communicate, and work together.
References
[1] AI Index Steering Committee, "The AI Index 2023 Annual Report", Institute for Human-Centered AI, Stanford University, Stanford, CA, April 2023.
[2] "Worldline Labs is already in green AI", The Worldline Tech Blog.
Charlotte Pasqual
Valentine Horstmann