Successfully implementing AI: 4 main challenges

06 / 10 / 2021

Innovation is essential for progress. This is true in general, but especially in a dynamic industry like digital payments. To meet ever-growing consumer demand for simple, frictionless experiences and to prevent as much cybercrime as possible, technological advances in fields like Artificial Intelligence (AI) are crucial. Still, there are a number of challenges to overcome before the full benefits of AI can be harnessed.


AI is a major trend within digital payments. It has already become part of many people’s day-to-day lives: just think of the chatbots you communicate with on websites, or the voice recognition used when you call a company. At Worldline, we already use AI for fraud detection and for dynamic transaction routing, which adapts iteratively and autonomously based on the feedback it retrieves from each transaction that is processed. However, I believe the use of AI still has the potential to increase significantly in the coming years.
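
To make the idea of feedback-driven routing concrete, here is a minimal sketch that models it as an epsilon-greedy multi-armed bandit: each route is an “arm”, and the approval outcome of every processed transaction updates that route’s estimated success rate. The route names and the epsilon value are illustrative assumptions; this is one generic way to implement such a feedback loop, not Worldline’s actual routing logic.

```python
import random

class RouteSelector:
    """Epsilon-greedy selection among payment routes (hypothetical example)."""

    def __init__(self, routes, epsilon=0.1):
        self.epsilon = epsilon                      # fraction of exploratory picks
        self.successes = {r: 0 for r in routes}     # approved transactions per route
        self.attempts = {r: 0 for r in routes}      # total transactions per route

    def choose_route(self):
        if random.random() < self.epsilon:          # explore: try a random route
            return random.choice(list(self.attempts))
        # Exploit: pick the route with the best observed approval rate,
        # treating untried routes optimistically so they still get sampled.
        return max(self.attempts,
                   key=lambda r: (self.successes[r] / self.attempts[r]
                                  if self.attempts[r] else 1.0))

    def record_outcome(self, route, approved):
        """Feed the transaction result back into the route statistics."""
        self.attempts[route] += 1
        self.successes[route] += int(approved)

selector = RouteSelector(["acquirer_a", "acquirer_b", "acquirer_c"])
route = selector.choose_route()
selector.record_outcome(route, approved=True)  # feedback from the processed transaction
```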

And that brings us to the major question in this regard: how do we keep processes run by AI safe and successful, especially when they involve significant value and/or responsibility? There are four important, closely interconnected challenges to address when you want to use AI:

1. Accountability

Once you have AI making decisions on someone’s behalf, the legal question becomes: who is responsible for its actions? For instance, when a self-driving car causes an accident, who is to blame: the programmer, the manufacturer, or someone else? Naturally, this also matters in finance. The consequences are manageable when it comes to a low-value purchase, but when AI is used to purchase high-value items such as stocks, a house or a car, someone must be accountable. At the moment, guidelines and regulations in this area are still evolving.

2. Acceptability

Another very interesting challenge is the extent to which AI is accepted. It takes a lot for people to trust AI: for example, current research suggests that autonomous vehicles need to be 100 times safer than human-driven ones before the public will accept them. It does not suffice for AI to be as good or as safe as the human alternative; it needs to be much better than a person performing the same task.

3. Explainability

When a bank declines credit to someone, for example, in many countries it must explain why the request has been denied. An AI algorithm used for this decision can act as a black box: you feed it data and, on that basis, it may decline the request without revealing why. In many jurisdictions that is not legally compliant, because the reasoning behind the rejection must be provided. Likewise, when AI is used in an advisory role, it should be clear how the technology arrived at a particular recommendation. Without such explanations, AI is very difficult to use in critical decision-making situations.
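
One common way to avoid the black-box problem is to use an inherently interpretable model and report the features that pushed the score towards rejection as “reason codes”. The sketch below does this with a logistic regression; the feature names, the synthetic training data and the 0.5 decision threshold are illustrative assumptions, not a real credit-scoring model.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

feature_names = ["income", "existing_debt", "missed_payments"]

# Tiny synthetic training set (hypothetical); label 1 = application declined.
X = np.array([[60, 5, 0], [20, 30, 2], [45, 10, 0],
              [15, 25, 3], [70, 2, 0], [25, 40, 1]], dtype=float)
y = np.array([0, 1, 0, 1, 0, 1])

model = LogisticRegression(max_iter=1000).fit(X, y)

def explain_decision(applicant):
    """Print the decision plus each feature's contribution to the log-odds."""
    prob_decline = model.predict_proba(applicant.reshape(1, -1))[0, 1]
    decision = "declined" if prob_decline >= 0.5 else "approved"
    print(f"Application {decision} (decline score: {prob_decline:.2f})")
    contributions = model.coef_[0] * applicant  # per-feature log-odds impact
    for name, contrib in sorted(zip(feature_names, contributions),
                                key=lambda pair: -pair[1]):
        direction = "towards decline" if contrib > 0 else "towards approval"
        print(f"  {name}: {contrib:+.2f} ({direction})")

explain_decision(np.array([18.0, 35.0, 2.0]))
```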

4. Measurability

With traditional programming, you write the system once and, afterwards, you can easily test and measure how it performs. AI systems, however, can be adaptive: their behaviour changes as they learn, which makes it harder to tell whether an algorithm is performing better or worse than anticipated. Traditional KPIs therefore fall short, and measuring the result or performance of an algorithm becomes much more difficult.
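
One practical answer is to measure an adaptive system continuously rather than once: track a quality metric over a rolling window of live decisions so that any drift in behaviour becomes visible. This is a minimal sketch of that idea; the window size and alert threshold are illustrative assumptions.

```python
from collections import deque

class RollingAccuracyMonitor:
    """Track prediction quality over a sliding window of recent decisions."""

    def __init__(self, window_size=1000, alert_threshold=0.95):
        self.outcomes = deque(maxlen=window_size)  # 1 = correct, 0 = wrong
        self.alert_threshold = alert_threshold

    def record(self, prediction, actual):
        self.outcomes.append(1 if prediction == actual else 0)
        self._check()

    def accuracy(self):
        return sum(self.outcomes) / len(self.outcomes) if self.outcomes else 1.0

    def _check(self):
        # Only alert once the window is full, so early noise is ignored.
        if (len(self.outcomes) == self.outcomes.maxlen
                and self.accuracy() < self.alert_threshold):
            print(f"ALERT: rolling accuracy dropped to {self.accuracy():.3f}")

monitor = RollingAccuracyMonitor(window_size=5, alert_threshold=0.8)
for prediction, actual in [(1, 1), (0, 0), (1, 0), (0, 1), (1, 0)]:
    monitor.record(prediction, actual)  # fires an alert: 2/5 correct < 0.8
```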

David Daly

Worldline Discovery Hub Editor-in-Chief
With over 20 years of experience in tech, David’s passion is how innovative technology solutions can enable new experiences, business models and operational efficiencies. He manages the Worldline Discovery Hub, which brings together payments experts from across the group to identify key payment trends, publish thought leadership, deliver innovation workshops with clients and build proofs of concept. He has authored two books and is a Fellow of the British Computer Society and a Chartered IT Professional.