I never wanted to dive or jump without a bungee cord, but reading some commentary from the industry, it seems that this is exactly what CFOs are being asked to do when it comes to investing in artificial intelligence tools. For example, Ashu Garg and Jaya Gupta of Foundation Capital claim: "This isn't just a new software category; it's the dismantling of enterprise software as we know it."
Knowing CFOs as I do, they are a pragmatic group, not easily swayed by marketing, who will only invest when tangible value can be demonstrated. They are also unlikely to invest in AI if they sense they might lose control of critical decision-making processes. So, if vendors want to encourage CFOs to adopt AI, finance leaders must be confident they can trust the technology to deliver accurate results.
Leaving the hype aside, existing large language models (LLMs) and conversational tools are unlikely to dismantle every element of financial workflows in the short term. However, change is coming and CFOs must be prepared. Right now, they should be thinking about getting the right foundations in place so that, when the time comes to adopt AI tools, they have maximum flexibility to do so pragmatically, rather than the unnerving sensation of jumping off a cliff into the unknown attached to a bungee cord.
Product and Technology Director, Unit4.
AI and the randomness of life
Once the organizational and IT foundations are established, CFOs will have more confidence that AI tools will base decisions on accurate information. They will also be better placed to supervise AI tools to prevent incorrect decisions. For example, unpredictability in forecasting and planning is a major challenge.
Black swan events can have dramatic and unforeseen effects on performance, and these are not simple for LLMs to address. Traditionally, models require training on every eventuality to make decisions, but with the right building blocks in place, finance teams can decide the best way to address such unique scenarios with AI tools.
One way AI agents may address these more complex situations is by collaborating with each other to complete tasks autonomously, as highlighted by industry analyst and commentator Phil Wainewright. Potentially, this approach will allow these tools to find new solutions and create opportunities to boost productivity as well as business performance.
Three priorities for building trust in AI
In this scenario, CFOs must be prepared to allow critical financial systems to function autonomously without supervision. This will require great confidence in AI, but finance leaders can have more confidence in their control of AI tools if they have addressed three priorities:
1. Input data integrity: It is obvious, but the data must be accurate, and its integrity protected, for AI tools to make reliable decisions. AI agents must be able to share data to collaborate, so organizations need a single source of truth for all information within their systems, as well as the ability to easily integrate information from external sources. This also means being able to read all data, in all formats, structured and unstructured. Then there is data security, and knowing that the data comes from reliable sources: if AI agents are talking to each other unimpeded, how do you guarantee they are all trustworthy?
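To make this priority concrete, here is a minimal sketch of the kind of gate a finance team could place in front of an AI agent before it consumes a record. All names (`TRUSTED_SOURCES`, the record fields) are invented for illustration; real systems would use their own provenance metadata and signing schemes.

```python
import hashlib

# Hypothetical allow-list of approved data sources (illustrative only)
TRUSTED_SOURCES = {"erp_core", "bank_feed"}

def integrity_check(record: dict) -> bool:
    """Accept a record only if it comes from a trusted source
    and its stored checksum matches its payload."""
    if record.get("source") not in TRUSTED_SOURCES:
        return False
    payload = record.get("payload", "")
    expected = hashlib.sha256(payload.encode()).hexdigest()
    return record.get("checksum") == expected

# Example: a record from an approved source with a valid checksum passes;
# one from an unknown feed is rejected before any agent sees it.
record = {
    "source": "erp_core",
    "payload": "invoice:4711;amount:1250.00;currency:EUR",
}
record["checksum"] = hashlib.sha256(record["payload"].encode()).hexdigest()

print(integrity_check(record))  # → True
print(integrity_check({"source": "unknown_feed", "payload": "x"}))  # → False
```

The design point is simply that trust checks happen at the boundary, before data reaches any agent, rather than relying on each agent to vet its peers.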
2. Problem complexity: The AI tool you adopt needs to fit the problem. Generalist AI models, such as conversational tools, may not be suitable for making decisions on niche challenges. How the AI is scrutinized matters: does it have the right data sources, relevant to the problem you are addressing? But the even bigger question is how it deals with randomness. Phil Wainewright talks about the "ingenuity of humans" that today's systems cannot replicate. In the world of finance, if you are forecasting, there is a multiplicity of known factors that affect business performance, but there are also black swans that are very difficult to train an AI to adapt to. How will your AI model deal with randomness?
3. Decision-making transparency: If we are going to let AI agents make more decisions in financial environments, and trust them to do so, then we must be able to trust the answers they provide. Unsupervised learning is a key step on the path to "letting go", but it requires confidence both in the model used and in the training data. With LLMs, this process can also become inefficient. The more data required to train the AI, the bigger the black box becomes, the harder it is to manage, and the harder it is to understand its decision-making. It also raises the risk that unreliable data sources are introduced into the model. Companies cannot afford to trust technologies and data that cannot be decoded, so it is essential to find more elegant, streamlined ways to demonstrate what data is being used and how the model uses that data to reach decisions.
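One lightweight way to work towards that transparency is an audit trail: every automated decision is logged alongside the model version and data sources that produced it, so a finance team can later reconstruct how an answer was reached. The sketch below uses invented names throughout and stands in for whatever audit mechanism a real platform provides.

```python
import json
from datetime import datetime, timezone

# Hypothetical in-memory audit trail for AI-made decisions (illustrative only);
# a production system would persist this to tamper-evident storage.
audit_log: list[dict] = []

def record_decision(decision: str, model_version: str, sources: list[str]) -> dict:
    """Log a decision together with the model and data sources behind it."""
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "decision": decision,
        "model_version": model_version,
        "data_sources": sources,
    }
    audit_log.append(entry)
    return entry

# Example: an agent approves an invoice; the log captures what informed it.
entry = record_decision(
    decision="approve_invoice_4711",
    model_version="forecast-model-v2",
    sources=["erp_core", "bank_feed"],
)
print(json.dumps(entry, indent=2))
```

Even this simple structure answers the two auditor questions the article raises: what data was used, and which model turned it into a decision.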
Addressing these priorities from the start will give CFOs confidence that AI is being adopted as part of a structured approach, surrounded by defined policies and guidelines. Having such checks and balances will ensure the adoption of AI is not a leap of faith. Certainly, there is an element of stepping into the unknown, because we still do not know the full extent of what mature AI technologies will be capable of, but if you approach it well, it will not feel like you are strapped to a bungee cord, curling your toes over the edge of the cliff while you psych yourself up.
We have compiled a list of the best RPA software.
This article was produced as part of TechRadarPro's Expert Insights channel, where we feature the best and brightest minds in the technology industry today. The views expressed here are those of the author and are not necessarily those of TechRadarPro or Future plc. If you are interested in contributing, find out more here: