Robotic process automation (RPA) is undoubtedly the hottest topic in discussions about the architecture and functionality of modern financial back-office support systems, including post-trade operations.
What started as an effort to improve the productivity of various isolated, labor-intensive data-processing operations has quickly become an integral component of digital transformation strategies for financial institutions around the world. Although RPA may look like the latest technological “silver bullet,” its implementation can lead to a catastrophic failure of business operations if its logical structure is not designed and tested properly.
Specifically, if existing manual processes and their IT infrastructure are “blindly” copied into the RPA architecture, the result can be a reduction, not an increase, in the overall productivity of business operations. Why is that? Humans and machines differ drastically in the ways they comprehend information and process data.
Biases and heuristics that humans typically find useful can create significant perils for the modeling of the processes we would like to automate.
Whether we use software robots or systems based on artificial intelligence (AI), the greatest challenge in building effective RPA is to re-design the subject of automation from the machine’s perspective. Setting aside the know-how developed by human operators and re-thinking all the familiar business processes from the computer’s point of view are the most difficult challenges that we, designers of AI systems, face every day.
Think about it: if we blindly copied the way birds use their wings to fly, we would never have created airplanes that carry hundreds of passengers at speeds no bird could ever achieve.
The key difference between humans and computers is that humans can easily learn new concepts. We are so used to this natural ability that we don’t even notice the shifts in our interpretation of data (the “aha” moments) that must be explicitly “explained” to a computer system running RPA. What we take for granted, computers must be programmed to do.
Computers, on the other hand, have a notable capability for performing multiple tasks at the same time, something humans genuinely find difficult. Although we humans are capable of multitasking as well, we only do it for autonomic or involuntary (non-cognitive) tasks, such as breathing and the beating of the heart.
This limitation constantly finds its way into the processes we design. Because our thinking is sequential (linear in nature), our operational procedures tend to follow the same path. That is not very useful if we want to build an efficient RPA system. Re-designing the processes targeted for automation to take advantage of AI’s (or robotic systems’) multitasking abilities is one of the major keys to successful RPA, as the sketch below illustrates.
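To make the contrast concrete, here is a minimal Python sketch. It is illustrative only: the trade identifier and the three post-trade checks are invented stand-ins for whatever independent steps a real operation involves. It compares a human-style sequential procedure with a concurrent one that lets the machine run independent checks at once:

```python
import time
from concurrent.futures import ThreadPoolExecutor

# Hypothetical, mutually independent post-trade checks. A human clerk
# performs these one after another; a software robot need not.
def validate_trade(trade_id: str) -> str:
    time.sleep(0.5)  # stand-in for an I/O-bound lookup
    return f"{trade_id}: trade validated"

def check_counterparty(trade_id: str) -> str:
    time.sleep(0.5)
    return f"{trade_id}: counterparty OK"

def reconcile_cash(trade_id: str) -> str:
    time.sleep(0.5)
    return f"{trade_id}: cash reconciled"

CHECKS = [validate_trade, check_counterparty, reconcile_cash]

def process_sequentially(trade_id: str) -> list:
    # Mirrors the manual procedure: one step at a time (~1.5 s here).
    return [check(trade_id) for check in CHECKS]

def process_concurrently(trade_id: str) -> list:
    # Runs the independent checks in parallel (~0.5 s here).
    with ThreadPoolExecutor(max_workers=len(CHECKS)) as pool:
        futures = [pool.submit(check, trade_id) for check in CHECKS]
        return [f.result() for f in futures]

if __name__ == "__main__":
    start = time.perf_counter()
    process_sequentially("T-1001")
    print(f"sequential: {time.perf_counter() - start:.2f}s")

    start = time.perf_counter()
    process_concurrently("T-1001")
    print(f"concurrent: {time.perf_counter() - start:.2f}s")
```

The point is not the threading mechanics but the re-design step: the concurrent version is only possible once someone has established that the checks do not depend on each other, which is exactly the kind of analysis that copying the human procedure skips.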
Traditional automation systems, including the first incarnations of RPA, were typically built through collaboration between subject-matter experts (the people who run the existing operations) and IT personnel capable of designing software and algorithms that mimic the currently running business processes.
The next generation of RPA, called Intelligent RPA, can only be built successfully if AI specialists, who have in-depth knowledge of machine-learning techniques and understand the way computers “think,” are involved in the RPA design and its implementation right from the beginning.
(Editor’s note: Dr. Alex Bogdan is the Chief Scientific Officer at Castle Ridge Asset Management. Dr. Bogdan is an accomplished quantitative analyst with a wide range of interdisciplinary knowledge. He has more than 20 years of experience in the financial services industry and has developed multi-strategy portfolio management methodologies, quantitative models, automated trading systems, and investment algorithms. At SecOps North America 2018 in Toronto, June 6-7, Dr. Bogdan will be a panelist for the “Crossing the Robotics Rubicon” session.)