Researchers Reveal Instability and Limitations of AI Algorithms

2024-01-12

ChatGPT and other machine-learning-based solutions are on the rise. However, even the most successful algorithms have limitations. Researchers at the University of Copenhagen have now proven mathematically that, beyond simple problems, it is impossible to create AI algorithms that are always stable. The research, published on the arXiv preprint server, may provide guidance on how to test algorithms more thoroughly, and it is a reminder that machines do not possess human intelligence.


Machine learning algorithms can interpret medical scans, translate foreign languages, and may soon drive cars more safely than humans. However, even the best algorithms have weaknesses, and a research group at the University of Copenhagen's Department of Computer Science has set out to uncover them.


Take the example of an autonomous car reading road signs. If someone puts a sticker on a sign, a human driver is unlikely to be distracted. But a machine can easily be confused, because the sign now differs from the ones it was trained on.


Amir Yehudayoff, the head of the research group, said, "We want algorithms to be stable in the sense that if the input changes slightly, the output remains almost the same. Real life involves all kinds of noise that humans are used to ignoring, but machines can get confused by it."
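To make the idea concrete, here is a minimal sketch in Python of what such a stability check could look like. The `is_stable` function, the toy `model`, and all tolerance values are our own illustrative choices, not code from the researchers.

```python
import numpy as np

def is_stable(model, x, epsilon=0.01, trials=100, rng=None):
    """Empirically probe a classifier's stability at input x.

    Stability here means: inputs within epsilon of x (in max-norm)
    should receive the same label as x itself.
    """
    rng = rng or np.random.default_rng(0)
    baseline = model(x)
    for _ in range(trials):
        # Small bounded noise, loosely analogous to a sticker on a road sign.
        noise = rng.uniform(-epsilon, epsilon, size=x.shape)
        if model(x + noise) != baseline:
            return False  # found a small perturbation that flips the output
    return True

# Toy classifier: thresholds the sum of its inputs.
model = lambda v: int(v.sum() > 0.0)
print(is_stable(model, np.array([0.001, -0.0005])))  # near the boundary: likely False
print(is_stable(model, np.array([5.0, 3.0])))        # far from the boundary: True
```

Note that random probing like this can only uncover instability; it cannot certify stability, which is one reason a precise mathematical treatment of worst-case behavior, as in the Copenhagen work, is needed.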


Yehudayoff added, "I would like to point out that we have not worked directly on applications for automated cars. Nevertheless, this seems to be a problem too complex for algorithms to always remain stable." He further explained that this does not necessarily have major consequences for the development of automated cars. "If the algorithm fails in only a very few cases, that is likely acceptable. But if it fails in a large number of cases, that is bad news."


The scientific article itself cannot be applied by industry to identify flaws in its algorithms, and that was never the purpose, the professor explained. "We are developing a language for discussing the weaknesses of machine learning algorithms. This may lead to guidelines for how algorithms should be tested. In the long run, it may in turn lead to the development of better and more stable algorithms."


Yehudayoff said, "A company may claim to have developed an absolutely secure privacy protection solution. First, our approach may help establish that such a solution cannot be absolutely secure. Second, it will be able to pinpoint the weaknesses."


First and foremost, however, this scientific article contributes to theory; in particular, its mathematical content is groundbreaking, he added:


"We intuitively understand that stable algorithms should work as before when exposed to a small amount of input noise. Just like road signs with stickers on them." But as theoretical computer scientists, we need a clear definition. We must be able to describe this problem in the language of mathematics. If we want to ensure that an algorithm is stable, how much noise can it withstand, and how close should the output be to the original output? That is the answer we propose."


The scientific article has generated great interest among colleagues in theoretical computer science, but so far not in the tech industry.


"There is always some delay between new theoretical developments and the interest of people working in applications," Yehudayoff said. "And some theoretical developments will always be overlooked."


However, he does not believe that will happen in this case. "Machine learning continues to advance rapidly, and it is important to remember that even solutions that are highly successful in the real world have limitations. Machines may sometimes appear to think, but they do not possess human intelligence, and that is worth remembering."