Recently, a collaborative study conducted by the Norwegian University of Science and Technology, Mizani, and the Idiap Research Institute revealed that the artificial intelligence model GPT-4 demonstrates remarkable accuracy in facial recognition, gender identification, and age estimation, comparable to specialized algorithms, despite not being specifically trained for these tasks.
The researchers ran extensive tests of GPT-4's biometric recognition capabilities and found its performance on par with specialized facial recognition algorithms such as MobileFaceNet. This result opens new possibilities for applying AI in biometric identification.
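The study does not publish its evaluation code, but specialized systems like MobileFaceNet are typically used for verification by embedding each face as a vector and thresholding the similarity between embeddings. The sketch below illustrates that standard comparison step; the function names, the toy vectors, and the 0.6 threshold are illustrative assumptions, not details from the study.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two face-embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def same_person(emb_a: np.ndarray, emb_b: np.ndarray,
                threshold: float = 0.6) -> bool:
    """Declare a match when the embeddings are similar enough.

    The threshold is a placeholder; real systems tune it on a
    validation set to balance false accepts against false rejects.
    """
    return cosine_similarity(emb_a, emb_b) >= threshold

# Toy vectors standing in for the output of a face-embedding model.
anchor = np.array([0.9, 0.1, 0.4])
probe_same = np.array([0.85, 0.15, 0.38])   # points in nearly the same direction
probe_diff = np.array([-0.4, 0.9, -0.1])    # points in a very different direction

print(same_person(anchor, probe_same))      # True
print(same_person(anchor, probe_diff))      # False
```

GPT-4, by contrast, is queried with images and a natural-language prompt rather than producing embeddings, which is part of what makes its comparable accuracy notable.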
In gender identification tests, GPT-4 achieved 100% accuracy on a balanced dataset of 5,400 images, edging out DeepFace, a model purpose-built for the task, which reached 99%. In age estimation, GPT-4 also performed impressively, identifying age ranges correctly 74.25% of the time, though the researchers noted that it tends to give broader range estimates for subjects aged 60 and above.
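As a rough sketch of how such figures are computed: classification accuracy is simply the share of predictions that match the labels, and age estimation is scored by mapping each answer to a coarse range and checking range agreement. The helper names and the ten-year bucket width below are assumptions for illustration; the study's exact age ranges are not given here.

```python
def accuracy(predictions, labels):
    """Fraction of predictions that exactly match the labels."""
    assert predictions and len(predictions) == len(labels)
    return sum(p == t for p, t in zip(predictions, labels)) / len(labels)

def age_bucket(age: int, width: int = 10) -> str:
    """Map an exact age to a coarse range such as '20-29'.

    A ten-year width is an illustrative assumption, not the
    study's actual binning.
    """
    lo = (age // width) * width
    return f"{lo}-{lo + width - 1}"

# Gender labels: 1.0 means every prediction matched.
print(accuracy(["F", "M", "F"], ["F", "M", "F"]))   # 1.0

# Age estimation: a prediction counts as correct if the buckets agree.
preds, truth = [23, 67, 41, 58], [27, 62, 51, 55]
hits = accuracy([age_bucket(p) for p in preds],
                [age_bucket(t) for t in truth])
print(hits)                                          # 0.75
```

The noted tendency toward broader estimates for older subjects would show up here as the model answering with wider ranges than a single bucket, which a stricter scoring rule would count against it.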
The study also uncovered a concerning security vulnerability: the researchers were able to bypass GPT-4's built-in safeguards with simple techniques and extract sensitive biometric information. This flaw poses a significant challenge to the security of large language models.
The researchers stated that this finding underscores the urgent need for further security research on large language models. Given the exceptional performance of models like GPT-4 in biometric tasks, ensuring their security is crucial. Additionally, the study's authors warned against relying solely on models like GPT-4 for identification tasks, as they may provide descriptions that appear credible but are actually inaccurate.
It is worth noting that biometric capability in large language models is not a new phenomenon. OpenAI, for instance, had previously acknowledged it and, for safety reasons, disabled person-identification features in the GPT-4-powered "Be My Eyes" application for visually impaired users.
The novelty of this study lies in the researchers' ability to bypass GPT-4's safety protocols with simple techniques, which is how they confirmed the model's high accuracy on biometric tasks. The finding offers a new perspective on the future development of AI technology and a reminder to remain vigilant about security risks while benefiting from AI advances.