"AI Technology Revolutionizing the Role of Prompt Engineers: Job Displacement in Progress"

2024-03-13

How many times have we heard that prompting is the job of the future? Since the launch of ChatGPT, everyone has been trying to prompt the chatbot in different ways and calling it a job. Some people are worried that they will lose their jobs due to the emergence of this new role. However, it is now evident that AI can do this job better than humans.


"Who would really believe that 'prompting in ChatGPT' would be a high-paying full-time job?" a user asked on HackerNews.


In a recent study, researchers at VMware found that large language models become strikingly unpredictable when humans tinker with their prompts. Interestingly, researchers have also concluded that "humans should no longer manually optimize prompts": the best prompt engineering is done by AI models themselves.


Many problems with manual prompting


VMware's Rick Battle and Teja Gollapudi experimented with a range of large and small language models and prompting techniques in search of the most effective methods. "Surprisingly and frustratingly, small modifications to prompts can cause large fluctuations in performance," the paper concluded.


They also emphasized that there is no clear method for improving performance, and that the gains from manual tweaking are minimal. The study found, for example, that practitioners don't need models on the scale of GPT-4 or PaLM-2 to produce effective prompts: in their experiments, Llama 13B and Mistral-7B were able to generate "more sophisticated prompts," which surprised them.


"It is undeniable that automatically generated prompts perform better and generalize better than manually tuned 'positive thinking' prompts," the paper concluded, noting that even when positive affirmations are fed to the chatbot, the automatically generated prompts still come out ahead.
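The automatic approach described above boils down to a scored search over prompt variants. The sketch below illustrates the idea with a hypothetical scoring function standing in for a real benchmark run; the candidate fragments, the `score` stub, and the greedy search loop are all illustrative assumptions, not the paper's actual method.

```python
import random

# Hypothetical candidate prompt fragments. In an LLM-driven setup like the one
# the paper describes, these would be generated and mutated by the model itself.
CANDIDATES = [
    "Solve the problem step by step.",
    "You are an expert mathematician.",
    "Take a deep breath and work carefully.",
    "Answer concisely.",
]

def score(prompt: str) -> float:
    """Stand-in for a real evaluation: in practice, run the prompt against a
    benchmark and return accuracy. Here we fake a deterministic score."""
    return (sum(ord(c) for c in prompt) % 100) / 100.0

def optimize(candidates, rounds=3, seed=0):
    """Greedy search: keep the best-scoring prompt seen so far, then try
    random combinations of it with other fragments."""
    rng = random.Random(seed)
    best = max(candidates, key=score)
    for _ in range(rounds):
        variant = best + " " + rng.choice(candidates)
        if score(variant) > score(best):
            best = variant
    return best

best_prompt = optimize(CANDIDATES)
print(best_prompt)
```

The point of the sketch is that nothing in the loop requires human intuition: given an evaluation harness, the search can run unattended, which is exactly why the authors argue manual tuning is unnecessary.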


"I can't believe some of the things it generates," Battle said in an interview, referring to prompts the system produced on its own, prompts so strange that no human would likely have come up with them.


Biases in machine learning systems often come from biased training data, but that is only one of many possible sources.


Another problem with prompt engineering is that it introduces biases into the model's output, and observers may then attribute those biases to the AI model itself. "It can be said that prompt engineering may be an even worse source of bias," said François Chollet.


People conflate prompt engineering with simply typing English prompts into a chatbot, but under the hood the model performs a great deal of mathematical computation; English is just the model's frontend. That is why AI models can do the job better.


This makes perfect sense. "The demand for prompt engineering in large language models is a sign of a lack of robust language understanding," Melanie Mitchell wrote in a post on X last year. Even after another year of language model development and scaling, that still seems to be the case.


Is prompt engineering just a passing trend?


Just as C++ is considered a "dying language," prompt engineering is seen as a passing trend. Logging into ChatGPT or Codex and typing in exactly what you want is not as easy as it seems; it is a skill that has to be learned. Now that AI does it better than humans, the disappearance of prompt engineering seems only a matter of time. But will it really disappear?


Most prompt engineering is a trial-and-error process, and with AI now automating it, much of that manual work is no longer needed. As companies begin hiring for roles such as LLMOps, those hires may simply be prompt engineers under a new label: the title may change, but the work itself won't disappear.


Douglas Crockford, one of the developers of JavaScript and the creator of the JSON format, expressed concerns about English as a programming language "because it's too ambiguous."


He explained that the fundamental rule of programming is that a program must be perfect: "It must be perfect in every aspect, every detail, in every state, and at all times." If it is not, he added, the computer can do the worst thing at the worst possible time. "It's not the computer's fault, it's the programmer's fault," he pointed out.


Another important aspect of prompt engineering is that it keeps changing as people try different data sources and different AI models. As companies adopt a mix of open-source and closed-source models, someone may still be needed to find the best prompt combinations for each model.
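Because the best prompt differs from model to model, per-model selection is itself a small bookkeeping task. The sketch below illustrates this with made-up model names, prompt labels, and accuracy numbers; real scores would come from an evaluation harness.

```python
# Hypothetical evaluation results: accuracy of each candidate prompt on each
# model. The model names, prompt labels, and numbers are all invented.
results = {
    "open-model-a": {"prompt-1": 0.61, "prompt-2": 0.58},
    "closed-model-b": {"prompt-1": 0.55, "prompt-2": 0.66},
}

def best_prompt_per_model(results):
    """Pick, for each model, the prompt with the highest measured score.
    The winning prompt differs per model, which is the whole point."""
    return {model: max(scores, key=scores.get) for model, scores in results.items()}

print(best_prompt_per_model(results))
# → {'open-model-a': 'prompt-1', 'closed-model-b': 'prompt-2'}
```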


We call them prompt engineers today, but as AI advances we may come to call them something else. This is not to say that prompt engineering has no value; it has found its place and may persist for a while, just not for long.