Since the emergence of ChatGPT in fall 2022, prompt engineering has become a widespread practice, with guides and advice proliferating across the internet on how to optimize interactions with large language models (LLMs) and AI generators. In the commercial realm, businesses are leveraging LLMs to develop product copilots, streamline tasks, and create personalized assistants, reflecting broad adoption across various sectors.
Yet while prompt engineering has been integral to getting the most out of LLMs, new research suggests that the task may be best handled by the model itself rather than by a human engineer. This has cast doubt on prompt engineering's future and strengthened suspicions that a fair portion of prompt-engineering jobs may be a passing fad, at least as the field is currently imagined. As some researchers have noted, the optimal prompting approach can also vary depending on the specific model, dataset, and prompting strategy employed.
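To make the idea concrete, here is a minimal sketch of what "letting the model engineer its own prompt" can look like in practice: the model proposes rewrites of its own instruction, and a small labeled dev set decides which version to keep. The `call_llm` wrapper, the exact-match scoring, and the function names are illustrative assumptions for this sketch, not the method used in any particular study.

```python
# Minimal sketch of model-driven prompt optimization.
# Assumes a generic call_llm(prompt: str) -> str wrapper around whatever
# LLM API you use (hypothetical stand-in, not a specific library call).

from typing import Callable, List, Tuple


def optimize_prompt(
    call_llm: Callable[[str], str],
    seed_prompt: str,
    dev_set: List[Tuple[str, str]],  # (input, expected output) pairs
    rounds: int = 5,
) -> str:
    """Let the model rewrite its own instruction and keep the best-scoring version."""

    def score(prompt: str) -> float:
        # Toy metric: fraction of dev examples answered exactly right.
        hits = 0
        for text, expected in dev_set:
            answer = call_llm(f"{prompt}\n\nInput: {text}\nOutput:").strip()
            if answer == expected:
                hits += 1
        return hits / len(dev_set)

    best_prompt, best_score = seed_prompt, score(seed_prompt)
    for _ in range(rounds):
        # Ask the model itself to propose an improved instruction.
        candidate = call_llm(
            "Rewrite the following task instruction so a language model "
            "follows it more accurately. Return only the new instruction.\n\n"
            f"{best_prompt}"
        ).strip()
        candidate_score = score(candidate)
        if candidate_score > best_score:
            best_prompt, best_score = candidate, candidate_score
    return best_prompt
```

In this kind of loop, the human supplies a seed instruction and a handful of labeled examples; the search over wording is delegated to the model, which is the shift the research points to.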
"Maybe we're calling them prompt engineers today ... But I think the nature of that interaction will just keep on changing as AI models also keep changing."