Is AI prompt engineering already dead?

Brainlabs has partnered with Michael Taylor, author of Prompt Engineering for Generative AI and a leading voice in the world of digital marketing and AI, for this guest post. Drawing on his extensive experience leveraging technology for business growth, Michael shares his perspective on the evolving landscape of prompt engineering and its implications for the future of work.

The meteoric rise of ChatGPT to 100 million users in just two months sparked immense interest in the field of prompt engineering: the practice of finding clever ways to phrase queries to large language models (LLMs) to get the best results. Anthropic made waves by advertising a prompt engineer role paying upwards of $375k plus benefits, and Baidu co-founder Robin Li has boldly declared that “in ten years, half of the world’s jobs will be in prompt engineering.”


However, recent developments are already casting doubt on the long-term prospects for prompt engineering as a career. OpenAI CEO Sam Altman believes that “we won’t be doing prompt engineering in 5 years” as interfaces improve. Researchers associated with the prompt optimization library DSPy have declared “AI prompt engineering is dead”, demonstrating that AI itself can actually generate more effective prompts than humans can. Does this spell the end for prompt engineering as a career field before it has even gotten started?

Prompt engineering is a skill, not a job

I call myself a prompt engineer on LinkedIn, and I published a prompt engineering book through O’Reilly, so I’m extremely bullish on this trend. However, I don’t think ‘prompt engineer’ will be a common job role. Rather, it will be a skill you need to possess in order to do many jobs effectively, much like proficiency in Microsoft Excel: essential for many roles, yet it would be odd to call yourself an ‘Excel engineer’.

Already I am seeing searches for the term ‘prompt engineer’ level out after an initial spike, as momentum shifts toward a broader ‘AI engineer’ role, which includes fine-tuning custom models, implementing logging and evaluation metrics, and building the infrastructure needed to provide LLMs with the right context.

Source: Google Trends

I was prompt engineering back in 2020 with the GPT-3 beta, and many of the tips, tricks and hacks needed to get the model to output usable results were suddenly no longer needed when GPT-4 came out. This reminds me of the early days of SEO, when you could employ black hat techniques like stuffing the same keyword multiple times onto a page – this eventually stopped working as Google got better at figuring out what ‘good’ content was! I predict we’ll see the same timeline with ChatGPT, but accelerated.

The rise of frameworks like DSPy allows AI engineers to operate at an even higher level of abstraction: defining the inputs and outputs, as well as the evaluation criteria, and directing AI to carry out the hard work of applying prompt engineering tactics to optimize performance. A recent study shows that DSPy achieves 50% better performance scores and saves over 20 hours versus a human prompt engineer (when comparing the F1 score of 10-Shot AutoDiCoT versus DSPy Default). However, the best scores came from the combination of DSPy and small modifications by the human prompt engineer.
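The core idea – define your inputs, outputs, and evaluation criteria, then let a program search for the best prompt – can be sketched in a few lines of plain Python. This is a schematic illustration of the concept, not DSPy’s actual API; the dataset, candidate prompts, and the `call_llm` stub are all hypothetical, and in practice the stub would be replaced by a real LLM client.

```python
# Illustrative sketch of automated prompt optimization (not DSPy's API).
# We define inputs/outputs (a tiny labeled dataset), an evaluation metric
# (accuracy), and let the loop do the "hard work" of picking the best prompt.

examples = [
    ("I love this product!", "positive"),
    ("Terrible, it broke after a day.", "negative"),
    ("Absolutely fantastic service.", "positive"),
    ("Worst purchase I've ever made.", "negative"),
]

# Candidate prompt templates the optimizer searches over.
candidate_prompts = [
    "Classify the sentiment of this review: {text}",
    "Is the following review positive or negative? {text}",
    "You are a sentiment analyst. Label this review positive or negative: {text}",
]

def call_llm(prompt: str) -> str:
    """Stand-in for a real LLM call (e.g. an OpenAI or Anthropic client).
    A crude keyword heuristic so the sketch runs without an API key."""
    positive_words = ("love", "fantastic", "great")
    return "positive" if any(w in prompt.lower() for w in positive_words) else "negative"

def accuracy(template: str) -> float:
    """Score a prompt template against the labeled examples."""
    hits = sum(
        call_llm(template.format(text=text)) == label
        for text, label in examples
    )
    return hits / len(examples)

# Score every candidate prompt and keep the best-performing one.
best_prompt = max(candidate_prompts, key=accuracy)
print(best_prompt, accuracy(best_prompt))
```

Real optimizers like DSPy go further – generating and mutating candidate prompts automatically rather than choosing from a hand-written list – but the loop of “propose, evaluate against a metric, keep the winner” is the same.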


Before getting into the AI space, I co-founded a growth marketing agency called Ladder in 2014, back when digital was a rounding error versus TV ad budgets, and most marketers were creative rather than technical or data-driven. Fast forward to today, and the majority of advertising is digital. The title ‘growth hacker’ never became widespread, but the average marketer can now build a landing page and knows enough statistics to conclude an A/B test. So it will go with AI – over time the term ‘prompt engineer’ will fade away, as all marketers will be expected to work effectively with AI.

Source: The Economist

Humans need prompting too

It’s tempting to think that prompt engineering will become obsolete as AI systems become more advanced, perhaps even reaching superintelligence. However, even the most intelligent humans in any organization still require a form of “prompting” to perform effectively.

Consider the smartest employee in your company. No matter how brilliant or capable they are, they still need direction and guidance from management to align their efforts with business objectives. They need input from HR to understand company policies and expectations. They require feedback from legal to ensure they’re operating within regulatory boundaries. And they benefit from regular performance evaluations to help them grow and improve.


In many ways, this ongoing “prompting” of human employees is not so different from the prompting we give to AI systems today. It’s all about providing the right inputs, context, and feedback to help the intelligence – whether artificial or biological – perform in line with the organization’s goals.

As AI systems grow more sophisticated, the nature of prompting may change, but the fundamental need to guide and direct their efforts will remain. In this sense, working with AI may come to resemble the management of human employees more than traditional computer programming. The most successful organizations will likely be those that can effectively “prompt” both their human and AI resources, aligning them to work together towards common objectives.

One day, most of your coworkers will be AI

As language models and other AI systems grow more advanced, they are increasingly able to simulate human-like intelligence and capabilities. In effect, they are becoming more and more like virtual employees or coworkers. This trend is likely to continue and accelerate in the coming years, to the point where most of your messages in Slack are with AIs.


As AI coworkers become prevalent, the skills needed to work effectively with them will start to converge with the skills needed to manage human employees. Many of the best practices for managing AI – providing clear direction, setting expectations, offering feedback, aligning efforts with business goals – are the same practices that have worked for decades in managing people.

In this future, AI management best practices may fit more naturally within the domain of MBA programs than computer science departments. The day-to-day work of interacting with AI will increasingly resemble the work of a mid-level manager overseeing a team. It will involve coordinating the efforts of human domain experts and AI agents, ensuring smooth collaboration and alignment towards common objectives.

Business leaders and professionals across all industries should start preparing for this future now by embracing AI as a tool to augment and collaborate with human intelligence. The most successful individuals and organizations will be those that can effectively combine the unique strengths of human and artificial intelligence – the creativity, empathy and judgment of humans with the speed, scale and analytical power of AI.

Ultimately, working with AI will be less about technical skills like coding or data science, and more about the timeless skills of effective leadership, communication, and collaboration. By starting to cultivate these skills in the context of AI now, businesses and individuals can position themselves for success in a future where artificial intelligence is a ubiquitous part of the workforce.

To keep up with Michael’s latest thoughts on AI and more, you can follow him on LinkedIn and Twitter. You can purchase his book, Prompt Engineering for Generative AI: Future-Proof Inputs for Reliable AI Outputs here.