Good morning, everyone!
Today, we’re exploring a timely question: Is prompting a skill we need to master or just a temporary necessity? With the evolution of LLMs (from GPT-3 to GPT-4o and now o1), we see a shift in how we interact with these models.
Prompting is still here, but its complexity may be short-lived. Let’s explore how the landscape is changing and what it means for those of us who work (or interact) with AI daily.
Are Prompting Techniques Here to Stay?
Prompting, especially the “advanced techniques” you might have heard about — like “prompt chaining,” “few-shot learning,” or “chain-of-thought” — is under some scrutiny. We’ve been vocal about it in past pieces, critiquing the use of overly complex techniques. Why? Because LLMs are evolving. They’re learning to adapt and understand us better without requiring complex prompt engineering.
💡Prompt Chaining: Breaking down a complex task into smaller, sequential prompts to get a more accurate answer. For example, first ask for a list of ideas, then refine those ideas one step at a time.
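To make the idea concrete, here is a minimal sketch of prompt chaining in Python. The `call_llm` function is a hypothetical stand-in for any real LLM API call (here it returns canned text so the example runs on its own); the point is the structure: each prompt's output feeds into the next prompt.

```python
# Minimal prompt-chaining sketch. `call_llm` is a hypothetical stand-in
# for a real LLM API call; it returns canned text so the example runs.
def call_llm(prompt: str) -> str:
    canned = {
        "List three blog post ideas about remote work.":
            "1. Async communication\n2. Home office setups\n3. Remote onboarding",
    }
    # Fall back to a generic response for any follow-up prompt.
    return canned.get(prompt, f"Refined: {prompt}")

def prompt_chain(steps):
    """Run prompts sequentially, feeding each result into the next prompt."""
    result = ""
    for make_prompt in steps:
        result = call_llm(make_prompt(result))
    return result

final = prompt_chain([
    lambda _: "List three blog post ideas about remote work.",
    lambda ideas: f"Pick the strongest idea from this list and expand it:\n{ideas}",
])
print(final)
```

With a real model behind `call_llm`, each step would get a more focused context than one monolithic prompt — which is exactly the kind of scaffolding newer models increasingly do on their own.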
As models improve, they require less complex prompting and more natural communication. Advanced techniques will likely phase out as LLMs learn to infer more from basic inputs. OpenAI’s o1 series is already showing this shift in action. With improved reasoning capabilities, the need for complex instructions is decreasing.
Think back to the early days of the internet — there were so many books and courses on “how to use the internet.” It was complex, new, and people needed to learn the ropes. But today, nobody needs a course on how to use Google; it’s intuitive and straightforward.
The same will happen with prompting and LLMs. Right now, learning advanced prompting techniques is helpful, but LLMs will keep adapting to us, just like navigating the internet became second nature. We will instinctively know how to use them, just as we do with Google.
Does this mean prompting is disappearing? No. But its importance will shift from mastering specific techniques to simply being clear about what you want. Think of it like communicating with a colleague: you don’t need to micromanage, but you still need to communicate clearly. Just like colleagues, LLMs are not in your head (yet).
So, is Prompting a Skill Worth Learning?
Yes and no. Right now, knowing how to prompt effectively can significantly enhance your experience with LLMs. But this skill is evolving.
For example, ChatGPT, with its memory feature (automatically saving relevant information about you), will eventually know your style, preferred sources, and even the projects you’re working on. It’s similar to working with a team member who knows your habits better than you do. While it’s helpful to be good at prompting now, this skill might not be as crucial soon.
The takeaway? Stay informed about how LLMs are developing. Basic skills are useful today, but don’t stress too much about mastering “advanced techniques” — they’re likely to fade away as models better understand us. Case in point: “chain-of-thought,” arguably the best prompting technique to date, is already fading away with o1.
A Question Remains… Do You Need a Prompt Engineer?
There’s been a lot of buzz around the idea of “prompt engineers.” While they can be helpful, especially in specific contexts like building AI-driven apps, they aren’t always necessary. Many developers or hobbyists can handle prompting with just a bit of experimentation.
💡Prompt Engineer: A specialist who crafts detailed and specific instructions for LLMs to get the best possible output. Often required for niche, high-stakes tasks.
The real value isn’t in knowing how to write complex prompts — it’s in knowing what you want to achieve. Having good evaluation metrics and understanding how to measure the success of the AI’s outputs is more important. A “prompt engineer” (or just an experienced user) might help initially, but once the system is up and running, a regular developer can maintain and adjust the system’s performance.
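As a sketch of what “good evaluation metrics” can look like in practice, here is a toy harness that scores an output against simple, explicit checks. The sample output and the checks are made up for illustration; a real system would use domain-specific metrics and many test cases.

```python
# Toy evaluation harness: score an LLM output against explicit checks.
# The sample output and checks are invented for illustration only.
def evaluate(output: str, checks) -> float:
    """Return the fraction of checks the output passes."""
    passed = sum(1 for check in checks if check(output))
    return passed / len(checks)

checks = [
    lambda o: len(o.split()) <= 50,                   # stays concise
    lambda o: "refund" in o.lower(),                  # addresses the topic
    lambda o: not o.lower().startswith("as an ai"),   # no boilerplate opener
]

sample_output = "We have processed your refund; expect it within 5 days."
score = evaluate(sample_output, checks)
print(f"{score:.2f}")
```

The value of a harness like this is that anyone on the team can re-run it after a prompt or model change — which is why knowing what “success” looks like matters more than knowing prompt tricks.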
This brings us to the real game-changer: models like OpenAI’s o1. o1 uses test-time compute to refine its responses and can now “reason” about your goal rather than just following step-by-step instructions. Instead of telling the model how to achieve something, you’ll set a goal, and the model will generate the steps for you.
For example, instead of prompting “write an email campaign for a new product launch, then draft a social media strategy, and finally outline the timeline for each phase,” you could simply prompt “create a comprehensive launch plan for this product.” The o1 model would generate the entire roadmap, including email drafts, social media content, and a detailed timeline. This shifts the dynamic: the model starts planning and executing tasks for you, not just responding to your step-by-step instructions.
💡Test-Time Compute: This allows models to perform additional reasoning during response generation, improving output quality by thinking longer and harder about complex tasks.
As LLMs like o1 evolve, we’ll see more of this shift from user-defined steps to model-driven execution. Soon, we won’t need to know how to prompt for every specific task — the model will do most of the heavy lifting.
One final point of clarification: being good at prompting doesn’t make someone an AI expert. There’s a big difference between wrapping an API with a prompt and understanding the underlying technology.
Building a true AI-powered app requires deep knowledge of machine learning, not just prompting skills. In most cases, apps using LLMs aren’t “AI apps”; they’re just applications that call an API to get a language model response. It’s important to differentiate between prompt crafting and actual AI expertise, though AI expertise isn’t required to build a powerful app leveraging LLMs.
In conclusion… (tl;dr)
Prompting isn’t going away, but how we interact with LLMs is about to get much simpler. Advanced prompting techniques are a temporary necessity. Soon, AI will understand you better, and the need for complex prompts will diminish. The key is to stay informed, experiment, and know what you want to achieve.
We’d love to hear your thoughts: Do you agree? Disagree? Are you investing time in learning advanced prompting, or will AI soon make it obsolete? Let us know!