A Guide to AI Prompt Engineering

Jeroen Seynhaeve, 2025-05-14

What is AI Prompt Engineering?

Prompt engineering is the practice of designing and refining the inputs given to artificial intelligence models—especially large language models (LLMs) like OpenAI’s ChatGPT and Google’s Gemini—to elicit accurate, useful, and contextually appropriate outputs.

As AI systems become more integrated into everyday tools, prompt engineering has emerged as a critical skill that bridges the gap between human intent and machine understanding. It plays a vital role in maximising the effectiveness of AI across applications such as content generation, customer support, data analysis, education, and software development.

By crafting well-structured prompts, users can guide AI models to perform specific tasks, follow formats, adopt tones, or simulate expert roles, making prompt engineering an essential technique for unlocking the full potential of AI.
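The structuring described above can be sketched in code. The following is a minimal illustration, not a fixed API: the section names (role, task, format, tone) and the example values are hypothetical conventions chosen for this sketch.

```python
# A minimal sketch of a structured prompt template. The parts (role,
# task, output format, tone) mirror the elements a well-crafted prompt
# typically specifies; the wording is illustrative, not prescriptive.

def build_prompt(role: str, task: str, output_format: str, tone: str) -> str:
    """Assemble a structured prompt from its parts."""
    return (
        f"You are {role}.\n"
        f"Task: {task}\n"
        f"Respond in {output_format}, using a {tone} tone."
    )

prompt = build_prompt(
    role="an experienced tax advisor",
    task="Explain the difference between tax credits and tax deductions.",
    output_format="a bulleted list",
    tone="plain, non-technical",
)
print(prompt)
```

Templates like this make prompts reusable and easy to refine: each element (the simulated role, the task, the required format, the tone) can be adjusted independently while the overall structure stays constant.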

AI Prompt Engineering Best Practices

Google has published a guide to crafting effective inputs for large language models (LLMs). It explains how LLMs generate text by predicting the next token in a sequence, and how prompt engineering is the iterative process of refining inputs to elicit accurate outputs. The guide details a range of prompting techniques, including zero-shot, few-shot, system, role, contextual, step-back, chain-of-thought, self-consistency, tree-of-thoughts, and ReAct prompting. It also covers configuring LLM outputs by adjusting settings such as token limit, temperature, and sampling controls. Finally, it offers best practices for prompt engineering: provide examples, be specific about the desired output, prefer instructions over constraints, and experiment with input formats.
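Two of the ideas above can be made concrete with a short sketch: few-shot prompting (prepending labelled examples to steer the model) and temperature as a sampling control. This is an illustration under stated assumptions, not any vendor's API; the example pairs and token scores are made up, and real models operate over much larger vocabularies.

```python
import math
import random

def few_shot_prompt(examples: list[tuple[str, str]], query: str) -> str:
    """Few-shot prompting: prepend labelled input/output examples so the
    model can infer the task and format from the pattern."""
    shots = "\n".join(f"Input: {x}\nOutput: {y}" for x, y in examples)
    return f"{shots}\nInput: {query}\nOutput:"

def temperature_sample(logits: dict[str, float], temperature: float) -> str:
    """Sample a token from softmax(logits / temperature).

    Lower temperature sharpens the distribution (output approaches the
    single most likely token); higher temperature flattens it (output
    becomes more varied). The logits here are made-up scores."""
    scaled = {tok: score / temperature for tok, score in logits.items()}
    peak = max(scaled.values())  # subtract the max for numerical stability
    weights = {tok: math.exp(s - peak) for tok, s in scaled.items()}
    r = random.random() * sum(weights.values())
    for token, w in weights.items():
        r -= w
        if r <= 0:
            return token
    return token  # fallback for floating-point rounding

prompt = few_shot_prompt([("sea", "blue"), ("grass", "green")], "sky")
print(prompt)
```

At a very low temperature, `temperature_sample` almost always returns the highest-scoring token; raising the temperature spreads probability onto the alternatives, which is the trade-off between determinism and variety that the guide's sampling settings control.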