- 1) In-Context Learning (ICL) – learning from exemplars/instructions in the prompt
- 2) Zero-Shot – prompting without exemplars
- 3) Thought Generation – prompting the LLM to articulate reasoning
- 4) Decomposition – breaking down complex problems
- 5) Ensembling – using multiple prompts and aggregating outputs
- 6) Self-Criticism – having the LLM critique its own outputs
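Of the six categories, ensembling is perhaps the simplest to illustrate in code. The sketch below shows a self-consistency-style majority vote, assuming a hypothetical `call` function standing in for an LLM client sampled with temperature above zero:

```python
from collections import Counter

def self_consistency(call, prompt, n=5):
    """Ensembling sketch: sample the model n times and return the most common answer."""
    answers = [call(prompt) for _ in range(n)]
    return Counter(answers).most_common(1)[0][0]

# demo with a canned stub standing in for sampled LLM outputs (hypothetical)
_samples = iter(["4", "4", "5", "4", "5"])
majority = self_consistency(lambda p: next(_samples), "What is 2 + 2?", n=5)
```

The same aggregation step works over outputs from multiple distinct prompts, not just repeated samples of one prompt.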
For ICL, the report discusses key design decisions (exemplar quantity, ordering, label quality, format, and similarity to the test instance) that critically influence output quality. It also covers exemplar-selection techniques such as K-Nearest Neighbor (KNN).
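KNN exemplar selection picks the labeled examples most similar to the test input and places them in the prompt. A minimal sketch, using a toy bag-of-words embedding in place of the sentence encoder a real system would use:

```python
import math
import re
from collections import Counter

def embed(text):
    # toy bag-of-words embedding; a real system would use a sentence encoder
    return Counter(re.findall(r"\w+", text.lower()))

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def knn_exemplars(query, pool, k=2):
    """Select the k labeled exemplars most similar to the query."""
    q = embed(query)
    return sorted(pool, key=lambda ex: cosine(q, embed(ex["input"])), reverse=True)[:k]

pool = [
    {"input": "great movie, loved it", "label": "positive"},
    {"input": "terrible plot and acting", "label": "negative"},
    {"input": "the weather is sunny", "label": "neutral"},
]
picked = knn_exemplars("loved the acting, great film", pool, k=2)
prompt = "\n".join(f"Input: {e['input']}\nLabel: {e['label']}" for e in picked)
```

The selected exemplars are then formatted into the prompt ahead of the test input, which ties the similarity design decision directly to output quality.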
The report extends the taxonomy to multilingual prompting, discussing techniques such as translate-first prompting and cross-lingual ICL. It also covers multimodal prompting spanning image, audio, video, segmentation, and 3D modalities.
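Translate-first prompting is a two-call pattern: translate the input into English, then run the task on the translation. A sketch, where `call` is a hypothetical prompt-to-completion function:

```python
def translate_first(call, text, task_instruction):
    """Translate-first prompting: translate the input to English, then run the task."""
    english = call(f"Translate the following text to English:\n\n{text}")
    return call(f"{task_instruction}\n\nText: {english}")

# demo with a canned stub in place of a real model (hypothetical outputs)
def _stub(prompt):
    return "I love this film" if prompt.startswith("Translate") else "positive"

label = translate_first(_stub, "J'adore ce film",
                        "Classify the sentiment as positive or negative.")
```

The appeal of this pattern is that most LLMs perform best on English text, so the task prompt itself never needs to be localized.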
More complex techniques are also taxonomized, including agents that access external tools, code generation, and retrieval-augmented generation (RAG), along with evaluation techniques that use LLMs as judges.
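At its core, RAG retrieves relevant passages and prepends them to the prompt as context. A minimal sketch, using toy lexical word-overlap retrieval in place of a real embedding index:

```python
def retrieve(query, corpus, k=2):
    # toy lexical retrieval: rank passages by word overlap with the query
    q = set(query.lower().split())
    return sorted(corpus, key=lambda d: len(q & set(d.lower().split())),
                  reverse=True)[:k]

def build_rag_prompt(query, corpus, k=2):
    """Assemble a prompt that grounds the answer in retrieved context."""
    context = "\n".join(f"- {d}" for d in retrieve(query, corpus, k))
    return (f"Answer the question using only the context below.\n\n"
            f"Context:\n{context}\n\nQuestion: {query}")

corpus = [
    "Paris is the capital of France.",
    "The Moon orbits the Earth.",
    "Python is a programming language.",
]
prompt = build_rag_prompt("What is the capital of France?", corpus)
```

The assembled prompt is then sent to the model; restricting the answer to the retrieved context is what distinguishes RAG from plain prompting.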
Prompting issues such as security (prompt hacking), overconfidence, biases, and ambiguity are highlighted. Two case studies are presented: benchmarking prompting techniques on MMLU, and a prompt engineering exercise on entrapment detection.