AI tricks

=Prompt Engineering=
* [https://arxiv.org/abs/2406.06608 The Prompt Report: A Systematic Survey of Prompting Techniques]

==In-Context Learning==
* [https://arxiv.org/abs/2005.14165 Language Models are Few-Shot Learners]
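
In-context learning places labeled demonstrations directly in the prompt, and the model continues the pattern without any weight updates. A minimal sketch, assuming a generic <code>llm()</code> completion callable (a hypothetical stand-in, not a specific API):

<syntaxhighlight lang="python">
def llm(prompt: str) -> str:
    # Hypothetical stand-in for any text-completion API; replace with a real call.
    raise NotImplementedError

# Labeled demonstrations ("shots") placed directly in the context.
EXAMPLES = [
    ("I loved this movie!", "positive"),
    ("Utterly boring and far too long.", "negative"),
]

def few_shot_classify(text: str) -> str:
    # Build the prompt from the demonstrations, then append the new query.
    shots = "\n".join(f"Review: {x}\nSentiment: {y}" for x, y in EXAMPLES)
    prompt = f"{shots}\nReview: {text}\nSentiment:"
    return llm(prompt).strip()
</syntaxhighlight>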

==Chain of Thought (CoT)==
==="Let's think step-by-step"===
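
The cue phrase comes from [https://arxiv.org/abs/2205.11916 Large Language Models are Zero-Shot Reasoners]: appending it makes the model write out intermediate reasoning before the answer, which a second call then distills into a final answer. A minimal sketch of that two-stage recipe (same hypothetical <code>llm()</code> stand-in as above):

<syntaxhighlight lang="python">
def llm(prompt: str) -> str:
    raise NotImplementedError  # hypothetical completion API, as above

def chain_of_thought(question: str) -> str:
    # Stage 1: elicit intermediate reasoning with the cue phrase.
    prompt = f"Q: {question}\nA: Let's think step-by-step."
    reasoning = llm(prompt)
    # Stage 2: extract a final answer conditioned on that reasoning.
    followup = f"{prompt} {reasoning}\nTherefore, the answer is:"
    return llm(followup).strip()
</syntaxhighlight>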

==Multi-step==
===Tool-use, feedback, agentic===
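
A common skeleton underlying tool-use and agentic behavior: the model proposes an action, the harness executes it and feeds the observation back into the context, looping until the model commits to an answer. A minimal sketch; the <code>Action:</code>/<code>Observation:</code> format and the calculator tool are illustrative, not from any particular framework:

<syntaxhighlight lang="python">
def llm(prompt: str) -> str:
    raise NotImplementedError  # hypothetical completion API, as above

# Toy tool registry; eval() is unsafe and for demonstration only.
TOOLS = {"calculator": lambda expr: str(eval(expr))}

def agent(question: str, max_steps: int = 5) -> str:
    transcript = f"Question: {question}\n"
    for _ in range(max_steps):
        step = llm(transcript).strip()
        transcript += step + "\n"
        if step.startswith("Answer:"):
            return step[len("Answer:"):].strip()
        # Expected format: "Action: <tool>: <input>"
        parts = [s.strip() for s in step.split(":", 2)]
        if step.startswith("Action:") and len(parts) == 3:
            result = TOOLS.get(parts[1], lambda _: "unknown tool")(parts[2])
            transcript += f"Observation: {result}\n"  # feed result back in
    return "No answer within step budget."
</syntaxhighlight>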

==Retrieval-Augmented Generation (RAG)==
* [https://arxiv.org/abs/2005.11401 Retrieval-Augmented Generation for Knowledge-Intensive NLP Tasks]
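
The basic recipe: retrieve the passages most relevant to the query, place them in the context, and instruct the model to answer from them. A minimal sketch with a toy word-overlap retriever standing in for a real one (BM25, dense embeddings, etc.); all names are illustrative:

<syntaxhighlight lang="python">
def llm(prompt: str) -> str:
    raise NotImplementedError  # hypothetical completion API, as above

CORPUS = [
    "The Eiffel Tower is 330 metres tall.",
    "Mount Everest is 8,849 metres above sea level.",
]

def retrieve(query: str, k: int = 1) -> list[str]:
    # Rank documents by word overlap with the query (toy retriever).
    q = set(query.lower().split())
    scored = sorted(CORPUS, key=lambda d: -len(q & set(d.lower().split())))
    return scored[:k]

def rag_answer(query: str) -> str:
    # Paste the retrieved passages into the prompt as grounding context.
    context = "\n".join(retrieve(query))
    prompt = f"Context:\n{context}\n\nUsing only the context above, answer: {query}"
    return llm(prompt)
</syntaxhighlight>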
+ | |||
+ | =Generation= | ||
+ | * [https://github.com/Zhen-Tan-dmml/LLM4Annotation Large Language Models for Data Annotation and Synthesis: A Survey] |
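
A common annotation loop: the LLM labels raw examples, and outputs are kept only if they parse into the allowed label set. A minimal sketch; the label set and prompt wording are illustrative:

<syntaxhighlight lang="python">
def llm(prompt: str) -> str:
    raise NotImplementedError  # hypothetical completion API, as above

LABELS = {"positive", "negative", "neutral"}  # illustrative label set

def annotate(texts: list[str]) -> list[tuple[str, str]]:
    dataset = []
    for text in texts:
        prompt = (
            f"Label the sentiment of the text as one of {sorted(LABELS)}.\n"
            f"Text: {text}\nLabel:"
        )
        label = llm(prompt).strip().lower()
        if label in LABELS:  # simple validation; drop unparseable outputs
            dataset.append((text, label))
    return dataset
</syntaxhighlight>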