AI tricks

=Prompt Engineering=
* 2025-03: Prompting Science Report 1: Prompt Engineering is Complicated and Contingent
* 2024-06: The Prompt Report: A Systematic Survey of Prompting Techniques

==In-Context Learning==
* 2020-05: [https://arxiv.org/abs/2005.14165 Language Models are Few-Shot Learners]
* 2025-03: [https://arxiv.org/abs/2503.08030 Learning to Search Effective Example Sequences for In-Context Learning]
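
For illustration (not taken from the papers above), a minimal few-shot prompt simply places worked input/output pairs in the context and lets the model complete the pattern; the sentiment task and example strings here are invented:

<syntaxhighlight lang="python">
# Minimal few-shot (in-context learning) prompt: the model is shown
# worked input -> output pairs and asked to complete one more.
# The sentiment-labeling task and examples are invented for illustration.

examples = [
    ("The battery lasts all day and the screen is gorgeous.", "positive"),
    ("It stopped working after two weeks.", "negative"),
    ("Does exactly what it says, nothing more.", "neutral"),
]

query = "Shipping was slow, but the product itself is excellent."

prompt_lines = ["Label the sentiment of each review as positive, negative, or neutral.", ""]
for text, label in examples:
    prompt_lines.append(f"Review: {text}")
    prompt_lines.append(f"Sentiment: {label}")
    prompt_lines.append("")
prompt_lines.append(f"Review: {query}")
prompt_lines.append("Sentiment:")  # the model continues from here

prompt = "\n".join(prompt_lines)
print(prompt)
</syntaxhighlight>
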
==Chain of Thought (CoT)==

==="Let's think step-by-step"===
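
As a hedged sketch of the zero-shot variant: appending the trigger phrase makes the model emit intermediate reasoning before its final answer. The question is invented and call_llm() is a hypothetical helper:

<syntaxhighlight lang="python">
# Zero-shot chain-of-thought: append the trigger phrase so the model
# produces intermediate reasoning before its final answer.
# The question and the call_llm() helper are placeholders for illustration.

def build_cot_prompt(question: str) -> str:
    return f"Q: {question}\nA: Let's think step-by-step."

question = "A train travels 60 km in 45 minutes. What is its average speed in km/h?"
prompt = build_cot_prompt(question)
print(prompt)

# With any chat/completion API you would send `prompt` and then parse the
# final line of the response for the answer, e.g.:
#   response = call_llm(prompt)          # hypothetical helper
#   answer = response.splitlines()[-1]
</syntaxhighlight>
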
===Multi-step===

===Tool-use, feedback, agentic===

==Retrieval-Augmented Generation (RAG)==
* [https://arxiv.org/abs/2005.11401 Retrieval-Augmented Generation for Knowledge-Intensive NLP Tasks]
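
A toy sketch of the retrieve-then-generate loop, with naive keyword overlap standing in for a real embedding index; the corpus and prompt template are invented:

<syntaxhighlight lang="python">
# Toy retrieval-augmented generation: retrieve the most relevant passages,
# then place them in the prompt as grounding context.
# The corpus, scoring, and prompt template are invented for illustration;
# a real system would use embeddings and a vector index instead.

corpus = {
    "doc1": "The X-ray beamline operates at energies between 2 and 20 keV.",
    "doc2": "Sample changes require the hutch to be interlocked first.",
    "doc3": "The cafeteria is open from 7 am to 3 pm on weekdays.",
}

def retrieve(query: str, k: int = 2) -> list[str]:
    """Rank documents by naive keyword overlap with the query."""
    q_words = set(query.lower().split())
    scored = sorted(
        corpus.items(),
        key=lambda kv: len(q_words & set(kv[1].lower().split())),
        reverse=True,
    )
    return [text for _, text in scored[:k]]

def build_rag_prompt(query: str) -> str:
    context = "\n".join(f"- {passage}" for passage in retrieve(query))
    return (
        "Answer the question using only the context below.\n"
        f"Context:\n{context}\n\n"
        f"Question: {query}\nAnswer:"
    )

print(build_rag_prompt("What energy range does the beamline cover?"))
</syntaxhighlight>
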
+ | |||
+ | ==Input/Output Formats== | ||
+ | * 2024-08: [https://arxiv.org/abs/2408.08656 LLMs Are Biased Towards Output Formats! Systematically Evaluating and Mitigating Output Format Bias of LLMs] | ||
+ | * 2024-11: [https://arxiv.org/abs/2411.10541 Does Prompt Formatting Have Any Impact on LLM Performance?] | ||
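
One simple way to probe format sensitivity, sketched here with invented wording and a hypothetical call_llm() helper, is to pose the same question under several output-format instructions and compare the normalized answers:

<syntaxhighlight lang="python">
# The same question posed under different output-format instructions.
# Comparing answers across formats is one way to check whether accuracy
# depends on the requested format rather than the content.
# Formats and wording are illustrative; call_llm() is a hypothetical helper.

question = "Which planet is largest: Mars, Jupiter, or Venus?"

format_instructions = {
    "plain":  "Answer with a single word.",
    "json":   'Answer as JSON: {"answer": "<planet>"}.',
    "bullet": "Answer as a single markdown bullet point.",
}

prompts = {
    name: f"{question}\n{instruction}"
    for name, instruction in format_instructions.items()
}

for name, prompt in prompts.items():
    print(f"--- {name} ---\n{prompt}\n")

# In practice you would send each prompt, normalize the answers
# (e.g. json.loads for the JSON variant), and flag disagreements:
#   answers = {name: call_llm(p) for name, p in prompts.items()}  # hypothetical
</syntaxhighlight>
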
+ | |||
+ | ==Position Bias== | ||
+ | * 2024-07: [https://arxiv.org/abs/2407.01100 Eliminating Position Bias of Language Models: A Mechanistic Approach] | ||
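
A rough behavioral probe (not the mechanistic approach of the paper above): present the same candidate passages in every order and check whether the cited slot, rather than the passage content, drives the answer. Passages and wording are invented:

<syntaxhighlight lang="python">
# Probe for position bias: ask the same question with the candidate
# passages in every order; a position-biased model's choice follows the
# slot rather than the passage. Passages and wording are illustrative.

from itertools import permutations

passages = {
    "A": "The meeting was moved to Tuesday.",
    "B": "The meeting is still on Monday as planned.",
}

question = "When is the meeting?"

for order in permutations(passages):
    numbered = "\n".join(
        f"[{i + 1}] {passages[key]}" for i, key in enumerate(order)
    )
    prompt = (
        f"{numbered}\n\n"
        f"Question: {question}\n"
        "Cite the passage number that supports your answer."
    )
    print(f"--- order {order} ---\n{prompt}\n")

# Sending each prompt and tallying which slot gets cited (regardless of
# which passage occupies it) gives a rough estimate of position bias.
</syntaxhighlight>
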
=Generation=
* [https://github.com/Zhen-Tan-dmml/LLM4Annotation Large Language Models for Data Annotation and Synthesis: A Survey]
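
An illustrative sketch of LLM-based data annotation, with invented items, an invented label set, and a hypothetical call_llm() helper:

<syntaxhighlight lang="python">
# Using an LLM as a data annotator: one structured labeling prompt per
# unlabeled item. Items, label set, and the call_llm() helper are
# illustrative placeholders, not the survey's protocol.

unlabeled = [
    "Refund still not processed after three weeks.",
    "Love the new dashboard layout!",
]

labels = ["complaint", "praise", "question"]

def annotation_prompt(text: str) -> str:
    return (
        f"Classify the message into one of {labels}.\n"
        f'Message: "{text}"\n'
        'Reply as JSON: {"label": "<one of the allowed labels>"}'
    )

for item in unlabeled:
    print(annotation_prompt(item), end="\n\n")

# With a real model you would parse each reply, e.g.:
#   label = json.loads(call_llm(annotation_prompt(item)))["label"]  # hypothetical
</syntaxhighlight>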