AI tricks

From GISAXS

=Prompt Engineering=
==In-Context Learning==
* 2020-05: [https://arxiv.org/abs/2005.14165 Language Models are Few-Shot Learners]
* 2025-03: [https://arxiv.org/abs/2503.08030 Learning to Search Effective Example Sequences for In-Context Learning]
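The basic few-shot setup from the papers above can be sketched as simple prompt assembly: labeled demonstrations are concatenated ahead of the new query, and the model is expected to continue the pattern. A minimal sketch, assuming any completion-style LLM API would consume the resulting string:

```python
def build_few_shot_prompt(examples, query):
    """Concatenate labeled demonstrations followed by the new query —
    the basic in-context learning setup: no weight updates, the
    examples condition the model entirely through the prompt."""
    lines = [f"Input: {x}\nOutput: {y}" for x, y in examples]
    lines.append(f"Input: {query}\nOutput:")
    return "\n\n".join(lines)

examples = [("great movie!", "positive"), ("terrible plot", "negative")]
prompt = build_few_shot_prompt(examples, "loved every minute")
print(prompt)
```

The second paper above studies how the choice and ordering of `examples` affects accuracy; this sketch just takes them in the order given.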
  
 
==Chain of Thought (CoT)==
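A common zero-shot variant of chain-of-thought prompting is to append a reasoning cue so the model writes out intermediate steps before its answer. A minimal sketch (the cue phrase is one widely-used choice, not the only one):

```python
def cot_prompt(question):
    # Zero-shot chain-of-thought: append a cue that elicits
    # step-by-step reasoning before the final answer.
    return f"Q: {question}\nA: Let's think step by step."

p = cot_prompt("If I have 3 apples and buy 2 more, how many do I have?")
print(p)
```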
 
==Retrieval-Augmented Generation (RAG)==
* [https://arxiv.org/abs/2005.11401 Retrieval-Augmented Generation for Knowledge-Intensive NLP Tasks]
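The RAG pattern is: retrieve relevant documents, then prepend them as context for the generator. A minimal sketch using a toy word-overlap retriever (real systems use dense embeddings, as in the paper above):

```python
def retrieve(corpus, query, k=2):
    # Toy retriever: rank documents by word overlap with the query.
    q = set(query.lower().split())
    scored = sorted(corpus,
                    key=lambda d: len(q & set(d.lower().split())),
                    reverse=True)
    return scored[:k]

def rag_prompt(corpus, question):
    # Prepend the retrieved passages as grounding context.
    context = "\n".join(retrieve(corpus, question))
    return f"Context:\n{context}\n\nQuestion: {question}\nAnswer:"

corpus = [
    "The Eiffel Tower is in Paris.",
    "Python is a programming language.",
    "Paris is the capital of France.",
]
prompt = rag_prompt(corpus, "Where is the Eiffel Tower?")
print(prompt)
```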
==Input/Output Formats==
* 2024-08: [https://arxiv.org/abs/2408.08656 LLMs Are Biased Towards Output Formats! Systematically Evaluating and Mitigating Output Format Bias of LLMs]
* 2024-11: [https://arxiv.org/abs/2411.10541 Does Prompt Formatting Have Any Impact on LLM Performance?]
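The papers above show that answer quality can shift with the requested output format. A minimal harness for probing this renders the same question under several format instructions, so the answers can be compared; the format names and instructions here are illustrative, not from the papers:

```python
FORMATS = {
    "plain": "Answer with the option text.",
    "letter": "Answer with the option letter only.",
    "json": 'Answer as JSON: {"answer": "<letter>"}.',
}

def format_prompts(question, options):
    """Render one question under several output-format instructions,
    so format sensitivity can be measured by comparing the answers."""
    body = question + "\n" + "\n".join(
        f"{c}. {o}" for c, o in zip("ABC", options))
    return {name: f"{body}\n{instr}" for name, instr in FORMATS.items()}

prompts = format_prompts("Which is a prime number?", ["4", "7", "9"])
for name, p in prompts.items():
    print(f"--- {name} ---\n{p}")
```

Feeding each variant to the same model and checking whether the answers agree gives a rough measure of its format bias.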
==Position Bias==
* 2024-07: [https://arxiv.org/abs/2407.01100 Eliminating Position Bias of Language Models: A Mechanistic Approach]
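The paper above removes position bias mechanistically, inside the model. A simpler black-box baseline is to average each candidate's score over all orderings, so any position-dependent bonus cancels out. A minimal sketch with a deliberately biased toy scorer standing in for an LLM:

```python
import itertools

def debiased_choice(score, options):
    """Average each option's score over all orderings — a simple
    permutation baseline against position bias. Each option occupies
    each position equally often, so position bonuses cancel."""
    totals = {o: 0.0 for o in options}
    perms = list(itertools.permutations(options))
    for perm in perms:
        for pos, o in enumerate(perm):
            totals[o] += score(perm, pos)
    return max(totals, key=lambda o: totals[o] / len(perms))

# Toy scorer: strong first-position bias plus a true preference for "b".
def biased_score(perm, pos):
    base = {"a": 0.1, "b": 0.3, "c": 0.1}[perm[pos]]
    return base + (1.0 if pos == 0 else 0.0)

print(debiased_choice(biased_score, ["a", "b", "c"]))  # → b
```

Permutation averaging costs O(n!) model calls, which is why the mechanistic approach in the paper is attractive.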
  
 
=Generation=
* [https://github.com/Zhen-Tan-dmml/LLM4Annotation Large Language Models for Data Annotation and Synthesis: A Survey]
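The survey above covers using LLMs as data annotators. A common pipeline shape is: label raw texts with the model, then keep only confident annotations. A minimal sketch, with a hypothetical keyword heuristic standing in for the LLM call:

```python
def annotate(texts, label_fn, confidence_threshold=0.5):
    """Label raw texts and keep only confident annotations —
    the basic filtering step in LLM-based annotation pipelines."""
    dataset = []
    for t in texts:
        label, conf = label_fn(t)
        if conf >= confidence_threshold:
            dataset.append({"text": t, "label": label, "confidence": conf})
    return dataset

# Stand-in for an LLM call: returns (label, confidence).
def label_fn(text):
    return ("positive", 0.9) if "good" in text else ("negative", 0.4)

data = annotate(["good film", "boring film"], label_fn)
print(data)
```

Here the low-confidence "boring film" example is dropped rather than kept with a possibly wrong label, trading dataset size for label quality.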

Revision as of 08:07, 8 May 2025