==Position Bias==
* 2023-07: [https://arxiv.org/abs/2307.03172 Lost in the Middle: How Language Models Use Long Contexts]
* Benchmarks for testing models (see the minimal sketch after this list):
** [https://github.com/gkamradt/LLMTest_NeedleInAHaystack Needle-in-a-Haystack tests]
** 2023-08: [https://arxiv.org/abs/2308.14508 LongBench: A Bilingual, Multitask Benchmark for Long Context Understanding]
** 2024-02: [https://arxiv.org/abs/2402.13718 ∞Bench: Extending Long Context Evaluation Beyond 100K Tokens]
** 2024-07: [https://arxiv.org/abs/2407.16695 Stress-Testing Long-Context Language Models with Lifelong ICL and Task Haystack]
* 2024-07: [https://arxiv.org/abs/2407.01100 Eliminating Position Bias of Language Models: A Mechanistic Approach]
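
The needle-in-a-haystack idea is simple enough to sketch directly. The following is a minimal illustrative probe, not the harness from any of the benchmarks above: it buries a known "needle" sentence at evenly spaced depths inside filler text and checks whether the model retrieves it, so retrieval accuracy as a function of depth exposes position bias. <code>query_model</code>, the needle, and the filler strings are hypothetical placeholders.

<pre>
# Minimal needle-in-a-haystack / position-bias probe (illustrative sketch).
# query_model() is a hypothetical stand-in for the LLM API under test;
# the needle, question, and filler strings are made-up examples.

NEEDLE = "The secret passphrase is 'blue-harvest-42'."
QUESTION = "What is the secret passphrase stated in the text above?"
FILLER = "The sky was clear and the grass was green. " * 400

def query_model(prompt: str) -> str:
    raise NotImplementedError("Call the model under test here.")

def depth_sweep(num_depths: int = 10) -> dict:
    """Bury the needle at evenly spaced depths and test retrieval."""
    results = {}
    for i in range(num_depths + 1):
        depth = i / num_depths      # 0.0 = start of context, 1.0 = end
        cut = int(len(FILLER) * depth)
        context = FILLER[:cut] + NEEDLE + " " + FILLER[cut:]
        answer = query_model(context + "\n\n" + QUESTION)
        results[depth] = "blue-harvest-42" in answer  # retrieval success?
    return results
</pre>

Plotting success against depth typically reproduces the U-shaped curve reported in "Lost in the Middle": retrieval is strongest when the needle sits near the start or end of the context and weakest in the middle.
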
=Generation=
 
* [https://github.com/Zhen-Tan-dmml/LLM4Annotation Large Language Models for Data Annotation and Synthesis: A Survey]
 
