AI tricks

==Prompt Engineering==
 
* 2025-03: [https://arxiv.org/abs/2503.04818 Prompting Science Report 1: Prompt Engineering is Complicated and Contingent]
* 2024-06: [https://arxiv.org/abs/2406.06608 The Prompt Report: A Systematic Survey of Prompting Techniques]
* 2025-06: [https://arxiv.org/abs/2506.05614 Which Prompting Technique Should I Use? An Empirical Investigation of Prompting Techniques for Software Engineering Tasks]
  
 
==In-Context Learning==
  
 
==Chain of Thought (CoT)==

* 2022-05: [https://arxiv.org/abs/2205.11916 Large Language Models are Zero-Shot Reasoners] "Let's think step-by-step" (see the sketch after this list)
* 2024-06: [https://arxiv.org/abs/2406.07496 TextGrad: Automatic "Differentiation" via Text]
* 2025-06: [https://papers.ssrn.com/sol3/papers.cfm?abstract_id=5285532 Prompting Science Report 2: The Decreasing Value of Chain of Thought in Prompting]
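A minimal sketch of the zero-shot CoT trick from the 2022-05 paper above: append the trigger phrase to elicit reasoning, then prompt again to extract the final answer. The <code>call_llm</code> stub is an assumption standing in for whatever model API is available, not a specific library.

<pre>
# Zero-shot chain-of-thought via two-stage prompting (per Kojima et al. 2022).
# call_llm is a hypothetical stub: replace it with an actual model call.

def call_llm(prompt: str) -> str:
    raise NotImplementedError("plug in your LLM API here")

def zero_shot_cot(question: str) -> str:
    # Stage 1: elicit step-by-step reasoning with the trigger phrase.
    reasoning = call_llm(f"Q: {question}\nA: Let's think step by step.")
    # Stage 2: ask for the final answer, conditioned on the generated reasoning.
    answer = call_llm(
        f"Q: {question}\nA: Let's think step by step. {reasoning}\n"
        "Therefore, the answer is"
    )
    return answer.strip()
</pre>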
  
 
==Multi-step==

==Tool-use, feedback, agentic==

==Retrieval-Augmented Generation (RAG)==

==Input/Output Formats==

==Position Bias==
 
* 2024-11: [https://arxiv.org/abs/2411.01101 Self-Consistency Falls Short! The Adverse Effects of Positional Bias on Long-Context Problems]
* 2025-02: [https://arxiv.org/abs/2502.01951 On the Emergence of Position Bias in Transformers]
* '''Testing models:''' (a minimal probe sketch follows this list)
** [https://github.com/gkamradt/LLMTest_NeedleInAHaystack?utm_source=chatgpt.com Needle-in-a-Haystack tests]
** 2023-08: [https://arxiv.org/abs/2308.14508 LongBench: A Bilingual, Multitask Benchmark for Long Context Understanding]
** 2024-07: [https://arxiv.org/abs/2407.16695 Stress-Testing Long-Context Language Models with Lifelong ICL and Task Haystack]
** 2025-04: [https://arxiv.org/abs/2504.04150 Reasoning on Multiple Needles In A Haystack]
* '''Mitigation:'''
** 2023-10: [https://arxiv.org/abs/2310.01427 Attention Sorting Combats Recency Bias In Long Context Language Models]
** 2024-07: [https://arxiv.org/abs/2407.01100 Eliminating Position Bias of Language Models: A Mechanistic Approach]
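A minimal sketch of the needle-in-a-haystack probe referenced in the testing list above: plant a known fact at varying relative depths of a long filler context and check whether the model still retrieves it; depth-dependent accuracy is the signature of position bias. The <code>call_llm</code> stub and the needle/filler strings are assumptions for illustration, not part of any benchmark cited here.

<pre>
# Needle-in-a-haystack position-bias probe (illustrative sketch).
# call_llm is a hypothetical stub: replace it with an actual model call.

NEEDLE = "The magic number for this experiment is 7421."
QUESTION = "What is the magic number for this experiment?"
FILLER = "The quick brown fox jumps over the lazy dog. " * 2000  # long distractor text

def call_llm(prompt: str) -> str:
    raise NotImplementedError("plug in your LLM API here")

def haystack_prompt(depth: float) -> str:
    """Insert the needle at a relative depth (0.0 = start, 1.0 = end)."""
    cut = int(len(FILLER) * depth)
    context = FILLER[:cut] + NEEDLE + " " + FILLER[cut:]
    return f"{context}\n\nUsing only the context above: {QUESTION}"

# Sweep the needle across positions; uniform accuracy means little position bias,
# while failures clustered mid-context reproduce the "lost in the middle" effect.
for depth in (0.0, 0.25, 0.5, 0.75, 1.0):
    response = call_llm(haystack_prompt(depth))
    print(f"depth={depth:.2f}  correct={'7421' in response}")
</pre>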

==Generation==