AI tricks

==Prompt Engineering==
* 2024-06: [https://arxiv.org/abs/2406.06608 The Prompt Report: A Systematic Survey of Prompting Techniques] (basic variants sketched below)
* 2025-06: [https://arxiv.org/abs/2506.05614 Which Prompting Technique Should I Use? An Empirical Investigation of Prompting Techniques for Software Engineering Tasks]
* 2025-07: [https://arxiv.org/abs/2507.13334 A Survey of Context Engineering for Large Language Models]
* 2025-08: [https://papers.ssrn.com/sol3/papers.cfm?abstract_id=5375404 Prompting Science Report 3: I'll pay you or I'll kill you - but will you care?]
* 2025-08: OpenAI: [https://cookbook.openai.com/examples/gpt-5/gpt-5_prompting_guide GPT-5 prompting guide]
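
The surveys above catalog dozens of techniques; as a point of reference, here is a minimal sketch of three of the most common (zero-shot, few-shot, and zero-shot chain-of-thought) as plain prompt construction. The function names are invented for illustration and are not from any cited paper.

<syntaxhighlight lang="python">
# Illustrative only: three prompting techniques from the survey taxonomies,
# shown as plain prompt construction. Nothing here is taken from a specific
# paper's code.

def zero_shot(question: str) -> str:
    # Ask directly, with no demonstrations.
    return f"Answer the following question.\n\nQ: {question}\nA:"

def few_shot(question: str, examples: list[tuple[str, str]]) -> str:
    # Prepend worked (question, answer) pairs; the model infers the task
    # format from the demonstrations (in-context learning).
    demos = "\n\n".join(f"Q: {q}\nA: {a}" for q, a in examples)
    return f"{demos}\n\nQ: {question}\nA:"

def zero_shot_cot(question: str) -> str:
    # Elicit intermediate reasoning with a trigger phrase.
    return f"Q: {question}\nA: Let's think step by step."

if __name__ == "__main__":
    q = "A bat and a ball cost $1.10; the bat costs $1.00 more than the ball. How much is the ball?"
    print(zero_shot(q))
    print(few_shot(q, [("What is 2 + 2?", "4")]))
    print(zero_shot_cot(q))
</syntaxhighlight>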
  
 
==In-Context Learning==
 
* 2024-11: [https://arxiv.org/abs/2411.10541 Does Prompt Formatting Have Any Impact on LLM Performance?] (see the sketch below)
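
A minimal sketch of the kind of comparison that paper makes: the same task serialized as plain text, Markdown, and JSON, so performance can be compared across formats with the model held fixed. The task content and function names are invented for illustration.

<syntaxhighlight lang="python">
import json

# Illustrative only: one classification task rendered in three prompt formats.

TASK = {
    "instruction": "Classify the sentiment of the review as positive or negative.",
    "review": "The battery died after two days.",
}

def as_plain(task: dict) -> str:
    return f"{task['instruction']}\nReview: {task['review']}\nSentiment:"

def as_markdown(task: dict) -> str:
    return (f"## Instruction\n{task['instruction']}\n\n"
            f"## Review\n{task['review']}\n\n## Sentiment\n")

def as_json(task: dict) -> str:
    return json.dumps({**task, "sentiment": None}, indent=2)

if __name__ == "__main__":
    for fmt in (as_plain, as_markdown, as_json):
        print(f"--- {fmt.__name__} ---\n{fmt(TASK)}\n")
</syntaxhighlight>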
  
==Brittleness==
* 2025-03: [https://arxiv.org/abs/2503.01781 Cats Confuse Reasoning LLM: Query Agnostic Adversarial Triggers for Reasoning Models] (see the sketch below)
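
A minimal sketch of the evaluation idea behind the paper above: append a fixed, query-agnostic distractor to every problem and measure how often the answer flips. The query_llm placeholder stands in for whatever model call you actually use; it is not a real library function.

<syntaxhighlight lang="python">
# Illustrative only: a rough version of the trigger test, not the paper's code.

TRIGGER = "Interesting fact: cats sleep for most of their lives."

def query_llm(prompt: str) -> str:
    # Placeholder; wire this to a real model to run the test.
    raise NotImplementedError

def flip_rate(problems: list[str]) -> float:
    # Fraction of problems whose answer changes when the trigger is appended.
    flips = 0
    for p in problems:
        clean = query_llm(p)
        triggered = query_llm(f"{p}\n{TRIGGER}")
        flips += clean.strip() != triggered.strip()
    return flips / len(problems)
</syntaxhighlight>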
 
===Position Bias===
 
* 2023-07: [https://arxiv.org/abs/2307.03172 Lost in the Middle: How Language Models Use Long Contexts]
* 2024-11: [https://arxiv.org/abs/2411.01101 Self-Consistency Falls Short! The Adverse Effects of Positional Bias on Long-Context Problems]
* 2025-02: [https://arxiv.org/abs/2502.01951 On the Emergence of Position Bias in Transformers]
* 2025-07: [https://arxiv.org/abs/2507.22887 Where to show Demos in Your Prompt: A Positional Bias of In-Context Learning]
* '''Testing models:''' (see the sketch after this list)
** [https://github.com/gkamradt/LLMTest_NeedleInAHaystack Needle-in-a-Haystack tests]
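
A minimal needle-in-a-haystack sketch in the spirit of the linked repository (not its actual code): bury a known fact at several depths in filler text and check whether the model retrieves it at each depth. The query_llm placeholder again stands in for a real model call.

<syntaxhighlight lang="python">
# Illustrative only: position bias typically shows up as retrieval failures
# at middle depths ("lost in the middle").

NEEDLE = "The best thing to do in San Francisco is eat a sandwich in Dolores Park."
QUESTION = "What is the best thing to do in San Francisco?"
FILLER = "The sky was clear and the market was busy that morning. " * 400

def query_llm(prompt: str) -> str:
    # Placeholder; wire this to a real model to run the test.
    raise NotImplementedError

def haystack(depth: float, context_chars: int = 8000) -> str:
    # Insert the needle at a fractional depth (0.0 = start, 1.0 = end).
    body = FILLER[:context_chars]
    i = int(depth * len(body))
    return body[:i] + " " + NEEDLE + " " + body[i:]

def run(depths=(0.0, 0.25, 0.5, 0.75, 1.0)) -> dict[float, bool]:
    # True where the model recovers the needle at that depth.
    return {
        d: "Dolores Park" in query_llm(
            f"{haystack(d)}\n\nQuestion: {QUESTION}\nAnswer:")
        for d in depths
    }
</syntaxhighlight>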
