Exocortex
Revision as of 09:50, 13 February 2026
An exocortex is a hypothetical extension of human intelligence into external artificial machinery. One proposal is to use a swarm of AI agents to form a science exocortex.
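The agent-swarm idea can be sketched as a coordinator that delegates sub-tasks to specialized agents and stamps their results. This is a minimal illustration only; the class names and stub handlers are hypothetical and do not correspond to any published exocortex implementation.

```python
# Hypothetical sketch of an exocortex-style agent swarm: a coordinator
# routes each task to the agent whose skill matches, then returns the
# agent's labeled result. Handlers are stand-in stubs, not real AI calls.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Agent:
    name: str
    skill: str
    handle: Callable[[str], str]  # maps a task description to a result

class Coordinator:
    def __init__(self, agents: list[Agent]):
        # Index agents by skill for O(1) delegation.
        self.agents = {a.skill: a for a in agents}

    def delegate(self, skill: str, task: str) -> str:
        agent = self.agents.get(skill)
        if agent is None:
            raise KeyError(f"no agent with skill {skill!r}")
        return f"[{agent.name}] {agent.handle(task)}"

# Example swarm with two stub agents.
agents = [
    Agent("lit-bot", "literature", lambda t: f"found papers on {t}"),
    Agent("sim-bot", "simulation", lambda t: f"ran simulation of {t}"),
]
swarm = Coordinator(agents)
print(swarm.delegate("literature", "block copolymers"))
```

A real system would replace the lambda stubs with calls to AI models or tools, but the delegation pattern, routing tasks by capability and aggregating labeled outputs, stays the same.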
Human-Computer Interaction (HCI)
See also: Human Computer Interaction.
- fabric: open-source framework for augmenting humans using AI
Data visualization tools
Research
- 2025-03: The Cybernetic Teammate: A Field Experiment on Generative AI Reshaping Teamwork and Expertise
- 2025-10: Artificial Hivemind: The Open-Ended Homogeneity of Language Models (and Beyond)
- 2025-12: AI & Human Co-Improvement for Safer Co-Superintelligence
- 2025-12: Quantifying Human-AI Synergy
Extension
- Extended Mind Thesis: In practice, the mind extends into the physical world
Theory
- 1986: Marvin Minsky: The Society of Mind
- 2019-03: Scott Alexander: Translating Predictive Coding Into Perceptual Control
- 2025-03: Richard Ngo: Towards a scale-free theory of intelligent agency
- 2025-04: Richard Ngo: Well-foundedness as an organizing principle of healthy minds and societies
- 2025-05: Andy Clark: Extending Minds with Generative AI
- 2025-12: Google: Distributional AGI Safety
- 2026-02: Intelligent AI Delegation
For Science
- 2024-06: Towards a Science Exocortex
- 2025-12: Probing Scientific General Intelligence of LLMs with Scientist-Aligned Workflows