artificial_general_intelligence

Strong AI / artificial general intelligence

Claims that AGI has been achieved: speculation around OpenAI's reported Q* project

https://www.youtube.com/watch?v=qVtoQAO4lLU

Snippet from Wikipedia: Artificial general intelligence

Artificial general intelligence (AGI)—sometimes called human‑level intelligence AI—is a type of artificial intelligence that would match or surpass human capabilities across virtually all cognitive tasks.

Some researchers argue that state‑of‑the‑art large language models already exhibit early signs of AGI‑level capability, while others maintain that genuine AGI has not yet been achieved. AGI is conceptually distinct from artificial superintelligence (ASI), which would outperform the best human abilities across every domain by a wide margin. AGI is considered one of the definitions of strong AI.

Unlike artificial narrow intelligence (ANI), whose competence is confined to well‑defined tasks, an AGI system can generalise knowledge, transfer skills between domains, and solve novel problems without task‑specific reprogramming. The concept does not, in principle, require the system to be an autonomous agent; a static model—such as a highly capable large language model—or an embodied robot could both satisfy the definition so long as human‑level breadth and proficiency are achieved.

Creating AGI is a primary goal of AI research and of companies such as OpenAI, Google, and Meta. A 2020 survey identified 72 active AGI research and development projects across 37 countries.

The timeline for achieving human‑level intelligence AI remains deeply contested. Recent surveys of AI researchers give median forecasts ranging from the early 2030s to mid‑century, while still recording significant numbers who expect arrival much sooner, or never at all. There is debate over the exact definition of AGI and over whether modern large language models (LLMs) such as GPT-4 are early forms of AGI. AGI is a common topic in science fiction and futures studies.

Contention exists over whether AGI represents an existential risk. Many AI experts have stated that mitigating the risk of human extinction posed by AGI should be a global priority. Others consider AGI development too remote to pose such a risk.
