DIGITAL LIFE
What is artificial general intelligence and how does it differ from other types of AI?
Turns out, training artificial intelligence systems is not unlike raising a child. That's why some AI researchers have begun mimicking the way children naturally acquire knowledge and learn about the world around them—through exploration, curiosity, gradual learning, and positive reinforcement.
"A lot of problems with AI algorithms today could be addressed by taking ideas from neuroscience and child development," says Christopher Kanan, an associate professor in the Department of Computer Science at the University of Rochester, and an expert in artificial intelligence, continual learning, vision, and brain-inspired algorithms.
Of course, learning and being able to reason like a human—just faster and possibly better—opens up questions about how best to keep humans safe from ever-advancing AI systems. That's why Kanan says all AI systems need to have guardrails built in, but adding them only at the very end of development is too late. "It shouldn't be the last step, otherwise we can unleash a monster."
What is artificial general intelligence and how does it differ from other types of AI?

AI involves creating computer systems that can perform tasks that typically require human intelligence, such as perception, reasoning, decision-making, and problem-solving. Traditionally, much of AI research has focused on building systems designed for specific tasks—so-called artificial narrow intelligence (ANI). Examples include systems for image recognition, voice assistants, or playing strategic games, all of which can perform their tasks exceptionally well, often surpassing humans.
Then there is artificial general intelligence (AGI), which aims to build systems capable of understanding, reasoning, and learning across a wide range of tasks, much like humans do. Achieving AGI remains a major goal in AI research but has not yet been accomplished. Beyond AGI lies artificial superintelligence (ASI)—a form of AI vastly exceeding human intelligence in virtually every domain, which remains speculative and is currently confined to science fiction.
In my lab, we're particularly interested in moving closer to artificial general intelligence by drawing inspiration from neuroscience and child development, enabling AI systems to learn and adapt continually, much like human children do.
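To give a flavor of what "learning continually" can mean in code, here is a minimal sketch of experience replay, one common continual-learning technique for reducing forgetting; it is an illustrative assumption on my part, not a description of the methods Kanan's lab actually uses.

    import random

    class ReplayBuffer:
        """Keep a small memory of past training examples and mix them into new
        batches, so a model learning a new task does not forget earlier ones."""

        def __init__(self, capacity=1000):
            self.capacity = capacity
            self.memory = []

        def add(self, example):
            # Once full, overwrite a randomly chosen old example.
            if len(self.memory) >= self.capacity:
                self.memory.pop(random.randrange(len(self.memory)))
            self.memory.append(example)

        def sample(self, k):
            # Draw stored examples to train alongside the current task's data.
            return random.sample(self.memory, min(k, len(self.memory)))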
What are some of the ways that AI can 'learn'?

ANI's success is largely due to deep learning, which since about 2014 has been used to train these systems on large amounts of data annotated by humans. Deep learning involves training large artificial neural networks composed of many interconnected layers. Today, deep learning underpins most modern AI applications, from computer vision and natural language processing to robotics and biomedical research. These systems excel at tasks like image recognition, language translation, playing complex games such as Go and chess, and generating text, images, and even code.
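As a rough, illustrative sketch of this kind of supervised deep learning (written in PyTorch, with made-up data and layer sizes rather than anything from a real system), the loop below repeatedly adjusts a stack of layers so its predictions match human-provided labels:

    import torch
    import torch.nn as nn

    # Toy annotated dataset: 100 examples with 20 features each, 3 possible labels.
    x = torch.randn(100, 20)
    y = torch.randint(0, 3, (100,))

    # A small "deep" network: several interconnected layers stacked together.
    model = nn.Sequential(
        nn.Linear(20, 64), nn.ReLU(),
        nn.Linear(64, 64), nn.ReLU(),
        nn.Linear(64, 3),
    )

    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.CrossEntropyLoss()

    # Training: nudge the weights so the network's outputs match the human labels.
    for epoch in range(50):
        optimizer.zero_grad()
        loss = loss_fn(model(x), y)
        loss.backward()
        optimizer.step()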
A large language model (LLM) like OpenAI's GPT-4 is trained on enormous amounts of text using self-supervised learning. This means the model learns by predicting the next word or phrase from existing text, without explicit human guidance or labels. These models are typically trained on trillions of words—essentially the entirety of human writing available online, including books, articles, and websites. To put this in perspective, if a human attempted to read all this text, it would take tens of thousands of lifetimes.
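Here is a hedged sketch of that next-word-prediction objective, scaled down to a toy vocabulary and a single tiny Transformer layer; real LLMs differ in almost every detail of size and architecture:

    import torch
    import torch.nn as nn

    vocab_size = 1000
    tokens = torch.randint(0, vocab_size, (1, 128))  # one toy sequence of token ids

    # Inputs are the text; targets are the same text shifted by one position,
    # so the model is always asked to guess the *next* token. No human labels needed.
    inputs, targets = tokens[:, :-1], tokens[:, 1:]

    embed = nn.Embedding(vocab_size, 64)
    encoder = nn.TransformerEncoderLayer(d_model=64, nhead=4, batch_first=True)
    head = nn.Linear(64, vocab_size)

    # A causal mask stops each position from peeking at the words it must predict.
    seq_len = inputs.size(1)
    causal_mask = nn.Transformer.generate_square_subsequent_mask(seq_len)

    logits = head(encoder(embed(inputs), src_mask=causal_mask))
    loss = nn.CrossEntropyLoss()(logits.reshape(-1, vocab_size), targets.reshape(-1))
    loss.backward()  # in real training, this step runs over trillions of tokens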
Following this extensive initial training, the model undergoes supervised fine-tuning, where humans provide examples of preferred outputs, guiding the model toward generating responses that align closely with human preferences. Lastly, techniques such as reinforcement learning from human feedback (RLHF) are applied to shape the model's behavior by defining acceptable boundaries for what it can or cannot generate.
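One way to picture how those human preferences enter the pipeline is the reward-model step often used in RLHF: a separate scoring network is trained so that the answer humans preferred scores higher than the one they rejected. The sketch below assumes responses have already been turned into feature vectors; it is a simplified illustration, not how GPT-4 was actually trained.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    # Placeholder features for 8 response pairs that human raters have compared.
    chosen = torch.randn(8, 512)    # encodings of the preferred responses
    rejected = torch.randn(8, 512)  # encodings of the rejected responses

    # The reward model maps a response to a single "how acceptable is this" score.
    reward_model = nn.Sequential(nn.Linear(512, 128), nn.ReLU(), nn.Linear(128, 1))

    # Pairwise preference loss: push the preferred score above the rejected score.
    loss = -F.logsigmoid(reward_model(chosen) - reward_model(rejected)).mean()
    loss.backward()

    # The trained reward model is then used with reinforcement learning to steer
    # the language model toward outputs that stay within acceptable boundaries.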
What are AIs really good at?

They are excellent at tasks involving human languages, including translation, essay writing, text editing, providing feedback, and acting as personalized writing tutors.
They can pass standardized tests. For example, OpenAI's GPT-4 achieves top-tier scores on challenging exams such as the Bar Exam (90th percentile), LSAT (88th percentile), GRE Quantitative (80th percentile), GRE Verbal (99th percentile), USMLE, and several Advanced Placement tests. They even excel on Ph.D.-level math exams. Surprisingly, some studies have found that they score higher than humans on tests of emotional intelligence.
Beyond tests, LLMs can serve as co-scientists, assisting researchers in generating novel hypotheses, drafting research proposals, and synthesizing complex scientific literature. They're increasingly being incorporated into multimodal systems designed for vision-language tasks, robotics, and real-world action planning.