Source: National Science Foundation
At Brown University, a new project suggests that teaching artificial intelligence to perceive the world more like people do may begin with something as simple as a game. The project invites participants to play an online game called Click Me, which helps AI models learn how people see and interpret images. While the game is fun and accessible, its purpose is more ambitious: to understand the root causes of AI errors and to systematically improve how AI systems represent the visual world.
Over the past decade, AI systems have become more powerful and widely used, particularly in tasks like recognizing images. For example, these systems can identify animals and objects or diagnose medical conditions from images. However, they sometimes make mistakes that humans rarely make. For instance, an AI algorithm might confidently label a photo of a dog wearing sunglasses as a completely different animal, or fail to recognize a stop sign that is partially covered by graffiti. As these models become larger and more complex, these kinds of errors become more frequent, revealing a growing gap between how AI and humans perceive the world.
Recognizing this challenge, researchers funded in part by the U.S. National Science Foundation propose to combine insights from psychology and neuroscience with machine learning to create the next generation of human-aligned AI. Their goal is to understand how people process visual information.