Generative AI Doesn't Have a Coherent Understanding of the World, MIT Researchers Find

Long-time Slashdot reader Geoffrey.landis writes: Despite its impressive output, a recent study from MIT suggests generative AI doesn't have a coherent understanding of the world. While the best-performing large language models have surprising capabilities that make it seem as if they are implicitly learning general truths about the world, that isn't necessarily the case. The paper showed that large language models and game-playing AI do implicitly model the world, but those models are flawed and incomplete. In one example, a popular type of generative AI model accurately provided turn-by-turn driving directions in New York City without having formed an accurate internal map of the city. Though the model could navigate effectively under normal conditions, its performance plummeted when the researchers closed some streets and added detours. Digging deeper, the researchers found that the New York maps the model implicitly generated were full of nonexistent streets curving between the grid and connecting faraway intersections.
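The failure mode is easy to reproduce in miniature. The sketch below is illustrative only, not the MIT team's code: it contrasts a hypothetical route-memorizing "model" with a planner that holds an actual map of a toy street grid. Both give correct directions on the intact grid, but only the map-holding planner survives a closed street. All names here (grid_edges, bfs_route, and so on) are invented for the example.

```python
from collections import deque

def grid_edges(n=4):
    """Street segments of an n-by-n Manhattan-style grid of intersections."""
    edges = set()
    for r in range(n):
        for c in range(n):
            if r + 1 < n:
                edges.add(((r, c), (r + 1, c)))
            if c + 1 < n:
                edges.add(((r, c), (r, c + 1)))
    return edges

def neighbors(node, edges):
    for a, b in edges:
        if a == node:
            yield b
        elif b == node:
            yield a

def bfs_route(start, goal, edges):
    """Planner with access to the real map: plays the role of a coherent
    world model that can re-plan when the map changes."""
    queue, seen = deque([[start]]), {start}
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            return path
        for nxt in neighbors(path[-1], edges):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None

def route_is_valid(route, edges):
    """Every hop must use a street that exists on the *current* map."""
    return route is not None and all(
        (a, b) in edges or (b, a) in edges
        for a, b in zip(route, route[1:])
    )

# Hypothetical stand-in for the language model: it reproduces directions
# learned on the intact map and has no internal map to re-plan from.
full_map = grid_edges()
memorized_route = bfs_route((0, 0), (3, 3), full_map)

# "Close some streets": remove the first segment the memorized route uses.
a, b = memorized_route[0], memorized_route[1]
detour_map = full_map - {(a, b), (b, a)}

print("memorized route still valid after detour?",
      route_is_valid(memorized_route, detour_map))  # False
print("planner with a real map finds a detour?",
      route_is_valid(bfs_route((0, 0), (3, 3), detour_map), detour_map))  # True
```

Under this toy setup, accuracy on the unperturbed task says nothing about whether a coherent map exists; only the perturbation (the closed street) separates the two strategies, which is the same diagnostic logic the researchers applied.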
