Can AI Understand Right and Wrong?
Tue Mar 18 2025
Much of the optimism about Large Language Models (LLMs) rests on the idea that truth is systematic: true statements cohere and connect with one another. These models learn about the world from vast amounts of data, and if truth really is organized in this way, they can exploit those connections to fill in missing pieces and correct errors in their training data, building toward a complete picture of the world.
But here is the twist. Many philosophers hold a different view: in domains like ethics and values, truth is not organized in the same way. That poses a problem for LLMs, which learn by relying on patterns and connections. Where the truth lacks that structure, their usual methods give them little to work with, so they may struggle with tasks that involve deciding what is right and wrong.
So what does this mean for us? When it comes to important decisions, we cannot simply defer to AI. AI can help, but it cannot replace human judgment, especially in areas where the truth is not clear-cut. This matters because it shows that, for all their power, LLMs have limits: they cannot do everything for us, and we still need to be involved when deciding what is right and wrong.
In short, AI is excellent at finding patterns and connections. But in areas like ethics, where the truth is not organized that way, those patterns are missing, and learning without them is like trying to solve a puzzle without all the pieces. AI can assist us, but it cannot do our moral thinking for us.
https://localnews.ai/article/can-ai-understand-right-and-wrong-3f38c88d
questions
If truth is so systematic, why do people still argue about whether pineapple belongs on pizza?
How can the consistency and coherence of true statements be empirically verified across all domains?
How do we define 'systematicity' in the context of truth, and is it a universally applicable concept?