1. Introduction
2. What is an Expert?
3. Can AI become an Expert?
4. AlphaGo's Expertise
5. Conclusion

Abstract
With the rapid development of artificial intelligence (AI), understanding its capabilities and limitations has become crucial for mitigating both unfounded anxiety and unwarranted optimism. As part of this endeavor, this study delves into the following question: Can AI become an expert? More precisely, should society confer the authority of experts on AI even when its decision-making process is highly opaque? Throughout the investigation, I aim to identify certain normative challenges in elevating current AI to a level comparable to that of human experts. First, I will narrow the scope by proposing a definition of an expert. Along the way, three normative components of expertise—trust, explainability, and responsibility—will be presented. Subsequently, I will argue that AI cannot become a trustee, successfully transmit knowledge, or take responsibility. Specifically, the arguments focus on how these factors regulate expert judgments, which are made not in isolation but within complex social connections and spontaneous dialogue. Finally, I will defend the plausibility of the presented criteria in response to a potential objection: the claim that some machine learning-based algorithms, such as AlphaGo, have already been recognized as experts.