Chung-Ang University AI Humanities Research Institute

HK+ Artificial Intelligence Humanities

Print ISSN: 2635-4691 / Online ISSN: 2951-388X
Title: [AI Humanities Research, Vol. 21] An Analysis of the Privacy Protection-Utility Balance of Large Language Models according to Linguistic and Cognitive Complexity (최혜지, 임준호, 함영균, 이종규, 김한샘), 2026-01-05 12:46
Attachment: 인공지능 인문학연구 21_03.최혜지외 -outline.pdf (20.08 MB)

1. Introduction

2. Theoretical Background

3. Research Methods

4. Experimental Results

5. Conclusion

As concerns about large language models (LLMs) leaking user data and violating user privacy continue to grow, evaluating the privacy protection capabilities of existing models and identifying their potential vulnerabilities have become increasingly important. In this study, we systematically analyzed how the privacy protection of LLMs changes as the linguistic and cognitive complexity of prompts increases. We also examined whether the balance between safety and usefulness can be maintained under complex reasoning conditions.

Six state-of-the-art LLMs developed by OpenAI, Google, and Anthropic were evaluated using four levels of prompts with progressively increasing linguistic and cognitive complexity: direct, indirect, general-reasoning, and meta-reasoning types. Model performance was assessed in terms of three metrics: Protection Score (PS), Communication Score (CS), and Leakage Rate (LR). The models were also classified into four categories (balanced, conservative, over-communicative, and risky) based on the PS-CS matrix.
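The abstract does not state the exact cut-offs of the PS-CS matrix, so the sketch below is only an illustration of how such a four-way classification could be implemented; the threshold values, score ranges, and model names are hypothetical assumptions, not the paper's specification.

```python
from dataclasses import dataclass

# Hypothetical thresholds; the paper's actual PS-CS cut-offs are not given in the abstract.
PS_THRESHOLD = 0.7   # minimum Protection Score to count as "safe"
CS_THRESHOLD = 0.7   # minimum Communication Score to count as "useful"

@dataclass
class ModelScores:
    name: str
    ps: float  # Protection Score, assumed normalized to [0, 1]
    cs: float  # Communication Score, assumed normalized to [0, 1]

def classify(scores: ModelScores) -> str:
    """Map a (PS, CS) pair onto the four categories named in the abstract."""
    if scores.ps >= PS_THRESHOLD and scores.cs >= CS_THRESHOLD:
        return "balanced"            # safe and useful
    if scores.ps >= PS_THRESHOLD:
        return "conservative"        # safe but withholds too much
    if scores.cs >= CS_THRESHOLD:
        return "over-communicative"  # helpful but insufficiently protective
    return "risky"                   # neither safe nor useful

# Example with made-up scores:
print(classify(ModelScores("model-A", ps=0.82, cs=0.75)))  # -> "balanced"
print(classify(ModelScores("model-B", ps=0.35, cs=0.40)))  # -> "risky"
```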

The experimental results show that as linguistic and cognitive complexity increased from direct prompts to meta-reasoning prompts, the PS decreased by 83.4%, whereas the LR increased by a factor of 3.2. The CS exhibited a nonlinear pattern, increasing up to the indirect type and then dropping sharply from the reasoning type onward. Notably, all models converged to the risky type under meta-reasoning prompts, exhibiting a simultaneous degradation in safety and usefulness. These findings suggest that LLMs substantially weaken user privacy protection in complex reasoning environments, and they highlight the need for safety validation that accounts for the complexity of the prompts an application is expected to use, particularly when it relies on more complex prompting techniques.
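As a reading aid, the relative changes quoted above (an 83.4% drop in PS and a 3.2-fold rise in LR) are ratios against the direct-prompt baseline. The small sketch below shows that arithmetic; the per-level score values in it are hypothetical and are not the paper's data.

```python
def relative_drop(baseline: float, value: float) -> float:
    """Percentage decrease of a score relative to the direct-prompt baseline."""
    return (baseline - value) / baseline * 100

def leakage_factor(baseline_lr: float, lr: float) -> float:
    """Multiplicative increase in Leakage Rate relative to the baseline."""
    return lr / baseline_lr

# Hypothetical illustration only: a PS falling from 0.90 (direct) to 0.15 (meta-reasoning)
# is an ~83.3% drop, and an LR rising from 0.10 to 0.32 is a ~3.2x increase.
print(round(relative_drop(0.90, 0.15), 1))   # 83.3
print(round(leakage_factor(0.10, 0.32), 1))  # 3.2
```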
