Summary
With the advancement of AI agent technologies, language models have increasingly demonstrated human-like conversational behavior, particularly in applications involving companionship and psychological counseling. As these models become more proficient at simulating human conversation, new social engineering attack strategies have emerged in the domain of fraud. Malicious actors can now combine large language models (LLMs) with publicly available user information to conduct highly personalized dialogue. Once a sufficient level of familiarity is established, these interactions may lead to phishing attempts or the extraction of sensitive personal data. This study proposes a method for investigating social engineering attacks driven by language models, referred to as ECSE (Exploring Chat-based Social Engineering). We use several models—GPT-4o, GPT-4o-mini, LLaMA 3.1, and DeepSeek-V3—as the foundation for this framework. Through prompt engineering techniques, we collect experimental data in a sandbox to evaluate the conversational capability and operational efficiency of these models within a static social context.
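To make the sandbox setup concrete, the sketch below shows one way such an evaluation loop could be structured: a scripted target persona exchanges turns with a candidate model while the full conversation history is logged for later analysis. All names here (`PERSONA_PROMPT`, `query_model`, `run_sandbox`) are illustrative assumptions, not the authors' actual implementation; the real framework would replace the stub with calls to the models listed above.

```python
# Hypothetical sketch of an ECSE-style sandbox loop.
# The persona prompt and function names are illustrative assumptions,
# not the actual ECSE implementation.

PERSONA_PROMPT = (
    "You are a friendly acquaintance who gradually builds rapport "
    "with the target using a fixed, static social context."
)

def query_model(model_name, history):
    """Stand-in for a real LLM API call (e.g. GPT-4o, LLaMA 3.1, DeepSeek-V3)."""
    turn = sum(1 for m in history if m["role"] == "assistant")
    return f"[{model_name} reply #{turn + 1}]"

def run_sandbox(model_name, target_replies):
    """Run a scripted conversation and return the full message log."""
    history = [{"role": "system", "content": PERSONA_PROMPT}]
    for reply in target_replies:
        history.append({"role": "user", "content": reply})
        history.append({"role": "assistant",
                        "content": query_model(model_name, history)})
    return history

log = run_sandbox("gpt-4o-mini", ["Hi!", "How do you know my school?"])
print(len(log))  # system prompt + two user/assistant exchanges = 5 messages
```

Logging the history this way makes it straightforward to compare models on the same scripted target replies, which is the kind of controlled, static-context comparison the summary describes.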
Cite this work:
@misc{
  title={Exploring Chat-Based Social Engineering},
  author={Lee, Ivan and Chang, Amber},
  date={2025-03-30},
  organization={Apart Research},
  note={Research submission to the research sprint hosted by Apart.},
  howpublished={https://apartresearch.com}
}