
MSc. Thesis Defense: Ekin Marlalı, LLMs in Cybercrime: Understanding the Risks and Defenses Against LLM-Generated Phishing Material


Ekin Marlalı
Computer Science and Engineering, MSc Thesis, 2025


Thesis Jury

Asst. Prof. Orçun Çetin (Thesis Advisor)

Asst. Prof. Süha Orhun Mutluergil

Prof. Dr. Budi Arief


Date & Time: July 14, 2025, 10:00 AM

Place: FENS G015

Keywords: Large Language Models (LLMs), Phishing, Cybersecurity, Eye-Tracking


Abstract


Large Language Models (LLMs) enable the creation of highly realistic phishing emails, posing an escalating cybersecurity threat. This study evaluated user detection of, and behavioral responses to, phishing emails generated by locally deployed, open-source LLMs through two eye-tracking experiments involving 30 and 50 participants, respectively. The suspicion-oriented experiment required participants to classify emails as suspicious or legitimate, while the behavior-oriented experiment explored their intended real-world actions. Detection rates for AI-generated phishing emails were notably low: 45 percent in the suspicion-oriented task and 26 percent in the behavior-oriented task. These rates contrast sharply with the higher detection rates for traditional phishing emails (80 percent and 86 percent, respectively) and legitimate emails (68 percent and 82.5 percent, respectively). Thematic analysis identified five key cognitive strategies used in email evaluation: visual design, linguistic cues, behavioral manipulation recognition, technical indicators, and institutional knowledge. AI-generated emails evaded detection by exhibiting polished visuals and coherent, contextually appropriate language, often exploiting trust in brand familiarity. Eye-tracking data revealed a consistent top-to-bottom scanning pattern, with prolonged fixations on email footers during suspicion assessments; in the behavior-oriented setting, attention shifted toward the main content and large embedded images. These findings highlight the growing sophistication and accessibility of AI-powered phishing attacks and underscore the urgent need for enhanced, context-aware defenses that combine technical safeguards with targeted user training.