Detecting Phishing Sites by Using ChatGPT

The rapid advancement of artificial intelligence (AI) has led to the rise of large language models (LLMs) such as ChatGPT, which have significantly impacted natural language processing and many other domains. While these models have been extensively researched for tasks such as code generation and text synthesis, their application in cybersecurity, particularly in detecting malicious web content like phishing sites, has been largely unexplored. A recent study takes a step toward filling this gap by proposing a novel method for detecting phishing sites using ChatGPT.

Ensar Seker
3 min read · Jul 15, 2023

Phishing sites pose a severe threat to internet users. They masquerade as legitimate platforms, employing social engineering techniques to trick users into revealing sensitive information, often causing financial harm. The growing threat of automated cyberattacks facilitated by LLMs makes automated detection of malicious web content a necessity. This is where the researchers saw an opportunity to leverage the power of LLMs to analyze and classify phishing sites.

The proposed method uses a web crawler to gather information from websites and generates prompts based on the collected data. These prompts are then presented to ChatGPT, which determines whether a given website is a phishing site. The combination of web crawling and ChatGPT's contextual understanding enables informed judgments about a website's legitimacy or suspiciousness. By employing ChatGPT, the researchers were able to detect a variety of phishing sites without fine-tuning machine learning models, and to identify social engineering techniques in the context of entire websites and their URLs.
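The crawl-then-prompt pipeline described above can be sketched roughly as follows. This is a minimal illustration, not the paper's implementation: the feature names (`url`, `title`, `visible_text`, `anchor_domains`) and the prompt wording are assumptions, and the final step of sending the prompt to ChatGPT is only noted in a comment.

```python
def build_phishing_prompt(url: str, title: str, visible_text: str,
                          anchor_domains: list[str]) -> str:
    """Assemble a classification prompt from features a crawler might collect.

    All field names here are hypothetical stand-ins for whatever the
    crawler actually extracts from a page.
    """
    return (
        "You are a security analyst. Decide whether the website below is a "
        "phishing site. Answer 'phishing' or 'legitimate' and briefly note "
        "any social engineering techniques you observe.\n\n"
        f"URL: {url}\n"
        f"Page title: {title}\n"
        # Outbound link domains often reveal mismatches with the claimed brand.
        f"Domains linked from the page: {', '.join(sorted(set(anchor_domains)))}\n"
        # Truncate page text so the prompt stays within the model's context window.
        f"Visible text (truncated): {visible_text[:2000]}\n"
    )

prompt = build_phishing_prompt(
    url="https://paypa1-secure-login.example",  # note the digit '1' typosquat
    title="PayPal - Log In",
    visible_text="Your account has been limited. Verify your identity now...",
    anchor_domains=["paypa1-secure-login.example", "cdn.example"],
)
# The prompt would then be sent to ChatGPT (e.g. via the chat API) and the
# model's verdict parsed from its reply.
```

The key design point is that the LLM receives the page as curated text features rather than raw HTML, letting it reason about cues like typosquatted domains and urgency-laden copy in one pass.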

To evaluate the performance of the proposed method, the researchers conducted experiments using a carefully curated dataset for phishing site detection. The experimental results using GPT-4 demonstrated promising performance, with a precision of 98.3% and a recall of 98.4%. Moreover, a comparative analysis between GPT-3.5 and GPT-4 revealed a significant improvement in the latter’s capabilities, particularly in terms of reducing false negatives. GPT-4 outperformed GPT-3.5 in its ability to determine the suspiciousness of domain names, identify social engineering techniques from the website content, and provide comprehensive phishing detection by considering multiple factors.
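As a reminder of what these figures mean, precision is the fraction of sites flagged as phishing that really are phishing, and recall is the fraction of actual phishing sites that get flagged. The counts below are made up purely to illustrate the arithmetic; the paper's actual confusion matrix is not given here.

```python
def precision_recall(tp: int, fp: int, fn: int) -> tuple[float, float]:
    """Compute precision and recall from confusion-matrix counts.

    tp: phishing sites correctly flagged
    fp: legitimate sites wrongly flagged as phishing
    fn: phishing sites missed by the detector
    """
    precision = tp / (tp + fp)  # of everything flagged, how much was right
    recall = tp / (tp + fn)     # of all real phishing, how much was caught
    return precision, recall

# Illustrative counts only, chosen to land near the reported ~98% figures.
p, r = precision_recall(tp=59, fp=1, fn=1)
print(f"precision={p:.1%}, recall={r:.1%}")  # prints "precision=98.3%, recall=98.3%"
```

Reducing false negatives, where GPT-4 showed its largest gain over GPT-3.5, corresponds to shrinking `fn` and thus raising recall.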

These findings emphasize the potential of LLMs in efficiently detecting phishing sites, particularly in uncovering social engineering techniques aimed at psychologically manipulating users. The results of this study have significant implications for enhancing automated cybersecurity measures and mitigating the risks of online fraudulent activities faced by users.

In summary, the researchers have made significant contributions to the field of cybersecurity. They have proposed a novel ChatGPT-based method for detecting phishing sites, demonstrated the effectiveness of LLMs at this task, and documented notable improvements in GPT-4, particularly in minimizing false negatives. This research not only offers a promising solution to phishing detection but also opens up new avenues for leveraging the power of LLMs in the broader field of cybersecurity.

The full paper provides a more detailed explanation of the research and can be accessed here. The researchers’ innovative approach to phishing detection using LLMs is a testament to the potential of AI in enhancing cybersecurity measures and protecting users from fraudulent online activities. As we continue to navigate the digital age, advancements in AI and cybersecurity will undoubtedly play a crucial role in safeguarding our online spaces.
