NCSC Calms Fears Over ChatGPT Threat

A leading UK security agency has said there is a low risk of ChatGPT and similar tools democratizing cybercrime for the masses, although it warned that they could be useful to attackers with “high technical capabilities.”

National Cyber Security Centre (NCSC) tech director for platforms research, David C, and tech director for data science research, Paul J, acknowledged fears over the security implications of large language models (LLMs) like ChatGPT.

Some security experts have suggested that the tool could lower the barrier to entry for less technically capable threat actors by providing information on how to design ransomware and other threats.

Read more on ChatGPT threats: Experts Warn ChatGPT Could Democratize Cybercrime.

However, the NCSC argued that LLMs are likely to be more useful for saving hacking experts time than teaching novices how to carry out sophisticated attacks.

“There is a risk that criminals might use LLMs to help with cyber-attacks beyond their current capabilities, in particular once an attacker has accessed a network. For example, if an attacker is struggling to escalate privileges or find data, they might ask an LLM and receive an answer that’s not unlike a search engine result, but with more context,” the agency claimed.

“Current LLMs provide convincing-sounding answers that may only be partially correct, particularly as the topic gets more niche. These answers might help criminals with attacks they couldn’t otherwise execute, or they might suggest actions that hasten the detection of the criminal.”

LLMs could also be deployed to help technically proficient threat actors with poor linguistic skills to craft more convincing phishing emails in multiple languages, it warned.

However, the NCSC added that there is currently “a low risk of a lesser skilled attacker writing highly capable malware.”

The agency also warned of potential privacy issues: queries from corporate users are stored and may be viewable by the LLM provider or its partners.

“A question might be sensitive because of data included in the query, or because [of] who is asking the question (and when),” it said.

“Examples of the latter might be if a CEO is discovered to have asked ‘how best to lay off an employee?,’ or somebody asking revealing health or relationship questions. Also bear in mind aggregation of information across multiple queries using the same login.”

Queries stored online, including potentially sensitive personal information, might be hacked or accidentally leaked, the NCSC added.

As a result, terms of use and privacy policies need to be “thoroughly understood” before using LLMs, it argued.
