News
ChatGPT relies on manual handling to find code vulnerabilities: Huawei Algorithm Expert
ChatGPT has taken the AI world by storm, and big tech companies keep talking up its capabilities, but it turns out the software still relies heavily on manual work when it comes to finding code vulnerabilities.
Dr. Li Zhihua of the Chinese Academy of Sciences, a security AI algorithm expert at Huawei China, published an article titled “Discussing the Impact of ChatGPT on Network Security from a Defense Perspective”. The article examines ChatGPT’s use cases in the field of network security.
The use of ChatGPT in the field of network security mainly includes the following aspects:
- Threat detection: extracting signature rules and identifying malicious code
- Code audit: analyzing code for vulnerabilities
- Threat intelligence: extracting intelligence from text
- Security operations: generating security operation reports
Li gave the example of a relatively simple code audit involving SQL injection, for which ChatGPT’s answer was quite satisfactory. However, when he tried code that did not contain a SQL injection flaw, ChatGPT began to output “serious nonsense”.
Li added that the code in question actually prevents SQL injection effectively: it uses precompiled (parameterized) statements, which avoid splicing user input into SQL strings and clearly separate code from data; it then checks that the input is purely numeric and that the query returns only a single result, effectively preventing a “drag library” (bulk database dump) attack.
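The article does not reproduce the audited code, but the contrast Li describes can be sketched as follows. This is a minimal illustration in Python with sqlite3 (not the snippet ChatGPT was shown): one lookup splices input into the SQL string and is injectable, the other uses a precompiled parameterized statement so the payload is treated as data rather than code.

```python
import sqlite3

def setup_db():
    """Create an in-memory database with a single users table."""
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
    conn.execute("INSERT INTO users (id, name) VALUES (1, 'alice')")
    conn.commit()
    return conn

def lookup_vulnerable(conn, user_id):
    """Splices user input directly into the SQL string -- injectable."""
    query = f"SELECT name FROM users WHERE id = {user_id}"
    return conn.execute(query).fetchall()

def lookup_safe(conn, user_id):
    """Precompiled (parameterized) statement: code and data stay separate."""
    return conn.execute("SELECT name FROM users WHERE id = ?", (user_id,)).fetchall()

conn = setup_db()
payload = "0 OR 1=1"  # classic injection: makes the WHERE clause always true

print(lookup_vulnerable(conn, payload))  # dumps every row in the table
print(lookup_safe(conn, payload))       # no rows: payload is bound as plain data
```

The payload that dumps the whole table through the spliced query matches nothing at all through the parameterized one, which is exactly the “clear division of code and data” Li credits with stopping a drag-library attack.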
To conclude, Li Zhihua said that ChatGPT is helpful for general code auditing, but a human still has to manually verify whether its answers are correct and whether the code is actually vulnerable.
(via – mydrivers)