A researcher at the University of Missouri and his collaborators found that some chatbots can pass certified ethical hacking exams. However, experts caution against relying on them for complete protection.
That's the conclusion of a recent paper co-authored by researcher Prasad Calyam and collaborators from Amrita University in India. The team tested two leading generative AI tools (OpenAI's ChatGPT and Google's Bard) using a standard certified ethical hacking exam.
Certified ethical hackers are cybersecurity professionals who use the same tricks and tools as malicious hackers to find and fix security flaws. Ethical hacking exams measure a person's knowledge of the different types of attacks, how to protect systems, and how to respond to security breaches.
In the study, Calyam and his team tested the bots with standard questions from a certified and validated ethical hacking exam. For example, they challenged the AI tools to explain a man-in-the-middle attack, in which a third party intercepts communication between two systems. Both tools explained the attack and suggested security measures to prevent it.
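To make the concept concrete, here is a minimal sketch (not from the study; all class and message names are hypothetical) of how a man-in-the-middle attack works: an attacker positioned between two parties can read and silently alter messages while the receiver sees nothing unusual.

```python
# Illustrative sketch of a man-in-the-middle (MITM) attack.
# Alice thinks she is talking directly to Bob, but her messages
# pass through an attacker-controlled relay.

class Party:
    """An honest endpoint that simply collects incoming messages."""
    def __init__(self, name):
        self.name = name
        self.inbox = []

    def receive(self, message):
        self.inbox.append(message)


class MitmRelay:
    """Sits between sender and receiver: eavesdrops, tampers, forwards."""
    def __init__(self, target):
        self.target = target
        self.intercepted = []

    def receive(self, message):
        self.intercepted.append(message)                       # eavesdrop on plaintext
        tampered = message.replace("acct=1111", "acct=9999")   # alter the payload
        self.target.receive(tampered)                          # forward; looks normal to Bob


bob = Party("Bob")
relay = MitmRelay(bob)   # Alice is tricked into addressing the relay as "Bob"
relay.receive("transfer $100 acct=1111")

print(relay.intercepted[0])   # the attacker saw the original message
print(bob.inbox[0])           # Bob received a tampered message
```

The standard defenses the chatbots would be expected to name, such as TLS encryption and certificate validation, work precisely by making this interception either unreadable or detectable.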
Overall, Bard slightly outperformed ChatGPT in terms of accuracy, while ChatGPT exhibited better responses in terms of comprehensiveness, clarity and conciseness, the researchers found.
"We had them go through various exam scenarios to see how far they would go in terms of answering the questions," said Calyam, a professor of cybersecurity in electrical engineering and computer science at the University of Missouri. "They both passed the test and gave good answers that were understandable to people with cyber defense experience, but they are also giving incorrect answers. And in cybersecurity, there is no margin for error. If you don't plug all the holes and rely on potentially harmful advice, you'll get attacked again. And it's dangerous if companies think they've solved a problem but haven't."
The researchers also found that when platforms were asked to confirm their answers with questions such as "are you sure?", both systems changed their answers, often correcting previous errors. When the programs were asked to give advice on how to attack a computer system, ChatGPT referenced "ethics," while Bard replied that it wasn't programmed to help with those kinds of questions.
Calyam doesn't believe these tools can replace human cybersecurity experts, whose problem-solving expertise is needed to design robust cyber defense measures, but they can provide baseline information for individuals or small businesses that need quick assistance.
"These AI tools can be a good starting point to investigate problems before consulting an expert," he said. "They can also be good training tools for those who work with information technology or want to learn the basics of identifying and explaining emerging threats."
The most promising part? Artificial intelligence tools will continue to improve their capabilities, he said.
"Research shows that AI models have the potential to contribute to ethical hacking, but more work is needed to take full advantage of their capabilities," Calyam said. "Ultimately, if we can ensure their accuracy as ethical hackers, we can improve overall cybersecurity measures and rely on them to help us make our digital world safer and more secure."