WatchGuard shares an analysis of how hackers use technology in various phishing and social engineering strategies.
The advances being achieved in Artificial Intelligence (AI) are increasingly impressive.
By adopting it, companies can add value to their services, automate complex tasks, and continually improve their interactions with customers.
However, AI is a tool, and the outcome of applying it depends on how it is used.
Malicious use of AI gives rise to new threats, ranging from relatively harmless acts to criminal activity, such as systems that impersonate humans to bypass security mechanisms or fake chatbots that ask users to enter sensitive information.
Threats from AI-driven chatbots are among the information security industry predictions that the WatchGuard research team developed based on its analysis of security and threat trends during 2018.
"Cyber criminals continue to modify the threat landscape as they update their tactics and intensify their attacks against businesses, governments, and even Internet infrastructure," said Corey Nachreiner, Chief Technology Officer, WatchGuard Technologies.
"In this scenario, SMEs continue to be the target of cybercriminals, so they must begin to review their current security measures and make the security of their networks a high priority, seeking to implement solutions through managed services companies," added the executive.
Black hat hackers carry out these attacks through malicious chatbots placed on legitimate sites.
"The goal is to direct victims to access the malicious link and thereby download files containing malware or share private information, such as passwords, emails, credit card numbers or bank access codes," Nachreiner explains.
Through virtual assistants or chatbots, hackers find new attack vectors.
A hacked chatbot could divert victims to malicious rather than legitimate links. Attackers could also take advantage of web application flaws on legitimate sites to insert a malicious chatbot.
"For example, a hacker might force the appearance of a fake chatbot while the victim is on a banking website, asking if they need help finding something. The chatbot could then recommend that the victim click on malicious links to fake banking resources instead of linking to the real ones. Those links could allow the attacker to do anything from installing malware to virtually hijacking the bank's site connection," Nachreiner explains.
To help detect malicious chatbots, the Chief Technology Officer advises always verifying that the communication is encrypted, and regulating how the data from those chat sessions is managed and stored.
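The encryption check described above can be sketched in code. This is a minimal illustration, not part of any WatchGuard product: `chat_endpoint_is_encrypted` is a hypothetical helper that rejects any chat endpoint not served over HTTPS and confirms the server completes a TLS handshake with a valid certificate.

```python
import socket
import ssl
from urllib.parse import urlparse


def chat_endpoint_is_encrypted(url: str) -> bool:
    """Hypothetical check: does this chat endpoint use encrypted transport?"""
    parsed = urlparse(url)
    # Reject anything that is not HTTPS outright (plain HTTP, ftp, etc.).
    if parsed.scheme != "https" or not parsed.hostname:
        return False
    # Attempt a TLS handshake with full certificate and hostname verification.
    ctx = ssl.create_default_context()
    try:
        with socket.create_connection((parsed.hostname, parsed.port or 443), timeout=5) as sock:
            with ctx.wrap_socket(sock, server_hostname=parsed.hostname):
                return True
    except (ssl.SSLError, OSError):
        return False
```

A check like this only confirms the transport is encrypted; it says nothing about whether the chatbot itself is legitimate, which is why the advice above also covers how chat session data is managed and stored.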
In summary, it is vitally important that those responsible for access, systems, and computer security in organizations of all sizes implement not only the appropriate security measures, but also regular training so that employees understand how hackers operate and stay alert to any suspicious activity.