We are living in an age where we have access to almost anything online. There is little to stop anyone from buying clothes, investing in stocks, or liking a friend's picture. Yet such freedom always carries a cost, one that someone, somewhere, is willing to exploit.
Keeping safe online is taught to us from the moment we start using the internet, so that we avoid malware and viruses; above all, what we want to avoid are cyber-attacks. These online assaults have existed since the birth of the internet, but as our technological capability grows, so does the risk they pose. In 2014, for example, Yahoo! was hit with a cyberattack that affected 500 million user accounts, and some 200 million usernames were offered for sale, making it, at the time, the largest known breach of a single company. The incident cut $350 million from the $4.83 billion Verizon had originally agreed to pay for Yahoo!, reducing the final sale price to roughly $4.48 billion.
Yet where does AI fit amidst all of this? Every light has its shadow: on one side, AI is at the forefront of defence, helping to protect data and personal information; on the other, cybercriminals could use AI-based algorithms to attack companies on a scale the world has never seen. A typical cyber-crime such as phishing (in which criminals send an email or message posing as a well-known company to 'phish' out personal details) could develop into a far more sophisticated attack: cybercriminals could use AI to impersonate a friend or family member of the victim to extract information. To breach a firm, hackers can also craft malware for stealthier attacks, designing it to blend in with an organisation's normal activity and then carry out attacks that are difficult to trace.
Hence, it is almost imperative for businesses to deploy cyber AI, not only to protect themselves but also their customers. The task now facing thousands of companies is to build their own AI models to detect malware. Building these models requires huge amounts of data, since a model must learn to recognise attacks before it can counter them; and because cyber-attacks keep evolving, the models must be continually retrained. Once mature, these models can detect minute behavioural changes in malware and flag it for removal from the system. A model of this kind is implemented in Gmail, which uses machine learning to block almost an extra 100 million spam messages every day. Firms may also apply AI-based models on a much larger scale, protecting the entirety of their online network rather than a single aspect of it.
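To make the idea concrete, here is a minimal sketch of the statistical approach behind text-based spam filtering like Gmail's: a toy Naive Bayes classifier that learns word frequencies per class and scores new messages. The training messages and labels below are entirely invented for illustration; production systems use vastly larger datasets and more sophisticated models.

```python
import math
from collections import Counter, defaultdict

# Toy labelled data (entirely invented for illustration).
TRAIN = [
    ("win a free prize now", "spam"),
    ("claim your free reward", "spam"),
    ("urgent prize claim now", "spam"),
    ("meeting moved to friday", "ham"),
    ("lunch on friday perhaps", "ham"),
    ("notes from the meeting", "ham"),
]

def train(examples):
    """Count word frequencies per class, plus how many messages each class has."""
    word_counts = defaultdict(Counter)
    class_counts = Counter()
    for text, label in examples:
        class_counts[label] += 1
        word_counts[label].update(text.split())
    return word_counts, class_counts

def classify(text, word_counts, class_counts):
    """Return the class with the highest log-probability (Laplace smoothing)."""
    vocab = {w for counts in word_counts.values() for w in counts}
    total_msgs = sum(class_counts.values())
    best_label, best_score = None, float("-inf")
    for label, n_msgs in class_counts.items():
        score = math.log(n_msgs / total_msgs)  # class prior
        denom = sum(word_counts[label].values()) + len(vocab)
        for word in text.split():
            # Add-one smoothing so unseen words never zero out the score.
            score += math.log((word_counts[label][word] + 1) / denom)
        if score > best_score:
            best_label, best_score = label, score
    return best_label

word_counts, class_counts = train(TRAIN)
print(classify("free prize waiting", word_counts, class_counts))  # → spam
print(classify("meeting on friday", word_counts, class_counts))   # → ham
```

The same counting-and-scoring principle extends to malware detection, where the "words" become behavioural features such as system calls or network patterns, and retraining on fresh data keeps pace with evolving attacks.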
It is clear that firms require cybersecurity of the utmost calibre. Given the new developments in cyber-attacks, keeping their resources and information safe should be prioritised above many other decisions. At the end of the day, when firms are attacked by cybercriminals, it is their cybersecurity that will rescue them from dire circumstances.