
How will AI change the SAP cybersecurity threat landscape?


Artificial intelligence has been a hot topic for many years, and its applications span multiple industries.

With the release of OpenAI’s GPT-3 language model, we have reached a significant milestone in the evolution of AI. This model can understand and generate human-like text with remarkable accuracy. As AI continues to advance, it has the potential to impact the SAP security threat landscape.

Artificial Intelligence’s impact on cybersecurity

The use of AI in cybersecurity is becoming increasingly popular, as it offers a range of benefits to both defenders and attackers. AI can help organizations detect and respond to threats more efficiently and accurately. By using machine learning algorithms, AI systems can analyze vast amounts of data, identify patterns and anomalies, and detect potential security incidents. This reads like a fairytale from the marketing brochures of the leading cybersecurity solution providers of the 2010s, but it is now a reality. Furthermore, today's AI-powered tools can automate repetitive manual tasks, freeing up security personnel to focus on more strategic initiatives.
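To make the detection side concrete, here is a minimal, hypothetical sketch of anomaly detection over logon events using scikit-learn's IsolationForest. The features, values, and thresholds are invented for illustration and are not tied to any real SAP log format.

```python
# Hypothetical sketch: flagging anomalous logon events with an
# unsupervised model. Features and values are illustrative only.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Synthetic feature vectors per logon event:
# [hour_of_day, failed_attempts, megabytes_downloaded]
normal_events = np.column_stack([
    rng.normal(10, 2, 500),   # logons cluster around business hours
    rng.poisson(0.2, 500),    # failed attempts are rare
    rng.normal(5, 2, 500),    # downloads are modest
])
suspicious_events = np.array([
    [3, 8, 400],   # 3 a.m. logon, many failures, large data export
    [2, 5, 250],
])

model = IsolationForest(contamination=0.01, random_state=0)
model.fit(normal_events)

# predict() returns -1 for outliers that an analyst should review
for event, label in zip(suspicious_events, model.predict(suspicious_events)):
    status = "ANOMALY" if label == -1 else "ok"
    print(f"{status}: {event}")
```

In practice, a system like this only surfaces candidates; a human analyst still decides whether an anomaly is an actual incident.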

On the other hand, AI also presents new challenges and risks in the SAP security landscape. As artificial intelligence becomes more advanced, it can be used and abused by attackers in innovative ways. For example, AI can automate phishing attacks, making them more convincing and harder to detect. It can also generate malware capable of evading traditional SAP security defenses.

Does zero trust conflict with AI in cybersecurity?

One of the major dangers of AI in the SAP cybersecurity threat landscape is the uncertainty of its outcomes. Let me elaborate on this. AI systems make decisions based on the data and algorithms they are trained on, and the results are not always verifiable by humans. This can create a situation where humans need to blindly trust AI without fully understanding how or why it made a certain decision.

This level of trust fundamentally conflicts with the widely adopted zero-trust strategies in cybersecurity. Zero-trust strategies emphasize the importance of verifying and authenticating every (access) request, regardless of its source. With AI, it is not always apparent what intention lies behind a specific answer, and it is challenging to verify the accuracy and authenticity of the information generated by AI systems.
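The contrast can be made concrete. Under zero trust, every request carries something a machine can verify, such as a cryptographic signature. Here is a minimal sketch, assuming a hypothetical HMAC-based scheme with invented users, resources, and secrets:

```python
# Minimal sketch of the zero-trust principle: no request is trusted
# implicitly; each one must carry a credential that can be verified.
# The HMAC scheme, secret, and paths are assumptions for illustration.
import hashlib
import hmac

SHARED_SECRET = b"placeholder-secret"  # would be managed and rotated securely

def sign_request(user: str, resource: str) -> str:
    message = f"{user}:{resource}".encode()
    return hmac.new(SHARED_SECRET, message, hashlib.sha256).hexdigest()

def verify_request(user: str, resource: str, signature: str) -> bool:
    expected = sign_request(user, resource)
    # Constant-time comparison avoids timing side channels
    return hmac.compare_digest(expected, signature)

# Every access is checked, regardless of where it originates.
sig = sign_request("jdoe", "/sap/finance/report")
assert verify_request("jdoe", "/sap/finance/report", sig)
assert not verify_request("jdoe", "/sap/hr/payroll", sig)
```

An answer generated by an AI system carries no equivalent artifact: there is no signature that proves its provenance or intent, which is exactly where the conflict with zero trust arises.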

With the amount of data and computing power available today, AI can be used to execute social engineering attacks without the victim's awareness. These attacks rely on tricking people into revealing sensitive information or taking actions that compromise their security, and AI can automate them and make them even more convincing.

Another level of protection and awareness may be needed

This makes it crucial for organizations to exercise caution and maintain a high level of skepticism when interacting with AI systems. The challenge of detecting malicious AI systems will only grow as AI technology advances and becomes increasingly connected and intelligent. As AI systems improve in intelligence, they may soon be able to react dynamically to events occurring within organizations. For instance, employees may receive a seemingly legitimate message from an AI system pretending to be a support employee who has reached out because of scheduled maintenance. Organizations therefore need to be proactive in detecting and defending against malicious AI systems, and to educate employees on how to recognize and respond to potential social engineering attacks.

And what if AI turns bad for cybersecurity?

A more advanced threat is the manipulation of the AI model itself. The training models of AI systems are a potential target: attackers may use social engineering techniques to manipulate the training data and algorithms used by AI systems so that they carry out malicious activities.

One example of this type of manipulation is Microsoft's Tay chatbot, which was quickly socially engineered into becoming racist. The chatbot, designed to interact with users and generate responses based on what it learned from those interactions, was deliberately fed malicious input that effectively poisoned its training data, resulting in racist and inflammatory responses. This incident highlights the potential for AI systems to be manipulated and the importance of carefully considering the data and algorithms used to train them.

It is the responsibility of every organization that owns, uses, or provides AI-based services to ensure the training models of their AI systems are protected. This may include implementing strict policies for managing and verifying the training data used by AI systems and incorporating security measures to detect and prevent manipulation of the training models.
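As one illustration of such a measure, here is a minimal, hypothetical sketch that fingerprints an approved training data set and refuses to train if any file has changed since approval. The directory layout and manifest format are assumptions for illustration.

```python
# Hypothetical sketch: detecting tampering with a training data set
# by comparing file digests against an approved manifest.
import hashlib
import json
from pathlib import Path

def fingerprint(path: Path) -> str:
    """Return the SHA-256 digest of a training data file."""
    return hashlib.sha256(path.read_bytes()).hexdigest()

def build_manifest(data_dir: Path, manifest: Path) -> None:
    """Record a trusted digest for every data file at approval time."""
    digests = {p.name: fingerprint(p) for p in sorted(data_dir.glob("*.csv"))}
    manifest.write_text(json.dumps(digests, indent=2))

def verify_before_training(data_dir: Path, manifest: Path) -> bool:
    """Refuse to train if any file deviates from its approved digest."""
    approved = json.loads(manifest.read_text())
    for name, digest in approved.items():
        if fingerprint(data_dir / name) != digest:
            print(f"REJECTED: {name} changed since approval")
            return False
    return True
```

Integrity checks like this do not stop poisoning that happens before approval, so they complement, rather than replace, careful vetting of data sources.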

Not only good – not only bad

In conclusion, the uncertainty of AI outcomes and the potential for AI to be used in malicious attacks highlight the importance of being cautious and skeptical when working with AI in cybersecurity and the SAP security landscape.

Organizations must carefully consider the risks and limitations of AI and implement appropriate measures to protect their employees, systems, and data. This may include implementing strict policies for using and managing AI systems and incorporating transparency and accountability mechanisms to ensure the ethical and safe usage of AI systems.

Posted by Christoph Nagy
