
The OWASP Top 10 2017 introduces the risk of insufficient logging and monitoring. Indeed, the inherent problems in this practice are often underestimated and misunderstood. But why does a seemingly simple task end up being a crucial point of information system security?
Logging refers to the management of logs: records that collect events related to the state of a system. Different systems produce a multitude of different logs.
Let’s take the example of a web application: a log entry can record any action performed on the web service, such as a user’s connection to the platform, the generation of an HTTP error, or access to a resource on the server.
A large amount of data is quickly collected, which implies a significant material and human cost. In addition, for logs to be useful, they require the following actions:
Contextualizing the logs is the part that requires the most experience and knowledge of the monitored system, in order to know which information should be retained and which is useless. This task also requires a lot of human time.
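One common way to make logs usable for later investigation is to emit structured records in which each event carries its context (who, what, where, when). The sketch below assumes JSON-formatted records; the field names are illustrative choices, not a standard:

```python
import json
import time

# Sketch of a contextualized log record: each entry carries the fields an
# investigator would need after an incident. Field names are illustrative.
def log_event(action, user, ip, **context):
    record = {
        "ts": time.time(),   # when the event occurred
        "action": action,    # what happened
        "user": user,        # who did it
        "ip": ip,            # where the request came from
        **context,           # feature-specific context worth retaining
    }
    return json.dumps(record)

line = log_event("file_download", "bob", "198.51.100.4",
                 resource="/files/payroll.csv")
```

The hard, human part is deciding which `**context` fields are worth retaining for each feature; that decision cannot be automated and is exactly where knowledge of the monitored system comes in.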
Once all these actions have been performed, the logs make it possible to investigate a malfunction in the application so that it does not happen again. In the case of an attack, they make it possible to identify the actors behind the incident, as well as the functionality that was abused, in order to correct the flaw that allowed the attack.
Monitoring, or supervision of an application, is the ability to have a global view of an application at a given moment, but also a history of its past states, for several elements:
Monitoring is also important for detecting a lack of server performance and for detecting attacks in real time. If a server requires high availability, monitoring user actions makes it possible to identify which functionality of the application consumes a lot of resources and is likely to cause slowdowns. On the attack side, if a large number of connections reach the service, a denial-of-service attempt may be in progress. An alert could allow the security team to react, for example by blocking IP addresses that make partial TCP connections or open too many TCP connections too quickly.
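The connection-flood detection described above can be sketched as a sliding-window counter per source IP. The window size and threshold below are illustrative values, not recommendations:

```python
from collections import defaultdict, deque
import time

# Sketch of real-time detection of a connection flood: count TCP connections
# per source IP over a sliding window and flag IPs above a threshold.
WINDOW = 10.0     # seconds (illustrative)
THRESHOLD = 100   # connections allowed per window (illustrative)

connections = defaultdict(deque)  # ip -> timestamps of recent connections

def record_connection(ip, now=None):
    """Record one connection; return True if this IP should raise an alert."""
    now = time.time() if now is None else now
    q = connections[ip]
    q.append(now)
    # Drop timestamps that have left the sliding window.
    while q and q[0] < now - WINDOW:
        q.popleft()
    return len(q) > THRESHOLD
```

In a real deployment this logic would live in the supervision tool fed by the centralized logs, and a `True` result would trigger an alert rather than block traffic directly, leaving the response decision to the security team.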
In order to detect these anomalies, a global supervision tool must be used to centralize the different logs. This tool queries the monitored services in real time and can be based on multiple elements, called metrics, such as:
The supervision of these elements must allow the creation of events (alerts), i.e. significant state changes: an excessively high CPU load, a push to a repository, a build error, or too many simultaneous TCP connections. For efficient follow-up, it is then necessary to assign criticality levels to the events, so that they can be processed in order of priority, as in a ticket management application.
Logging and monitoring are often conflated, because the monitoring system takes logs as its main input, and without quality logs there is no effective monitoring. However, log analysis should not be confused with monitoring: log analysis is post-incident work, while monitoring is permanent work.
As we have just seen, implementing such techniques can be very complex: one must be able to store, sort and process all this information. Without a good knowledge of the elements to be monitored, several problems can occur:
The accumulation of these problems makes the logs unusable. The monitoring systems then become more of a constraint and a waste of time than a help. This is known as insufficient logging and monitoring, which can quickly become a serious problem and a significant vulnerability.
Once log management is no longer efficient, it becomes difficult for the development team to detect a problem before its impact is significant. An attacker could therefore hide inside an application or a system without being detected before performing harmful actions.
Indeed, the majority of computer attacks could be anticipated and/or stopped if logging and monitoring systems were correctly configured. A multitude of real cases demonstrate the danger of such a vulnerability.
For example, Citrix, a company providing a digital workspace platform, found out that attackers had infiltrated its network only 6 months after the intrusion (from October 2018 to March 2019). This allowed the attackers to steal and delete employee files (names, social security numbers and financial information). The intrusion was carried out through brute-force attacks on user account passwords (source). This type of attack could have been detected much earlier if a monitoring system had flagged the large number of failed password attempts.
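The check that could have caught such a brute-force attack is conceptually simple: count failed password attempts per account and alert above a threshold. The sketch below uses an illustrative threshold and in-memory counters; a real system would persist state and also track attempts per source IP:

```python
from collections import Counter

# Sketch of brute-force detection: alert when an account accumulates
# too many consecutive failed password attempts. Threshold is illustrative.
MAX_FAILURES = 5

failures = Counter()  # account -> consecutive failed attempts

def record_failed_login(account):
    """Record a failed attempt; return True if an alert should be raised."""
    failures[account] += 1
    return failures[account] >= MAX_FAILURES

def record_successful_login(account):
    """Reset the counter on a successful login."""
    failures[account] = 0
```

The point is not the code itself but the prerequisite: without authentication failures being logged and centralized in the first place, no threshold can ever fire.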
It is therefore important to select the right information so as not to be drowned in alerts that would otherwise be ignored.
We have seen how complex it is to set up an efficient logging and monitoring system. To help you on this point, here are some best practices that facilitate the implementation and increase the efficiency of such systems.
Once the various systems have been put in place, it is necessary to evaluate their effectiveness.
A very simple first indicator to check is whether no alerts have been raised for a long time. There is always an anomaly to report, even if it is merely informational. Moreover, if the information system has a known problem and the monitoring system raises no alert, there is necessarily a problem with the system’s configuration.
A good test is to run a vulnerability scanner such as OpenVAS or Burp against your server and application. This type of scanner should raise a multitude of alerts. Moreover, depending on the tests they perform, scanners let you enrich the alerts that are raised. For example, if you configure a scanner to test command injection on a feature, the alerts raised for abuse of that functionality could be classified as command injection attempts.
Once these internal tests and adjustments are done, one or more application penetration tests are a very good trial for your monitoring systems, as they often highlight potential problems. However, a pentester cannot evaluate whether the audited company conscientiously performs log management and supervision of the audited server. Generally speaking, all actions performed during a pentest should raise alerts on your system.
To be exhaustive, it is possible to perform an internal white-box audit: in this case, the auditor has access to the entire infrastructure and can verify in real time that the tests performed raise the corresponding alerts.