HUMAN-LIKE BUT MORE POWERFUL

A neural network can detect signs of misuse as it occurs, which allows system administrators to protect their entire organization with greater flexibility against intrusions.
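To make this concrete, here is a minimal sketch of such a misuse detector, assuming numeric feature vectors (packet counts, durations, and similar) have already been extracted from network events; the data below is synthetic and the feature layout is an illustrative assumption, not a real feed.

    # Minimal sketch of a neural misuse detector on synthetic event features.
    # All data here is invented for illustration.
    import numpy as np
    from sklearn.neural_network import MLPClassifier

    rng = np.random.default_rng(0)
    # Synthetic events: 4 numeric features; label 1 = known misuse pattern.
    normal = rng.normal(0.0, 1.0, size=(300, 4))
    misuse = rng.normal(2.5, 1.0, size=(60, 4))
    X = np.vstack([normal, misuse])
    y = np.array([0] * 300 + [1] * 60)

    clf = MLPClassifier(hidden_layer_sizes=(16, 8), max_iter=500, random_state=0)
    clf.fit(X, y)

    # An administrator could flag events whose misuse probability crosses a
    # tunable threshold, trading false alarms against missed intrusions.
    event = rng.normal(2.3, 1.0, size=(1, 4))
    print("misuse probability:", clf.predict_proba(event)[0, 1])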

The Trained Model Has To Be Interpretable

It should be easy for us to assess any false alarm and the quality of the model, regardless of its complexity. Most of the model families in current use, such as deep neural networks, are black box models: given an input X, they produce an output Y through a complex sequence of operations that a human can hardly interpret. This poses a problem in real-life applications. For example, when a false alarm occurs and we want to understand why it happened, we need to know whether the problem lies in the training set or in the model itself. The interpretability of a model determines how easy it will be for us to manage it, assess its quality, and correct its operation.
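One common tactic for recovering some interpretability is to fit a simple surrogate model that mimics the black box, so that an alert can be traced back to concrete feature thresholds. The sketch below assumes scikit-learn and synthetic data; the feature names are hypothetical illustrations, not part of any real system.

    # Sketch: approximate a black-box model with a shallow, readable
    # decision tree (a "surrogate") for triaging false alarms.
    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.tree import DecisionTreeClassifier, export_text

    X, y = make_classification(n_samples=500, n_features=4, random_state=0)

    # Black-box model: accurate but hard to inspect directly.
    black_box = RandomForestClassifier(random_state=0).fit(X, y)

    # Surrogate: a shallow tree trained to mimic the black box's predictions.
    surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
    surrogate.fit(X, black_box.predict(X))

    # Human-readable rules approximating the black box's decision logic;
    # useful when asking why a particular alert (or false alarm) fired.
    feature_names = ["pkt_rate", "byte_entropy", "conn_duration", "port_spread"]
    print(export_text(surrogate, feature_names=feature_names))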


Deep Learning Against Rare Attacks

Machine learning is typically most useful when malware or anomaly samples are present in the training set. But some malware, attacks, and anomalies are so rare that we lack enough samples; for a genuinely new kind of attack, we may have only a single sample to train on. In this case, we can perform discriminative unsupervised feature learning with exemplar convolutional neural networks: each rare sample becomes its own surrogate class through heavy data augmentation, and the network learns to discriminate between these classes. Deep learning also performs feature extraction and classification in a single pipeline.
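As a rough illustration of the exemplar-CNN idea, the sketch below (in PyTorch, with invented shapes, augmentations, and hyperparameters) turns each rare sample into a surrogate class of augmented views and trains a single network end to end, from raw input features to class scores.

    # Sketch of exemplar-CNN training: each rare sample is a surrogate
    # class; the network discriminates augmented views of it.
    import torch
    import torch.nn as nn

    N_EXEMPLARS = 8          # one rare attack sample per surrogate class (assumed)
    VIEWS_PER_EXEMPLAR = 32  # augmented copies of each exemplar (assumed)

    # Stand-in rare samples: in practice, real traffic or malware features
    # reshaped into a small grid (here 1x8x8, an illustrative assumption).
    exemplars = torch.randn(N_EXEMPLARS, 1, 8, 8)

    def augment(x):
        # Cheap surrogate augmentation: random scaling plus additive noise.
        return x * (0.8 + 0.4 * torch.rand(1)) + 0.05 * torch.randn_like(x)

    # Build the surrogate dataset: every view keeps its exemplar's label.
    views, labels = [], []
    for cls, ex in enumerate(exemplars):
        for _ in range(VIEWS_PER_EXEMPLAR):
            views.append(augment(ex))
            labels.append(cls)
    X = torch.stack(views)
    y = torch.tensor(labels)

    # Small CNN: the convolutional trunk extracts features, the final
    # linear layer classifies surrogate classes -- feature extraction and
    # classification in one pipeline.
    model = nn.Sequential(
        nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
        nn.MaxPool2d(2),
        nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
        nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        nn.Linear(32, N_EXEMPLARS),
    )
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.CrossEntropyLoss()

    for epoch in range(20):
        opt.zero_grad()
        loss = loss_fn(model(X), y)
        loss.backward()
        opt.step()

After training, everything before the final linear layer serves as a feature extractor whose representations can be reused for downstream intrusion detection, even though no conventional labels were available.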

Adasec - Adaptive Cyber Security