The paper was entitled "AI²: Training a big data machine to defend".
It starts from a position of recognising that you do need a human analyst in the loop. However, it attempts to show how those charged with defending against the ever-increasing volume of attacks can avoid information overload by using machine learning to spot the activities that need further investigation.
For those who don't want to read the paper, there is a nice video to go with it:
The system begins with a supervised learning module whereby the analyst can apply their expertise to enable the system to spot suspicious behaviour. Where this system seems to have a significant advantage over others I've seen in the past is that it then combines this with an unsupervised method of detecting outliers in the behaviour.
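To make that combination concrete, here is a minimal sketch of the general idea (not the paper's actual algorithm): an unsupervised outlier score blended with a score from an analyst-trained model, producing a ranked list for review. The `model` callable and the blending weight are hypothetical stand-ins for whatever supervised learner and fusion rule a real system would use.

```python
import statistics

def outlier_scores(values):
    """Unsupervised signal: z-score distance from the mean; higher = more anomalous."""
    mean = statistics.mean(values)
    stdev = statistics.stdev(values) or 1.0  # guard against zero spread
    return [abs(v - mean) / stdev for v in values]

def combined_ranking(events, values, model, weight=0.5):
    """Blend supervised and unsupervised scores and rank events for analyst review.

    `model` is a hypothetical callable trained on analyst labels, returning a
    risk score in [0, 1] for each event.
    """
    unsup = outlier_scores(values)
    top = max(unsup) or 1.0
    unsup = [u / top for u in unsup]          # normalise to [0, 1]
    sup = [model(e) for e in events]          # supervised risk per event
    blended = [weight * s + (1 - weight) * u for s, u in zip(sup, unsup)]
    return sorted(zip(events, blended), key=lambda p: p[1], reverse=True)

# Example: one user's activity volume is wildly out of line with the others,
# and the supervised model also considers that user risky.
ranked = combined_ranking(
    ["user_a", "user_b", "user_c"],
    [5, 6, 500],
    lambda e: 0.9 if e == "user_c" else 0.1,
)
```

The point of the blend is that either signal alone can miss things: the supervised model only knows attack patterns the analyst has already labelled, while the outlier detector surfaces novel behaviour with no labels at all.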
This combined approach more than triples the detection rate and, probably more importantly for the human workload, it reduces false positives fivefold. That's impressive!
The learning data sets were real-world logs (nearly 4 million lines), which revealed that the system was not only capable of finding the attacks found by the analysts but was also able to draw attention to attacks that had previously gone unnoticed.
Bearing in mind that these methods get better with more data to learn from, and that the human can continue to give feedback to help improve the supervised learning, this method really does appear to hold great potential. I don't think we're going to see the human taken out of the loop any time soon, but this form of analyst assistance via machine learning is likely to be something we see more of.
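That feedback cycle is essentially an active-learning loop. The sketch below is my own illustration, not the paper's implementation: each round, the highest-scoring unlabelled events are shown to the analyst, and their verdicts are folded back in so later rounds can use them. `score_fn` and `label_fn` are hypothetical stand-ins for the detector and the analyst.

```python
def feedback_loop(events, score_fn, label_fn, rounds=3, budget=5):
    """Each round: score the unlabelled events, show the top `budget` of them
    to the analyst (`label_fn`), and keep the labels for future rounds.

    `score_fn(event, labelled)` receives the labels gathered so far, standing
    in for a model retrained on analyst feedback.
    """
    labelled = {}  # event index -> analyst verdict (True = attack)
    for _ in range(rounds):
        candidates = sorted(
            (i for i in range(len(events)) if i not in labelled),
            key=lambda i: score_fn(events[i], labelled),
            reverse=True,
        )
        for i in candidates[:budget]:
            labelled[i] = label_fn(events[i])  # analyst reviews this event
    return labelled
```

The budget matters: the whole premise of the paper is that the analyst can only look at a small fraction of the events, so the system's job is to spend that attention where it counts.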
As attacks become more automated in an attempt to overwhelm our defences, it seems only logical that artificial intelligence is employed to bolster those defences. It's almost the cyber equivalent of setting a thief to catch a thief.