Evading machine learning detection in a cyber-secure world

Return of the malware titans

When a bypass of a popular machine learning detection engine was announced earlier this year, more than a few delusions of grandeur about machine learning technology came crashing down. The bypass consisted of simply appending “happy strings” pulled from gaming software onto a number of malware files – a procedure known as overlaying. With that, such evil titans as the WannaCry and SamSam ransomware quickly eluded detection.

A new machine learning evasion competition

Such incidents are salutary reminders of the need for greater awareness of the true state of machine learning in cybersecurity – both its benefits and its pitfalls. In this spirit, Jakub Debski, ESET chief product officer, put his shoulder to the wheel and took on a machine learning static evasion competition hosted by VMRay, Endgame, and MRG-Effitas this past August. Despite stiff competition and technical problems in this first-ever event of its kind, Debski made ESET proud by achieving a top score of 133 out of 150 points. [Note: Placing and score are subject to final revision due to issues with the testing environment. Stay tuned to Endgame’s blog for any forthcoming official results.]

The steps taken were not easy. Altering 50 malware samples to evade detection by three machine learning models (two MalConv models and one EMBER LightGBM model), while keeping the malware fully functional after modification, is no mean feat.

First, Debski discovered that he could manipulate a header entry in the malware samples that was acting as a particularly strong feature for the EMBER classifier. The trick was to apply a customized UPX packer and then run a fuzzing script that mutated the samples until the EMBER classifier judged them benign-looking. UPX is a common enough packer that many machine learning engines do not flag its use on its own.
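The blog post doesn’t name the exact header field or tooling involved, but the core idea can be sketched roughly as follows: after packing, keep mutating a functionally harmless header value and re-scoring the file until the classifier’s verdict flips. In this illustrative Python sketch, TimeDateStamp is an assumed target and classify() is only a stand-in for whatever scoring interface the competition exposed to the EMBER model.

```python
import random
import struct
from typing import Optional

def classify(pe_bytes: bytes) -> float:
    """Placeholder for the competition's EMBER scoring interface (malware score in [0, 1])."""
    raise NotImplementedError

def fuzz_header_field(pe_bytes: bytes, threshold: float = 0.5,
                      attempts: int = 1000) -> Optional[bytes]:
    data = bytearray(pe_bytes)
    # e_lfanew (offset of the PE signature) sits at offset 0x3C of the DOS header.
    e_lfanew = struct.unpack_from("<I", data, 0x3C)[0]
    # TimeDateStamp lives 8 bytes past the PE signature; changing it does not
    # affect execution, so the sample stays fully functional.
    field_off = e_lfanew + 8
    for _ in range(attempts):
        struct.pack_into("<I", data, field_off, random.getrandbits(32))
        if classify(bytes(data)) < threshold:
            return bytes(data)  # the model now scores the sample as benign
    return None
```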

That left the second part of the competition – bypassing the two MalConv machine learning models. Debski appended some very high-scoring “happy strings” onto the malicious samples – the good ol’ overlaying trick again – which saw him through to the end of the competition.
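For illustration only, here is a minimal sketch of the overlay trick: bytes appended past the last section of a PE file are ignored by the Windows loader, so the sample keeps running, while byte-based models such as MalConv also “see” the appended benign content. The file paths below are purely hypothetical.

```python
def append_overlay(malware_path: str, donor_path: str, out_path: str) -> None:
    """Append benign-looking bytes (e.g., strings taken from clean gaming
    software) to the end of a PE file; the overlay is ignored at load time."""
    with open(malware_path, "rb") as f:
        payload = f.read()
    with open(donor_path, "rb") as f:
        happy_bytes = f.read()
    with open(out_path, "wb") as f:
        f.write(payload + happy_bytes)

# Illustrative usage (paths are hypothetical):
# append_overlay("sample.exe", "clean_game_strings.bin", "sample_evasive.exe")
```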

You can check out Debski’s play-by-play of the competition here.


Real-world attackers upping the ante

To be sure, attackers have much more in their arsenal than the simple evasion techniques allowed in the competition. Perhaps it was “thinking too much like an attacker” that gave other competitors a slight head start before Debski put aside his “attacker’s hat,” donned his “player’s jersey,” and buckled down on earning the full competition points. Real life is messier than competitions, and there are no constraints on an attacker’s methods. Much like Debski’s preliminary forays in the competition, an attacker would most likely first search for a hole in the security environment – think crashing the feature extractor – as leverage to circumvent entirely the useful application of any machine learning detections in place.
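By way of illustration, a hypothetical defensive sketch shows why “crashing the feature extractor” matters: if a pipeline fails open when a malformed sample breaks its parser, the machine learning verdict never happens at all. The functions below are placeholders, not any real product’s API.

```python
def extract_features(pe_bytes: bytes):
    """Placeholder for a real static feature extractor; may raise on malformed PEs."""
    raise NotImplementedError

def model_score(features) -> float:
    """Placeholder for the ML model's scoring call."""
    raise NotImplementedError

def score_sample(pe_bytes: bytes) -> str:
    # A sample crafted to crash the parser should not be waved through:
    # failing closed denies attackers this entire class of evasion.
    try:
        features = extract_features(pe_bytes)
    except Exception:
        return "suspicious"
    return "malicious" if model_score(features) > 0.5 else "benign"
```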

Alternatively, an attacker could take a more dynamic approach by using self-extractors and droppers that would breeze past any static machine learning models of the kind employed in this competition. Since static analysis is restricted to a point in time, it cannot reveal the nefarious behavior hidden in a time-delayed self-extractor. Moreover, when the data inside a self-extractor is compressed and/or encrypted, static analysis cannot extract anything useful from it – it’s all noise. Training a machine learning algorithm to detect all self-extractors – which clean application installers commonly are – would also cause too many false positives.
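A quick way to see the “all noise” point is to measure the Shannon entropy of a payload: compressed or encrypted data packed inside a self-extractor typically approaches 8 bits per byte, leaving little for a static model to learn from. A small self-contained sketch:

```python
import math
from collections import Counter

def shannon_entropy(data: bytes) -> float:
    """Return entropy in bits per byte; values near 8.0 indicate near-random data."""
    if not data:
        return 0.0
    total = len(data)
    return -sum((c / total) * math.log2(c / total) for c in Counter(data).values())

# Compressed/encrypted payloads score close to 8.0, while plain code and
# strings usually sit noticeably lower; statically, the former is just noise.
```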

Improve, layer, and protect

Ultimately, what this machine learning competition demonstrated is the precariousness of betting all your money on one horse – machine learning. Well aware of the wide proliferation of ready-to-use machine learning evasion techniques, ESET places great emphasis on using skilled and experienced malware analysts to supplement its machine learning detection algorithms and to ensure that they are not left entirely to their own mysterious machinations.

That’s why building a machine learning engine to detect malware, like ESET’s Augur, is an ongoing responsibility on ESET’s part. A machine learning engine needs to be constantly retrained on fresh data as the behaviors of malicious actors grow ever more sophisticated and creative, and that fresh data is a crucial input for the quality of any machine learning detection engine. At ESET, we’ve spent three decades “tinkering with the engine” to reduce false positives and misses.

Another way to improve machine learning detection is to ensure that data is unpacked before it is fed into a machine learning engine. This offers better protection against machine learning evasion via self-extractors, and it makes it possible to find, emulate, and analyze malware at the behavioral level.
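As a simplified illustration of the idea (covering only stock UPX, whereas a real engine relies on generic unpacking and emulation, and a customized packer like the one Debski used would likely defeat the stock tool), one could attempt a best-effort unpack before extracting features:

```python
import os
import subprocess
import tempfile

def unpack_if_upx(sample_path: str) -> str:
    """Best-effort UPX unpack; returns the path whose bytes should be scanned."""
    out_path = os.path.join(tempfile.mkdtemp(), "unpacked.exe")
    result = subprocess.run(["upx", "-d", sample_path, "-o", out_path],
                            capture_output=True)
    # If upx refuses (not UPX-packed, tampered/customized packer, ...),
    # fall back to scanning the original bytes rather than skipping the scan.
    return out_path if result.returncode == 0 else sample_path
```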

Each endpoint and device protected by ESET with ESET LiveGrid® turned on benefits from Augur’s ability to analyze emerging threats. Augur also works offline as a lightweight Advanced Machine Learning module to protect machines even when they are running without internet access. Enterprise clients have Augur at their disposal via ESET Dynamic Threat Defense (EDTD).

While machine learning is a great technology, no business can rely on it alone to stop all adversaries. A robust security posture for a business demands multilayered defenses, like ESET’s UEFI Scanner and Advanced Memory Scanner, that can protect endpoints from all vectors. Clever adversaries might know how to get past machine learning defenses, but they can’t get far when other layers are also in place.

*For further perspectives on Endgame’s co-sponsoring of this machine learning static evasion competition, check out their blog post here.