Security Operations Team

Malware Evades Machine Learning


One of the first things we noticed when we started working with TensorFlow was that some samples were smart enough to evade detection. Some threat actors are driven by the need to make money (think North Korea), others just want fame and recognition, and still others want a foothold in your network so they can steal proprietary trade secrets.

We thought it would be worthwhile to discuss how detection technology is evolving and how threat actors are evolving in turn to make it harder for security researchers to find malicious code and activity. Roughly every two years the number of known malware samples has doubled, so it stands to reason that there is an equally threatening body of code we do not know about. This is why machine learning is so critical, and why it has to be deployed as close to your end users as possible: that is where you have the visibility to act on threats as they are being seen. Not only do your analytics have to run close to the end users, they must run fast enough to disrupt activity and prevent infection by previously unknown samples, while alerting security engineers that a previously unknown malicious sample is now a known one (continual update of signatures through AI).
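To make that "unknown becomes known" loop concrete, here is a minimal sketch in Python. It is not our production pipeline: score_sample(), alert_engineers(), and the promotion threshold are all illustrative assumptions. The idea is simply that a high-confidence model verdict on a new sample gets promoted into the shared signature set, so downstream sensors can block it as a known sample without re-running the model.

```python
# Minimal sketch of promoting an ML detection into a known signature.
# The threshold and helper functions are illustrative assumptions.

import hashlib

SIGNATURES = set()          # hashes of samples already known malicious
PROMOTE_THRESHOLD = 0.95    # assumed confidence cutoff for auto-promotion

def score_sample(data: bytes) -> float:
    """Stand-in for the trained classifier (e.g., a TensorFlow model)."""
    return 0.99  # placeholder score; real model inference would run here

def alert_engineers(digest: str, score: float) -> None:
    """Stand-in for notifying security engineers of a new detection."""
    print(f"new malicious sample {digest[:12]}... (score={score:.2f})")

def inspect(data: bytes) -> str:
    digest = hashlib.sha256(data).hexdigest()
    if digest in SIGNATURES:
        return "block"                    # already a known malicious sample
    score = score_sample(data)
    if score >= PROMOTE_THRESHOLD:
        SIGNATURES.add(digest)            # continual signature update
        alert_engineers(digest, score)    # previously unknown -> now known
        return "block"
    return "allow"
```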

Granted, not all detections will be malicious, and analytics are only as good as the data they were trained on, but the vast majority of these threats will now be known to the vendor supplying the threat intelligence and to the subscribers of that service.

Evading Technology

Threat actors are getting smarter, though. We are starting to see samples that check for virtual environments and for tools that could indicate the malicious code is being analyzed. We have even come across malware that is destructive toward the containers in which it is being tested. This is the new world we face: threat actors do not want us to be able to detect, defeat, or otherwise render their malware useless. It's a costly game of cat and mouse, but it doesn't have to be that way.
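As a rough illustration, here is a simplified sketch of the kinds of environment checks an evasive sample might run before detonating. The driver paths and tool names below are common indicators, not an exhaustive or authoritative list, and psutil is an assumed dependency for the process check. Knowing what the malware looks for helps defenders make their analysis environments harder to fingerprint.

```python
# Simplified sketch of sandbox/VM fingerprinting checks seen in evasive
# samples. Paths and process names are illustrative examples only.

import os

VM_ARTIFACTS = [
    r"C:\Windows\System32\drivers\VBoxGuest.sys",  # VirtualBox guest driver
    r"C:\Windows\System32\drivers\vmhgfs.sys",     # VMware shared folders
]
ANALYSIS_TOOLS = {"wireshark.exe", "procmon.exe", "ollydbg.exe"}

def looks_like_analysis_environment() -> bool:
    # Artifact files left behind by hypervisor guest additions
    if any(os.path.exists(p) for p in VM_ARTIFACTS):
        return True
    # Analysis tools running alongside the sample
    try:
        import psutil  # assumed dependency
        running = {p.name().lower() for p in psutil.process_iter()}
        return bool(running & ANALYSIS_TOOLS)
    except ImportError:
        return False
```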

Use of Models to Detect Activity and Processes

One of the best and most effective methods for detecting malware is to look for the actions that malware has taken in the past. As this cat and mouse game plays out, there are certain things we can look for to help security researchers detect new and emerging malware. Here are a few of the standard models we have employed in our FirstWatch sensors:

  • Elimination of known scanners - Used to cut down on noise; don't alert on what is already known

  • Port scan rollups - When detecting port scans, report the ports and activity once rather than alerting on every probe

  • Domain lookups - Used to find sinkhole domains, nonexistent domains, and more

  • IP information lookups - For malware to work, it has to report back which hosts are infected, so its operators know who was compromised and what information may be available for exploitation

  • Content in HTTP headers - An indication of exfiltration

  • DNS data - Streams of data encoded in DNS lookups, a common method of exfiltration (see the sketch after this list)

  • And many more
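As an example of the DNS data model above, exfiltration over DNS tends to produce long, high-entropy subdomain labels, so flagging queries whose leading label exceeds a length and entropy threshold is a useful starting point. The sketch below is a minimal version of that idea; the thresholds are illustrative assumptions, not tuned values.

```python
# Minimal sketch of flagging DNS queries that look like data exfiltration:
# long, high-entropy leading labels. Thresholds are illustrative only.

import math
from collections import Counter

def shannon_entropy(s: str) -> float:
    counts = Counter(s)
    return -sum(c / len(s) * math.log2(c / len(s)) for c in counts.values())

def suspicious_dns_query(qname: str,
                         max_label_len: int = 40,
                         min_entropy: float = 3.5) -> bool:
    label = qname.split(".")[0]
    return len(label) >= max_label_len and shannon_entropy(label) >= min_entropy

# Example: a base32-encoded chunk of data hidden in a subdomain
print(suspicious_dns_query(
    "mzxw6ytboi2dkmrqgq3tmnzzha4tqojqgezdgnbv.evil.example.com"))  # True
print(suspicious_dns_query("www.google.com"))                      # False
```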

There are many models that can be employed. While these models are great and help security engineers find targeted malware and new and emerging threats, the data still must be reviewed by a human to confirm what the models are telling us. The more data provided to the models and the longer the training period, the more accurate they become at detection. These human-guided models take direct feedback from analysts, teaching them what is of critical importance and cutting down on false notifications over time.
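As a rough illustration of that feedback loop, the sketch below folds analyst-confirmed labels back into the training set and periodically refits the model. scikit-learn stands in for whatever framework actually backs the model, and the function names are assumptions.

```python
# Toy sketch of a human-guided feedback loop: analyst verdicts on alerts
# are folded back into the training data, and the model is refit so false
# notifications decline over time. scikit-learn is an assumed stand-in.

from sklearn.linear_model import LogisticRegression

X_train, y_train = [], []          # feature vectors and confirmed labels
model = LogisticRegression()

def analyst_review(features, predicted_label) -> int:
    """Placeholder for a human confirming or overriding the model's verdict."""
    return predicted_label  # in practice, the analyst's judgment

def feedback_round(alerts):
    """Fold analyst-confirmed labels back in and retrain."""
    for features, predicted in alerts:
        confirmed = analyst_review(features, predicted)
        X_train.append(features)
        y_train.append(confirmed)
    if len(set(y_train)) > 1:      # need both classes present to fit
        model.fit(X_train, y_train)
```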
