MSS News: Targeted Single-Use Malware - Hard to Detect with Traditional Tools
While we supply many MSPs with tools to secure and protect their customers, one area has until recently been lacking. Traditional tools such as firewalls, anti-virus and endpoint protection do little to detect specialty one-time-use malware. In the sections that follow we will talk about one-time-use malware, why it's so effective and what you can do about it. We will also discuss some of the ways to detect a targeted attack and how we at Jigsaw Security plan to bring innovative solutions to this persistent problem.
One-Time-Use Malware - What Is It?
In short, this is when a threat actor writes malware to attack a single, specific organization. There are various levels of hackers out there: some are just kids assembling an attack from commonly available tools they download, while others are sophisticated nation-state threat actors who write malware from scratch. The distinction matters because much of the industry finds and identifies malware using signatures or commonly observable traits. When a piece of malware is written for one target and does not reuse recycled source code, it is very difficult to detect.
Polymorphic malware changes its configuration and even its binary signature (MD5 hash, C2 servers, etc.) every time it runs. This makes signatures all but useless for identifying this type of malware, since its attributes change with every execution. Many vendors create signatures for their products to detect what we call "known malware." Known samples rarely change, so they can be defeated using signature-based detection methods, which have been in use since the late 1980s.
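To see why hash-based signatures fail against polymorphic code, consider that changing even a single byte of a binary produces a completely different digest. The payload bytes below are invented for illustration; a real polymorphic engine mutates far more than one byte.

```python
import hashlib

def md5_hex(data: bytes) -> str:
    """Return the MD5 digest of a byte string as hex."""
    return hashlib.md5(data).hexdigest()

# Hypothetical "malware" bytes; appending one junk byte yields an
# entirely different hash, so a hash-based signature misses the variant.
payload = b"\x4d\x5a\x90\x00 illustrative payload"
variant = payload + b"\x01"

print(md5_hex(payload))
print(md5_hex(variant))
```

The two digests share nothing in common, which is exactly why each fresh mutation looks "new" to a signature database.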
In short, the majority of malware observed is of this common, off-the-shelf variety. The type we are discussing now is written by professional or proficient threat actors, and the actor's operational security leads them to use it only once, rendering signature-based detection inadequate: the sample was never released into the wild before. Single-use malware is typically written by well-financed threat actors who want to gain a foothold on a network and maintain it long term without detection. The same methods are employed by governments, by the tech industry to commit corporate espionage, and by any number of threat actors, including individuals who want to steal information and sell it to the highest bidder. These high-value attacks cost more to pull off but ensure a larger payday for the threat actor involved.
Signature-based anti-virus products have a hard time detecting this, so even though the target has a firewall and anti-virus installed, the attack can go on for years until something else tips off the security experts in charge of protecting the targeted network.
Roughly seventy percent of all malware attacks are of the common variety, and only 10 to 15 percent are targeted. The remaining categories include worms and botnet-like activity exploiting vulnerabilities that have already been patched but remain effective against organizations that have failed to patch their systems.
In short, the less a piece of malware circulates, the less likely it is to show up in an anti-virus vendor's signatures or watch lists.
So How Do You Detect or Prevent It?
In short, there are a few methods of detecting this type of malware, but they involve changing the way we protect systems. One surefire way to prevent it from executing is to allow only white-listed applications to run on workstations: when unapproved software attempts to execute, the user is prompted whether or not to allow it. This works well, except that humans inevitably make mistakes and some malware will still run and get through the defense. The other method is to use advanced analytics that look for trends within an organization.
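The white-listing idea above can be sketched as a simple hash lookup: an executable runs only if its digest appears on an approved list. This is a minimal illustration, not a product implementation; the allow-list entry and file contents are hypothetical.

```python
import hashlib

# Hypothetical allow-list of SHA-256 hashes of approved executables.
# (The entry below is the well-known SHA-256 of the bytes b"hello".)
ALLOWED_HASHES = {
    "2cf24dba5fb0a30e26e83b2ac5b9e29e1b161e5c1fa7425e73043362938b9824",
}

def is_allowed(executable_bytes: bytes) -> bool:
    """Permit execution only if the file's hash is on the white-list."""
    digest = hashlib.sha256(executable_bytes).hexdigest()
    return digest in ALLOWED_HASHES

print(is_allowed(b"hello"))    # on the list
print(is_allowed(b"dropper"))  # unknown binary, blocked
```

Note that this default-deny posture is what defeats single-use malware: the sample's hash has never been seen, so it is simply not on the list.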
Regardless of the malware that is executed, it has to send data outside the organization to be effective. Writing custom analytics to detect data leaving the network is one surefire way to catch processes sending data that shouldn't be. NetFlow is a good data source for building these analytics, as are other methods such as anomaly detection algorithms.
How does Jigsaw Security protect MSS customers against this threat?
Since it's not practical to white-list every single executable on all workstations (unless you only allow applications to run from a central trusted location), you have to do essentially the same thing with the network. Jigsaw Security employs zone detection to pick up on traffic leaving trusted zones for untrusted locations. We won't go into the specifics of this method, but in essence you have to know where your end users are authorized to send data, then alert on any instance where data is sent to an untrusted zone or location. Another method is to analyze the content itself. As an example, the FirstWatch sensor can send resets when encrypted data streams are leaving for untrusted locations, eliminating the threat of a workstation sending encrypted data to an untrusted zone.
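One common way to recognize an encrypted (or compressed) stream without decrypting it is to measure the Shannon entropy of the payload bytes: encrypted traffic approaches 8 bits per byte, while plaintext protocols sit much lower. This is a generic sketch of that heuristic, not the FirstWatch sensor's actual logic; the sample payloads are invented.

```python
import math
import os
from collections import Counter

def shannon_entropy(data: bytes) -> float:
    """Bits per byte; encrypted or compressed payloads approach 8.0."""
    if not data:
        return 0.0
    n = len(data)
    counts = Counter(data)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

plaintext = b"GET /index.html HTTP/1.1\r\nHost: example.com\r\n\r\n" * 20
ciphertext = os.urandom(1024)  # stand-in for an encrypted stream

print(round(shannon_entropy(plaintext), 2))   # low: repetitive ASCII
print(round(shannon_entropy(ciphertext), 2))  # near 8.0: looks encrypted
```

A sensor could combine a high-entropy verdict with the zone check described above and send a reset only when both fire, keeping false positives down.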
Analytic models must detect that something is occurring that should not be happening. For instance, if your competitor's IP address space is built into an analytic model, you could tell your sensors to send resets when data is observed going to the competitor's network. This is a basic example, but it illustrates what you should be looking for. By studying what traffic actually occurs and building your protections around that, your network environment will be more secure, without using signatures to do it.
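The address-space check above amounts to testing each destination against a set of modelled networks. A minimal sketch using the standard `ipaddress` module follows; the "competitor" ranges are documentation prefixes, not real assignments.

```python
import ipaddress

# Hypothetical competitor address space built into the analytic model
# (203.0.113.0/24 and 198.51.100.0/24 are reserved documentation ranges).
UNTRUSTED_NETS = [ipaddress.ip_network(n)
                  for n in ("203.0.113.0/24", "198.51.100.0/24")]

def should_reset(dst_ip: str) -> bool:
    """True if the destination falls inside a modelled untrusted zone."""
    addr = ipaddress.ip_address(dst_ip)
    return any(addr in net for net in UNTRUSTED_NETS)

print(should_reset("203.0.113.45"))   # inside the modelled range: reset
print(should_reset("93.184.216.34"))  # ordinary destination: allow
```

In practice the sensor would evaluate this per flow and log the offending process and host alongside the reset it sends.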
Blocking UDP unless it is needed is a good way of controlling your network. We realize that this is not practical in most networks, but you can tightly regulate what UDP traffic goes in or out. In addition, blocking other protocols that are not authorized by policy is another good method. You can then monitor TCP connections to ensure that only trusted endpoints are allowed to communicate with your internal network.
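That policy can be expressed as a simple default-deny egress filter over flow records: allow TCP, allow only named UDP services, and deny everything else. The allowed port set below is a hypothetical example of such a policy.

```python
# Hypothetical egress policy: TCP allowed, UDP only for DNS and NTP,
# all other protocols (GRE, ESP, etc.) denied by default.
ALLOWED_PROTOS = {"tcp"}
ALLOWED_UDP_PORTS = {53, 123}

def egress_allowed(proto: str, dst_port: int) -> bool:
    """Apply a default-deny egress policy to one flow record."""
    if proto == "udp":
        return dst_port in ALLOWED_UDP_PORTS
    return proto in ALLOWED_PROTOS

flows = [("udp", 53), ("udp", 31337), ("tcp", 443), ("gre", 0)]
for proto, port in flows:
    verdict = "allow" if egress_allowed(proto, port) else "deny"
    print(proto, port, verdict)
```

The point is not the specific ports but the shape of the rule: anything not explicitly authorized is denied, which is exactly what starves covert channels of easy exits.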
Over-Reliance on Outdated Technologies
Across the industry it is quite apparent that companies are relying on outdated technology. Our protection methods are not evolving as quickly as the threat actors we are defending against. Threat actors have been known to use non-RFC-compliant protocols to sneak data out over DNS ports, for example, since blocking DNS outright would cripple the network. This is why we don't allow computers to reach out to public DNS servers directly and instead force them through local DNS servers that are under our control and trusted. The days of using your upstream ISP's DNS are over, or they should be.
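Controlling the resolver also gives you a place to inspect queries. A common heuristic for DNS tunneling is flagging query names with unusually long labels or total length, since tunnels pack encoded data into the name itself. The thresholds and the sample tunnel-style name below are illustrative assumptions, not fixed standards.

```python
def looks_like_tunnel(qname: str, max_label: int = 40, max_total: int = 120) -> bool:
    """Heuristic: DNS tunnels pack encoded data into very long labels/names."""
    name = qname.rstrip(".")
    labels = name.split(".")
    return len(name) > max_total or any(len(label) > max_label for label in labels)

print(looks_like_tunnel("www.example.com"))  # normal lookup
# A long base64-style label carrying exfiltrated data in the query name:
print(looks_like_tunnel(
    "dGhpcyBpcyBleGZpbHRyYXRlZCBkYXRhIGVuY29kZWQgaW4gYSBsYWJlbA.evil.example"))
```

Real detectors add entropy scoring and per-domain query-rate baselines on top of length checks, but even this crude filter catches naive tunnels at the trusted resolver.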
The other approach is to use analytic products that can process data in near real time, so that you can make a decision in the time it takes for an event to occur. Most products on the market today simply let everything flow and then detect the problem after the fact. This is what makes our products different.
We need to employ good operational security on our networks and with FirstWatch we make it harder for threat actors to take advantage of the openness of the Internet.