METHOD AND SYSTEM FOR DETECTING MALICIOUS PAYLOADS

- Vectra Networks, Inc.

Disclosed is an improved method, system, and computer program product for identifying malicious payloads. The disclosed approach identifies potentially malicious payload exchanges which may be associated with payload injection or rootkit magic key usage.

Description
BACKGROUND

In recent years, it has become increasingly difficult to detect malicious activity carried on networks. The sophistication of intrusions has increased substantially, as entities with greater resources, such as organized crime and state actors, have directed resources towards developing new modes of attacking networks.

For example, in enterprise networks it is critical to restrict user access to servers located inside data centers. One pernicious type of intrusion pertains to the situation when an outside entity attempts to deliver a malicious payload onto a host within an internal network.

With conventional solutions, perimeter security tools such as stateful firewalls are used to define explicit rules about what traffic is allowed to be sent to and received from external servers. While these rule-based approaches can deter attackers that send malicious payloads matching one of the pre-defined rule conditions, the rules can be circumvented simply by using a payload delivery method that does not match any existing rule. Given the constant evolution of the tools and approaches used by attackers, conventional firewall implementations are almost certain to include rules that can be circumvented by newer attack techniques.

As is evident, there is a great need for approaches that effectively and efficiently identify malicious payloads being delivered to a host on a network.

SUMMARY

Some embodiments provide an improved method, system, and computer program product for identifying malicious payloads. The embodiments of the invention can identify potentially malicious payload exchanges which may be associated with payload injection or rootkit magic key usage. The disclosed invention can expose when a network is undergoing a targeted network attack.

Other additional objects, features, and advantages of the invention are described in the detailed description, figures, and claims.

BRIEF DESCRIPTION OF THE DRAWINGS

FIG. 1A illustrates an example environment in which a detection engine may be implemented to perform detection of malicious payloads according to some embodiments.

FIG. 1B shows a more detailed view of the detection engine, at which network traffic is examined to learn the behavior of various activities within the network and to detect anomalies from the normal behavior.

FIG. 2 shows a flowchart of an approach to implement the learning process according to some embodiments of the invention.

FIG. 3 shows a flowchart of an approach to implement the detection process according to some embodiments of the invention.

FIG. 4 shows a flowchart of an approach to implement an updating process according to some embodiments of the invention.

FIG. 5 is a block diagram of an illustrative computing system 1400 suitable for implementing an embodiment of the present invention for performing intrusion detection.

DETAILED DESCRIPTION

Various embodiments of the methods, systems, and articles of manufacture will now be described in detail with reference to the drawings, which are provided as illustrative examples of the invention so as to enable those skilled in the art to practice the invention. Notably, the figures and the examples below are not meant to limit the scope of the present invention. Where certain elements of the present invention can be partially or fully implemented using known components (or methods or processes), only those portions of such known components (or methods or processes) that are necessary for an understanding of the present invention will be described, and the detailed descriptions of other portions of such known components (or methods or processes) will be omitted so as not to obscure the invention. Further, the present invention encompasses present and future known equivalents to the components referred to herein by way of illustration.

Various embodiments of the invention are directed to a method, system, and computer program product for detecting malicious payloads using unsupervised clustering.

FIG. 1A illustrates an example environment in which a detection engine 106 may be implemented to perform detection of malicious payloads according to some embodiments. Here, an example network 102 comprises one or more hosts (e.g. assets, clients, computing entities), such as host entities 114, 116, and 118, that may communicate with one another through one or more network devices, such as a network switch 109. The network 102 may communicate with external networks through one or more network border devices as are known in the art, such as a firewall 103. For instance, host 114 may communicate with an external server on node 108 through network protocols such as ICMP, TCP, and UDP.

However, it is possible that the external server 108 is an attacker node that seeks to send malicious payloads to the host 114. By way of example, two methods by which an attacker may penetrate a network through the firewall 103 while remaining undetected are the direct payload injection approach and the kernel rootkit backdoor approach.

In the case of direct payload injection, the attacker node 108 sends a uniquely marked payload to targeted machines (such as host 114) and instructs the machine to execute the given code (e.g., which can open a separate connection back to a machine controlled by the attacker). This results in full access to the targeted machine, allowing a direct path for exfiltration or subsequent lateral movement. The detection of such payloads is extremely difficult as the payloads can be sent over any open port and appear as connections with no response from the targeted machine.

Another possible approach is for an attacker to create a rootkit backdoor for future access to the targeted machine. The backdoor parses all incoming traffic, listening for a specific payload corresponding to a “magic key.” When the rootkit parses the attacker's payload, the server gives a compliant response and allows the attacker full access to the targeted machine. Just as with the code injection approach, an attacker in the rootkit backdoor approach can send its payload to any open port. Attackers can use ports with active services to disguise their actions. The detection of such payloads is extremely difficult as the payloads appear as connections with near zero-length requests and a benign response or no response from the targeted machine.

These payloads, if left undetected, allow attackers to persist inside a network even when they have been identified and removed from other machines.

As described in more detail below, the detection engine 106 operates by performing unsupervised machine learning using data about the network traffic within the network 102. The detection engine 106 operates to identify malicious payloads used by attackers by considering their anomalous behavior relative to normal traffic over a machine's active ports. In this manner, the disclosed invention provides an approach to identify, in real time, the use of malicious payloads, such as for example, those used in rootkit or code injection attacks inside enterprise networks.

The advantage of this approach is that it does not require a set of pre-defined rules to be configured and constantly updated to identify the presence of malicious payloads. Moreover, since the invention operates by learning the behavior of the network, this means that the detection system is effective even when the attackers are constantly evolving new attack approaches and techniques.

The detection engine 106 may be configured to monitor or tap the network switch 109 to passively analyze the internal network traffic in a way that does not hamper or slow down network traffic (e.g. by creating a copy of the network traffic for analysis). In some embodiments, the detection engine 106 is an external module or physical computer that is coupled to the switch 109. In other embodiments, the detection engine 106 may be directly integrated as an executable set of instructions into network components, such as the switch 109 or the firewall 103. In still other embodiments, the detection engine 106 may be integrated into one or more hosts in a distributed fashion (e.g. each host may have its own copy of the distributed instructions, and the hosts collectively agree to follow or adhere to instructions per protocol to collect and analyze network traffic). In some embodiments, the detection engine 106 can be implemented within one or more virtual machine(s) or containers (e.g., operating system level virtualized-application containers, such as Docker containers and/or LXC containers) sitting on one or more physical hosts. Still other embodiments may integrate the detection engine 106 into a single host that performs monitoring actions for the network 102.

In the illustrated environment, the hosts may connect to one another using different network communication protocols such as ICMP, TCP, or UDP. The detection engine 106 may be configured to work as a passive analysis device that can receive, store, and analyze all network traffic sent/received by a single host, or a multitude of hosts. In some embodiments, all network communications may be passed through the switch 109, and the detection engine 106 may tap or span (TAP/SPAN) the switch 109 to access and create a copy of the network communications; for example, by performing a packet capture and saving the network communications in one or more packet capture files. Once received, the network communications may be parsed into flows that correspond to sessions.
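
For illustration only, this parsing step could be sketched in Python as follows; the decoded packet layout (a simple tuple) and the function name are assumptions for illustration, not elements of the disclosure.

    from collections import defaultdict

    def group_into_flows(packets):
        """Group decoded packets into flows keyed by their connection 5-tuple.

        Each packet is assumed to be a (src_ip, dst_ip, src_port, dst_port,
        protocol, payload) tuple, e.g. as decoded from a packet capture file.
        The result maps each 5-tuple to the payloads observed for that flow,
        in arrival order, so that a flow can later be treated as one session.
        """
        flows = defaultdict(list)
        for src_ip, dst_ip, src_port, dst_port, protocol, payload in packets:
            flows[(src_ip, dst_ip, src_port, dst_port, protocol)].append(payload)
        return flows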

FIG. 1B shows a more detailed view of the detection engine, at which network traffic is examined to learn the behavior of various activities within the network and to detect anomalies from the normal behavior. This provides an effective way of determining whether or not communication to a host is hiding a payload injection or rootkit magic key.

In some embodiments, the detection engine performs the following tasks for each machine in the network:

    • Perform metadata extraction of the client-server sessions
    • Build clusters of representative sequences for the payloads sent from clients and servers following an established connection, for every active destination port in the network, during a defined learning phase
    • Flag client-server sequences which differ significantly from baselined behavior after the learning phase
    • Continuously identify new normal client-server sequences after the learning phase

At module 130, data from the network traffic is collected, and metadata processing is performed upon that data. In particular, every internal network communication session is processed through a parsing module and a set of metadata is extracted. This collected metadata contains information about the source and destination machines, whether the connection was successfully established, the amount of data transferred, the connection's duration, and the client's and server's first payloads following the normal connection-establishment exchanges.

Therefore, some examples of data to be collected for the network traffic could include the following (an illustrative per-session record is sketched after this list):

    • Information for clients and servers, such as IP address and host information
    • Payloads for both clients and servers
    • Amount of data being transferred
    • Duration of communications
    • Length of time delay between client request and server response
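
For illustration only, the collected metadata could be held in a per-session record such as the following sketch; the field names and types are assumptions rather than terms used in the disclosure.

    from dataclasses import dataclass

    @dataclass
    class SessionMetadata:
        client_ip: str               # source host of the session
        server_ip: str               # destination host of the session
        server_port: int             # destination port the client connected to
        established: bool            # whether the connection was successfully established
        bytes_sent: int              # amount of data transferred from client to server
        bytes_received: int          # amount of data transferred from server to client
        duration_s: float            # duration of the communications, in seconds
        response_delay_s: float      # delay between client request and server response
        first_client_payload: bytes  # client's first payload after connection establishment
        first_server_payload: bytes  # server's first payload after connection establishment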

At module 132, the metadata derived from the network communications is examined to learn the behavior of the network traffic. In some embodiments, this is accomplished by deriving abstractions of the client-server handshakes following the establishment of a normal connection between clients and servers. These abstractions are then used as vectors for storing baseline behavior models of the clients' activities and the servers' activities. In this way, the learning module creates a uniform and protocol-agnostic model for identifying known and future malicious payloads. Further details regarding an approach to implement the learning process are described below in conjunction with the description of FIG. 2.

At module 134, the system performs detection of suspicious activity with regard to payload deliveries. The metadata for network traffic is analyzed against the models developed by the learning module to identify anomalies from the normal, baseline behavior. The detection analysis may be performed either in real time or on an asynchronous basis. Further details regarding an approach to perform detection analysis are described below in conjunction with the description of FIG. 3.

At module 136, the system may update the models to reflect updated communication patterns within the network. Further details regarding an approach to perform the updating process are described below in conjunction with the description of FIG. 4.

Any of the data used or created within this system, such as the collected metadata, the baseline behavior models, and/or the analysis results, may be stored within a computer readable storage medium. The computer readable storage medium includes any combination of hardware and/or software that allows for ready access to the data that is located at the computer readable storage medium. For example, the computer readable storage medium could be implemented as computer memory and/or hard drive storage operatively managed by an operating system, and/or remote storage in a networked storage device. The computer readable storage medium could also be implemented as an electronic database system having storage on persistent and/or non-persistent storage.

FIG. 2 shows a flowchart of an approach to implement the learning process according to some embodiments of the invention. This process is used to identify the normal client and server payloads and representative sequences for each, on all active destination ports.

An unsupervised clustering algorithm is applied to identify representative sequences for a given combination of parameters. For example, in some embodiments, the learning is performed on the basis of each unique combination of a server and port. Therefore, for each server/port combination, the learning process will identify the “normal” payload characteristics, e.g., for a client payload and a server payload. This provides a baseline model of the payload characteristics that can later be used to check whether there are deviations from that normal behavior.

While the current example approach processes the data on the basis of the client and server payloads for every unique machine and destination port in the network, it is noted that other combinations may be utilized in alternate embodiments of the invention. For example, the inventive concepts are also applicable to combinations such as client/server/port combinations, client/server combinations, client/port combinations, or just ports by themselves.

For each server/port combination, the process at 202 receives the communications to be analyzed. In some embodiments, the analysis is performed on a session basis, e.g., where the analysis is performed on the collection of packets for an entire TCP conversation. It is noted that the inventive concepts may also be applied to other protocols and granularities of network traffic as well.

An abstraction is generated of the client-server handshakes for the session. The client-server handshake is represented as a sequence of bytes, where each sequence corresponds to the beginning bytes of the respective client or server payload. In this way, only the first n bytes of the communications need to be analyzed to classify and analyze the communications. Alternatively, the entirety of the communications may be analyzed.
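
A minimal sketch of this abstraction step follows; the fixed length of 64 bytes and the zero-padding of shorter payloads are assumptions chosen for illustration.

    N_BYTES = 64  # assumed abstraction length; the disclosure leaves n unspecified

    def abstract_payload(payload: bytes, n: int = N_BYTES) -> bytes:
        """Keep only the first n bytes of a payload, right-padded with zero
        bytes so that all abstractions share a common length for comparison."""
        return payload[:n].ljust(n, b"\x00")

    def abstract_session(first_client_payload: bytes, first_server_payload: bytes):
        """Build the (client, server) pair of abstractions for one session."""
        return (abstract_payload(first_client_payload),
                abstract_payload(first_server_payload))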

The normal sequences can then be learned over the course of a specified number of sessions and a specified time duration. During this time, new client or server sequences are added when they differ from previously identified sequences by more than a defined similarity threshold.

At 204, a determination is made whether the sequence under examination is for the first communications to be analyzed. If so, then in some embodiments, this first sequence is stored as the baseline pattern at 210.

The process then returns back to 202 to select the next communications to analyze. For the next communications, since it is not the first communications, the process proceeds to 206 to compare against the one or more baseline patterns.

A similarity score can be calculated to perform the comparison. In some embodiments, the similarity score is calculated using a Hamming distance between two sequences with a bias towards sequential equivalences.
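
The disclosure does not specify how the bias towards sequential equivalences is weighted. The sketch below assumes one plausible scheme: a matching byte counts more when it extends a run of consecutive matches, and the total is normalized so that identical sequences score 1.0.

    def similarity_score(a: bytes, b: bytes) -> float:
        """Score two equal-length abstractions; 1.0 means identical.

        Each matching position contributes the length of the run of
        consecutive matches it extends, so long matching runs dominate the
        score (the assumed bias towards sequential equivalences).
        """
        assert len(a) == len(b), "abstractions are padded to a common length"
        n = len(a)
        if n == 0:
            return 1.0
        weighted = 0
        run = 0
        for x, y in zip(a, b):
            if x == y:
                run += 1
                weighted += run
            else:
                run = 0
        max_weighted = n * (n + 1) // 2  # value obtained when every byte matches
        return weighted / max_weighted

Under this assumed scheme, sequences that differ only in scattered single bytes still score fairly high, while sequences that diverge early in the payload score low.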

If the similarity is not within a threshold distance, then the new sequence is assumed not to cluster with the previously identified baseline pattern(s). Therefore, at 210, the sequence is stored as a new baseline pattern. In some embodiments, in order to ensure that a converged normal state is reached, a minimum number of sessions and time must pass without identifying sequences as new baseline patterns.

If the similarity is within the threshold distance, then the new sequence is assumed to cluster with the previously identified baseline pattern(s). Therefore, no new baseline is added for this situation. Instead, the process will move to 212 to check whether there are more communications to process.

If there are further communications to analyze, then the process returns back to 202 to receive the next communications. If there are no further communications to analyze, then the process ends the learning phase at 214.
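
Putting these pieces together, the learning phase of FIG. 2 could be sketched as follows, reusing the SessionMetadata, abstract_session, and similarity_score helpers assumed above; the baselines are keyed per server/port combination as described for this embodiment, and the threshold value is illustrative.

    from collections import defaultdict

    SIMILARITY_THRESHOLD = 0.8  # illustrative value; the disclosure gives no number

    # (server_ip, server_port) -> list of (client_abstraction, server_abstraction)
    baselines = defaultdict(list)

    def learn_session(meta: SessionMetadata) -> None:
        """Store this session's abstractions as a new baseline pattern unless
        they cluster with a pattern already learned for this server/port."""
        key = (meta.server_ip, meta.server_port)
        pair = abstract_session(meta.first_client_payload, meta.first_server_payload)
        for known_client, known_server in baselines[key]:
            if (similarity_score(pair[0], known_client) >= SIMILARITY_THRESHOLD
                    and similarity_score(pair[1], known_server) >= SIMILARITY_THRESHOLD):
                return  # clusters with an existing baseline; nothing new to learn
        baselines[key].append(pair)  # first session seen, or a genuinely new pattern

In this sketch, ending the learning phase at 214 would simply mean ceasing to call learn_session once a minimum number of sessions and amount of time have passed without new baseline patterns being added.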

FIG. 3 shows a flowchart of an approach to implement the detection process according to some embodiments of the invention. This process is used to flag client-server payload representations which differ significantly from the baselined behavior(s).

At 302, the communications to be analyzed are received. As previously noted, the analysis is performed on a session basis. A sequence of bytes for the payload is extracted for the detection analysis process.

At 304, a comparison is made between the extracted sequence of the current communications and the patterns that have been saved for the baseline behaviors. In particular, the client and server payloads in all sessions following the learning period are scored relative to the learned normal behavior. As before, a Hamming distance can be used to calculate the score.

A determination is made at 306 whether the calculated score is within a defined threshold. If the pair of client-server sequences differs from the baseline patterns by more than the specified similarity threshold, then the session containing the exchange is flagged as a potentially malicious payload at 308.

If there are further communications to analyze (310), then the process returns back to 302 to receive the next communications. If there are no further communications to analyze, then the process ends the detection phase.
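
A sketch of this detection phase, reusing the baselines and helpers assumed above, is shown below. The disclosure does not state how the client and server scores are combined into a single session score, so taking the minimum of the two is an assumption here.

    def score_session(meta: SessionMetadata) -> float:
        """Best similarity of this session against the learned baselines for
        its server/port combination; 0.0 when no baseline exists for it."""
        key = (meta.server_ip, meta.server_port)
        client_abs, server_abs = abstract_session(
            meta.first_client_payload, meta.first_server_payload)
        best = 0.0
        for known_client, known_server in baselines[key]:
            # Assumed combination rule: a session matches a baseline only when
            # both its client and its server sequences are close to it.
            score = min(similarity_score(client_abs, known_client),
                        similarity_score(server_abs, known_server))
            best = max(best, score)
        return best

    def is_suspicious(meta: SessionMetadata) -> bool:
        """Flag the session as containing a potentially malicious payload."""
        return score_session(meta) < SIMILARITY_THRESHOLD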

FIG. 4 shows a flowchart of an approach to implement an updating process according to some embodiments of the invention. This process is used to identify new normal client and server payloads. It is noted that this process is employed since new client and server payloads can be learned by the system even after the learning period.

At 402, the communications to analyze are received. As previously noted, the analysis is performed on a session basis for server/port combinations. A sequence of bytes for the payload is extracted for the detection analysis process.

At 404, over a given period of time, a determination is made whether there is a new sequence whose score is below the similarity threshold and which appears consistently over that period of time.

At 406, if only one of the client or server sequences is scored below the similarity threshold but the other is above, the new sequence is added as normal behavior. If pairs of client-server sequences are flagged more than a specified number of times as malicious, they are learned as normal behavior for the specific port.
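
A sketch of the repeat-count promotion path described above follows, again reusing the assumed helpers; the promotion count is illustrative, and the one-sided case (only one of the client or server sequences scoring below the threshold) would be handled with an analogous check on the individual scores.

    PROMOTION_COUNT = 10  # illustrative number of repeats before relearning

    # ((server_ip, server_port), (client_abstraction, server_abstraction)) -> times flagged
    flag_counts = defaultdict(int)

    def update_baselines(meta: SessionMetadata) -> None:
        """Promote a repeatedly flagged client-server pattern to normal
        behavior for its specific server/port combination."""
        if not is_suspicious(meta):
            return  # session already matches a baseline; nothing to update
        key = (meta.server_ip, meta.server_port)
        pair = abstract_session(meta.first_client_payload, meta.first_server_payload)
        flag_counts[(key, pair)] += 1
        if flag_counts[(key, pair)] > PROMOTION_COUNT:
            baselines[key].append(pair)  # learn the recurring pattern as normal
            del flag_counts[(key, pair)]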

Therefore, what has been described is an improved method, system, and computer program product for identifying malicious payloads. The disclosed invention can identify potentially malicious payload exchanges which may be associated with payload injection or rootkit magic key usage. The disclosed invention can expose when a network is undergoing a targeted network attack.

System Architecture Overview

FIG. 5 is a block diagram of an illustrative computing system 1400 suitable for implementing an embodiment of the present invention for performing intrusion detection. Computer system 1400 includes a bus 1406 or other communication mechanism for communicating information, which interconnects subsystems and devices, such as processor 1407, system memory 1408 (e.g., RAM), static storage device 1409 (e.g., ROM), disk drive 1410 (e.g., magnetic or optical), communication interface 1414 (e.g., modem or Ethernet card), display 1411 (e.g., CRT or LCD), input device 1412 (e.g., keyboard), and cursor control. A database 1432 may be accessed in a storage medium using a data interface 1433.

According to one embodiment of the invention, computer system 1400 performs specific operations by processor 1407 executing one or more sequences of one or more instructions contained in system memory 1408. Such instructions may be read into system memory 1408 from another computer readable/usable medium, such as static storage device 1409 or disk drive 1410. In alternative embodiments, hard-wired circuitry may be used in place of or in combination with software instructions to implement the invention. Thus, embodiments of the invention are not limited to any specific combination of hardware circuitry and/or software. In one embodiment, the term “logic” shall mean any combination of software or hardware that is used to implement all or part of the invention.

The term “computer readable medium” or “computer usable medium” as used herein refers to any medium that participates in providing instructions to processor 1407 for execution. Such a medium may take many forms, including but not limited to, non-volatile media and volatile media. Non-volatile media includes, for example, optical or magnetic disks, such as disk drive 1410. Volatile media includes dynamic memory, such as system memory 1408.

Common forms of computer readable media include, for example, floppy disk, flexible disk, hard disk, magnetic tape, any other magnetic medium, CD-ROM, any other optical medium, punch cards, paper tape, any other physical medium with patterns of holes, RAM, PROM, EPROM, FLASH-EPROM, any other memory chip or cartridge, or any other medium from which a computer can read.

In an embodiment of the invention, execution of the sequences of instructions to practice the invention is performed by a single computer system 1400. According to other embodiments of the invention, two or more computer systems 1400 coupled by communication link 1415 (e.g., LAN, PSTN, or wireless network) may perform the sequence of instructions required to practice the invention in coordination with one another.

Computer system 1400 may transmit and receive messages, data, and instructions, including program code, i.e., application code, through communication link 1415 and communication interface 1414. Received program code may be executed by processor 1407 as it is received, and/or stored in disk drive 1410, or other non-volatile storage for later execution.

In the foregoing specification, the invention has been described with reference to specific embodiments thereof. It will, however, be evident that various modifications and changes may be made thereto without departing from the broader spirit and scope of the invention. For example, the above-described process flows are described with reference to a particular ordering of process actions. However, the ordering of many of the described process actions may be changed without affecting the scope or operation of the invention. The specification and drawings are, accordingly, to be regarded in an illustrative rather than restrictive sense.

Claims

1. A method for detecting a malicious network payload, comprising:

collecting network traffic corresponding to communications within a network;
extracting metadata from the network traffic for a client-server session;
in a learning phase, identifying from the metadata any representative sequences for payloads sent between a client and a server for the client-server session; and
in a monitoring phase, detecting for any client-server sequences corresponding to a payload that are different from the representative sequences identified in the learning phase.

2. The method of claim 1, wherein the representative sequences are identified on the basis of a unique combination of the server and a port, where baseline payload characteristics are identified for the unique combination of the server and the port.

3. The method of claim 2, further comprising:

receiving a first communication for the unique combination of the server and the port;
setting the baseline payload characteristics in correspondence to the first communication;
for an additional communication, comparing the additional communication against the baseline payload characteristics to obtain a similarity score, and identifying new baseline payload characteristics that correspond to the additional communication if the similarity score is not within a threshold distance and a minimum number of sessions or time has elapsed.

4. The method of claim 1, wherein identification of the representative sequences in the learning phase is performed by constructing one or more models corresponding to baseline behavior for the client and the server.

5. The method of claim 4, wherein new normal client-server sequences are identified even after an initial learning phase by updating the one or more models.

6. The method of claim 1, wherein the monitoring phase is implemented by:

receiving a new communication for analysis;
comparing an extracted sequence for the new communication against the representative sequences identified in the learning phase to generate a score;
determining if the score exceeds a threshold; and
identifying the new communication as a potential threat for malicious payload if the score exceeds the threshold.

7. The method of claim 1, wherein the metadata extracted from the network traffic comprises one or more of IP address information for the client and the server, payload information, quantity information for transferred data, duration of communications, or length of time delay between a client request and a server response.

8. A system for detecting a malicious network payload, comprising:

a processor;
a memory for holding programmable code; and
wherein the programmable code includes instructions for collecting network traffic corresponding to communications within a network; extracting metadata from the network traffic for a client-server session; in a learning phase, identifying from the metadata any representative sequences for payloads sent between a client and a server for the client-server session; and in a monitoring phase, detecting for any client-server sequences corresponding to a payload that are different from the representative sequences identified in the learning phase.

9. The system of claim 8, wherein the representative sequences are identified on the basis of a unique combination of the server and a port, where baseline payload characteristics are identified for the unique combination of the server and the port.

10. The system of claim 9, wherein the programmable code further includes instructions for:

receiving a first communication for the unique combination of the server and the port;
setting the baseline payload characteristics in correspondence to the first communication;
for an additional communication, comparing the additional communication against the baseline payload characteristics to obtain a similarity score, and identifying new baseline payload characteristics that correspond to the additional communication if the similarity score is not within a threshold distance and a minimum number of sessions or time has elapsed.

11. The system of claim 8, wherein identification of the representative sequences in the learning phase is performed by constructing one or more models corresponding to baseline behavior for the client and the server.

12. The system of claim 11, wherein new normal client-server sequences are identified even after an initial learning phase by updating the one or more models.

13. The system of claim 8, wherein the monitoring phase is implemented by:

receiving a new communication for analysis;
comparing an extracted sequence for the new communication against the representative sequences identified in the learning phase to generate a score;
determining if the score exceeds a threshold; and
identifying the new communication as a potential threat for malicious payload if the score exceeds the threshold.

14. The system of claim 8, wherein the metadata extracted from the network traffic comprises one or more of IP address information for the client and the server, payload information, quantity information for transferred data, duration of communications, or length of time delay between a client request and a server response.

15. A computer program product embodied on a computer readable medium, the computer readable medium having stored thereon a sequence of instructions which, when executed by a processor, executes a method for detecting a malicious network payload, comprising:

collecting network traffic corresponding to communications within a network;
extracting metadata from the network traffic for a client-server session;
in a learning phase, identifying from the metadata any representative sequences for payloads sent between a client and a server for the client-server session; and
in a monitoring phase, detecting for any client-server sequences corresponding to a payload that are different from the representative sequences identified in the learning phase.

16. The computer program product of claim 15, wherein the representative sequences are identified on the basis of a unique combination of the server and a port, where baseline payload characteristics are identified for the unique combination of the server and the port.

17. The computer program product of claim 16, wherein the sequence of instructions which, when executed by a processor, further performs:

receiving a first communication for the unique combination of the server and the port;
setting the baseline payload characteristics in correspondence to the first communication;
for an additional communication, comparing the additional communication against the baseline payload characteristics to obtain a similarity score, and identifying new baseline payload characteristics that correspond to the additional communication if the similarity score is not within a threshold distance and a minimum number of sessions or time has elapsed.

18. The computer program product of claim 15, wherein identification of the representative sequences in the learning phase is performed by constructing one or more models corresponding to baseline behavior for the client and the server.

19. The computer program product of claim 18, wherein new normal client-server sequences are identified even after an initial learning phase by updating the one or more models.

20. The computer program product of claim 15, wherein the monitoring phase is implemented by:

receiving a new communication for analysis;
comparing an extracted sequence for the new communication against the representative sequences identified in the learning phase to generate a score;
determining if the score exceeds a threshold; and
identifying the new communication as a potential threat for malicious payload if the score exceeds the threshold.

21. The computer program product of claim 15, wherein the metadata extracted from the network traffic comprises one or more of IP address information for the client and the server, payload information, quantity information for transferred data, duration of communications, or length of time delay between a client request and a server response.

Patent History
Publication number: 20180077178
Type: Application
Filed: Sep 12, 2017
Publication Date: Mar 15, 2018
Applicant: Vectra Networks, Inc. (San Jose, CA)
Inventors: Nicolas Beauchesne (Miami Beach, FL), John Steven Mancini (San Jose, CA)
Application Number: 15/702,507
Classifications
International Classification: H04L 29/06 (20060101); G06N 99/00 (20060101); G06N 5/02 (20060101);