Cyber-security system and method for detecting escalation of privileges within an access token
According to one embodiment, a method for detecting and mitigating a privilege escalation attack on an electronic device is described. The method involves operations by a user mode agent operating within a user space and a kernel mode driver operating within a kernel space. The kernel mode driver, in response to detecting an initial activation of a process being monitored, stores metadata associated with an access token. This metadata includes the initial token state information. Responsive to detecting an event associated with the process being monitored, the kernel mode driver extracts a portion of current state information for the access token for comparison to a portion of the stored token state information. Differences between content within the current state information and the stored token state information are used, at least in part, by the user mode agent to detect a privilege escalation attack.
This application is based upon and claims the benefit of priority from U.S. Provisional Patent Application No. 62/690,835 filed Jun. 27, 2018, the entire contents of which are incorporated herein by reference.
FIELD

Embodiments of the disclosure relate to the field of cybersecurity. More specifically, one embodiment of the disclosure relates to a token analysis system to detect privilege escalation that is symptomatic of a cybersecurity attack.
GENERAL BACKGROUND

Cybersecurity attacks have become a pervasive problem for organizations as many electronic devices and other resources have been subjected to attack and compromise. A cybersecurity attack, also referred to as a "cyberattack," may involve the infiltration of malicious software (e.g., malware) onto an electronic device, followed by the malware attempting to discreetly gain access to sensitive information from a data store within the infected electronic device or within a resource accessible via the infected electronic device. Thereafter, the malware may attempt to alter, destroy, exfiltrate or render non-accessible the sensitive information from the infected electronic device or resource unbeknownst to the entity in control of that electronic device or resource.
Some types of cyberattacks, namely privilege escalation attacks, have become increasingly common and have led to the loss of sensitive data and compromises of electronic devices. In a privilege escalation attack, malware authors may subvert the use of an otherwise legitimate application or binary, running with proper privilege settings, by surreptitiously escalating (e.g., increasing) privileges within an access token. In many situations, privilege escalation attacks use a variation of token stealing, or token manipulation, to escalate the privileges within the access token. After obtaining escalated privileges, the malware may conduct nefarious actions that would not be permitted on the electronic device without those privileges.
Herein, an "access token" is an object that contains a set of privileges (e.g., one or more privileges) that control access by a user to a resource available to an electronic device (e.g., endpoint, server, etc.) and control whether an instruction (or operation) may be executed. For certain types of resources, such as components managed by an operating system (OS), access controls may be applied, with the OS ensuring that only authorized processes can utilize the resources. In response to an attempt by the user to access the resource, a portion of the OS deployed within the electronic device determines whether the user is permitted access to the resource. This determination is conducted by accessing content from the access token accompanying a resource request to determine (i) whether the user possesses the necessary privileges to access the resource and (ii) what degree or level of access to content maintained by the resource is available to the user. Hence, the OS protects the electronic device and/or resources from unauthorized accesses, and, thereby, from unauthorized operations (read, write/modify, etc.) not intended by the user.
The endpoint or network account control server may maintain access (privilege) tokens associated with its users. Threat actors use a variation of token stealing to increase system privileges (e.g., from lower levels of privilege to administrator levels), whereby the threat actor may be provided access to privileged resources (e.g., protected files). By way of example of privilege level, Intel® X86 architectures running Windows® operating systems typically have four levels of privilege, with Level #0 being the highest privilege level with the broadest access rights and Level #3 being the lowest with narrow access rights. An electronic device operator may have a low privilege level, while a system administrator responsible for managing hundreds of such electronic devices, including that of the operator, may have a high privilege level. Examples of various privileges include permission to CREATE a file in a directory, to READ or DELETE a file, to access an external device, or to READ from or WRITE to a socket for communicating over the Internet.
Previous systems for determining token stealing, such as pre-configured hypervisors for example, are resource intensive. Hence, hypervisors may cause user operated endpoints (or other electronic devices with constrained resources) to experience unacceptable data processing latencies, which adversely affects the user's overall experience in using the endpoint (or electronic device).
Embodiments of the disclosure are illustrated by way of example and not by way of limitation in the figures of the accompanying drawings, in which like references indicate similar elements and in which:
Embodiments of the disclosure relate to a token analysis system and method for detecting privilege escalation within an access token and determining whether the privilege escalation is part of a cyberattack. This particular type of cyberattack is referred to as a “privilege escalation attack.” According to one embodiment of the disclosure, the computerized method starts with the storing of metadata for an access token in response to detecting a launch of a process being monitored, where the metadata includes token state information. Herein, for this embodiment, the token state information may include (i) an identifier of the user to which the access token pertains (e.g., user identifier), and (ii) a pointer to the access token data structure including a set of privileges provided to the user from the access token. This stored token state information is then available as a baseline for comparison purposes, as described below. Hence, the stored token state information, being a stored copy of contents of the access token, may also be referred to as a “baseline token snapshot.”
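For illustration only, the token state information described above (user identifier, pointer to the access token data structure, and privilege set) might be modeled as a simple immutable record. The following Python sketch uses hypothetical names and privilege strings; it is not the claimed implementation:

```python
# Hypothetical model of a "baseline token snapshot" (all names illustrative).
from dataclasses import dataclass, field

@dataclass(frozen=True)
class TokenSnapshot:
    user_id: str        # (i) identifier of the user to which the token pertains
    token_pointer: int  # (ii) pointer to the access token data structure
    privileges: frozenset = field(default_factory=frozenset)

def take_baseline_snapshot(user_id, token_pointer, privileges):
    """Copy the access token contents at process launch, before the token is
    used, so they can serve as a trusted baseline for later comparison."""
    return TokenSnapshot(user_id, token_pointer, frozenset(privileges))

baseline = take_baseline_snapshot("alice", 0x1000, {"SeChangeNotifyPrivilege"})
```

Making the record immutable (`frozen=True`) mirrors the requirement that the stored baseline itself must remain trusted and unmodified.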
It is noted that, for some operating environments, the token analysis system may maintain separate copies of baseline token snapshots (stored token state information) on a per-process basis. Alternatively, the token analysis system may maintain a single copy of the baseline token snapshot on a per-token basis, which may be referenced by pointers utilized by processes in other operating environments. Stated differently, multiple baseline token snapshots corresponding to different processes may be used or, alternatively, a single baseline token snapshot may be shared, with all processes using pointers to that same snapshot.
More specifically, as described below, different token storage schemes may be deployed to ensure that the baseline token snapshot (i.e., stored token state information) is trusted. As a first token storage scheme, a process runs in the context of a user, and thus, the process is assigned an access privilege token at launch. Additionally, during launch, a baseline token snapshot is generated. At this processing stage, the baseline token snapshot is considered to be “trusted” because the access token is not yet being used in accessing resources. The content within the current access token (hereinafter, “current token state information”) may be analyzed periodically or aperiodically during execution of the process to ensure that no unauthorized changes to the access token have been made. Such analysis may include retrieval of at least a portion of the current token state information for comparison with at least a portion of the stored token state information corresponding to the baseline token snapshot.
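The periodic analysis in this first token storage scheme amounts to diffing fields of the current token state against the trusted baseline. A minimal sketch follows, assuming dict-shaped state records with hypothetical field names:

```python
def token_state_changed(current, baseline):
    # Any difference in user identifier, token pointer, or privilege set may
    # indicate an unauthorized change to the access token.
    return (current["user_id"] != baseline["user_id"]
            or current["token_pointer"] != baseline["token_pointer"]
            or set(current["privileges"]) != set(baseline["privileges"]))

baseline = {"user_id": "alice", "token_pointer": 0x1000,
            "privileges": {"SeChangeNotifyPrivilege"}}
# A surreptitiously added privilege is caught by the comparison:
escalated = dict(baseline,
                 privileges={"SeChangeNotifyPrivilege", "SeDebugPrivilege"})
```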
In contrast, as a second token storage scheme, the baseline token snapshot is generated prior to launch of the process. Herein, the baseline token snapshot may be generated during or in response to a condition in which the user gains control of an electronic device. For instance, as an illustrative example, the baseline token snapshot may be generated during or in response to a log-on event by the user. As creation of the baseline token snapshot occurs prior to launch, and again before usage of the access token in gaining access to access-controlled resources, the baseline token snapshot is considered "trusted".
For both the first and second token storage schemes, subsequent (substitute or revised) baseline token snapshots may be generated periodically or upon the occurrence of certain captured events, e.g., when a legitimate request for change to access privileges is requested or granted. For example, access token privileges may be analyzed periodically or aperiodically during execution of a process. The analysis includes the retrieval of the content associated with the current access token (e.g., access privileges, user identifier, etc.) for comparison with corresponding content from the baseline token snapshot (also referred to as the “stored token state information”).
Operating system functions (or other function calls) may be executed to change the access token legitimately, and if the access token is being changed using a legitimate function, the baseline token snapshot is revised to produce a new baseline token snapshot for later comparison with access token content. Otherwise, the method continues by detecting a privilege escalation attack, if underway, in response to a variation between the current token state information and the stored token state information when no legitimate function has been called, which denotes a potential privilege escalation attack. The retrieval and comparison of the content of the current access token with the stored token state information (baseline token snapshot) may occur, in some embodiments for example, at the time of a file access event or other event.
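This legitimate-change handling can be summarized as a small decision function. The sketch below is an illustrative Python model (function and variable names are hypothetical, not taken from the claimed implementation):

```python
def on_token_change(current_state, baseline, legitimate_call_seen):
    """Return (new_baseline, attack_suspected). A change made through a
    legitimate OS function refreshes the baseline token snapshot; a change
    observed with no legitimate function call denotes a potential attack."""
    if current_state == baseline:
        return baseline, False           # nothing changed
    if legitimate_call_seen:
        return current_state, False      # revised (trusted) baseline snapshot
    return baseline, True                # potential privilege escalation attack
```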
Upon detecting a privilege escalation attack, the token analysis system may be adapted to terminate the malicious process and/or initiate an alert (e.g., displayed warning, text message, email message, or automated phone call) to notify an endpoint operator and/or an administrator of a detected escalation attack. Optionally, the event analysis logic within the token analysis system may access the access token data store maintained within the kernel space to reset one or more privileges associated with the compromised access token to its intended value, as stored prior to the attack.
More specifically, and according to one embodiment of the disclosure, the token analysis system includes software running in the user space and software running in the kernel space within memory of an electronic device. The “kernel space” is a portion of memory reserved for running a privileged operating system (OS) kernel, kernel extensions, and most device drivers. In contrast, “user space” is a portion of memory where application software executes outside of the kernel space. The token analysis system includes a first component operating in the user space and a second component operating in the kernel space of the electronic device.
Herein, according to one embodiment of the disclosure, the first component includes analysis logic that operates in accordance with a plurality of rules to determine whether a detected privilege escalation is malicious, suspicious or benign. The first component further provides a listing of processes to be monitored. Stored in the user space, the monitored process listing may be modifiable, where listed processes may be added, changed or removed by a security administrator via an input/output (I/O) interface or automatically by software running on the first component updating the listed processes based on whitelisted and blacklisted processes as described below. The addition, change and/or removal of a monitored process may depend, at least in part, on the current threat landscape to which the electronic device is exposed. The current threat landscape may be learned from threat intelligence, including prior privilege escalation attacks detected by the electronic device or by external sources such as (i) another electronic device communicatively coupled to the electronic device or (ii) a management system monitoring operations of the electronic device and/or other electronic devices within an enterprise. Additionally, or in the alternative, the threat intelligence may further include information gathered by a third party source (referred to as "third-party data").
The second component includes event analysis logic, which is configured to (i) monitor for certain processes to obtain the initial access token associated with each process, (ii) monitor for certain events associated with the monitored processes identified by the monitored process listing, and (iii) detect changes to access tokens that modify the level of access to resources. In general, an "event" refers to an operation, task or activity that is performed by a process running on the electronic device. The selection of events to be monitored may be based on experiential knowledge and machine learning. In some situations, the event may be undesired or unexpected, indicating a potential cyberattack is being attempted. Examples of general events that are more susceptible to a cyberattack (and thus tend to be monitored) may include, but are not limited or restricted to, a file operation, registry operation, and/or thread creation, as described below.
At start-up for each monitored process (e.g., launch), metadata for an access token that specifies the user and access privileges conferred to that monitored process may be extracted and stored in memory accessible to the second component. The metadata may include and/or provide access to (i) a pointer to the access token data structure and/or (ii) contents of the access token (e.g., user identifier and/or access privileges) for later analysis. As described below, the metadata may be referred to as “stored token state information.” Thereafter, in response to detecting one or more selected events during operation of the monitored process, the current token state information (e.g., any token information suitable for use in determining a change in content advantageous in privilege escalation such as token pointer, user identifier, and/or any access privilege) is extracted by the second component and compared to corresponding content of the stored token state information. Privilege escalation is detected based on differences between the current token state information and the stored token state information. Given that not all privileged escalations are malicious, in some situations, the second component may resort to light-weight heuristics or another detection measure to determine whether a privilege escalation is part of a cyberattack, as described below.
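The second component's launch-time capture and event-time comparison could be modeled as follows. This is a user-mode Python toy, purely for illustration; the actual component described here is a kernel mode driver, and all names are hypothetical:

```python
class TokenMonitor:
    """Toy model: store token metadata at monitored-process launch, then
    report which token fields differ when a subsequent event is captured."""
    FIELDS = ("user_id", "token_pointer", "privileges")

    def __init__(self):
        self._stored = {}  # pid -> stored token state information

    def on_process_launch(self, pid, token_state):
        self._stored[pid] = dict(token_state)   # baseline token snapshot

    def on_event(self, pid, current_state):
        baseline = self._stored.get(pid)
        if baseline is None:
            return []      # process is not being monitored
        return [f for f in self.FIELDS
                if current_state.get(f) != baseline.get(f)]

mon = TokenMonitor()
mon.on_process_launch(7, {"user_id": "alice", "token_pointer": 0x1000,
                          "privileges": frozenset({"SeChangeNotifyPrivilege"})})
diffs = mon.on_event(7, {"user_id": "alice", "token_pointer": 0x1000,
                         "privileges": frozenset({"SeChangeNotifyPrivilege",
                                                  "SeDebugPrivilege"})})
```

Here `diffs` names the token fields that changed; a non-empty list would trigger the further heuristic analysis described below, since not all privilege escalations are malicious.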
Upon detection of a privilege escalation that is determined to be malicious, the first component may cause remedial action to be taken. The first component may cause the processor to terminate the monitored process (and perhaps all processes) to prevent any further malicious activities on the endpoint. Additionally, or in the alternative, the first component may generate an alert to the user or an administrator, reset the access token to its original privilege levels, and quarantine the object that caused the execution of the malicious process by placement of the object in an isolated segment of memory for subsequent analysis and deletion. As used herein, the term "quarantine" may refer to a temporary or permanent halting in processing of the object (e.g., a file, an application or other binary, a Uniform Resource Locator "URL," etc.) initiated by the first component.
More specifically, the token analysis system monitors the operations associated with selected processes. In some embodiments, depending on the availability of endpoint resources, these monitored processes may be a limited subset of those processes running on the endpoint to avoid negatively affecting user experience on the endpoint. However, in some situations, the monitored processes may constitute all of the processes running on the electronic device. The selection of which processes to monitor may be based on a whitelist of typically benign processes and/or a blacklist of processes that are more commonly subverted to malicious activities found in the current threat landscape. The whitelist and blacklist constitute threat intelligence, which may be downloaded into the token analysis system or simply made available to (and accessible by) the token analysis system.
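One way to sketch the process-selection policy described above, with hypothetical process names and a resource "budget" standing in for endpoint constraints:

```python
def select_monitored(running, whitelist, blacklist, budget):
    """Prioritize blacklisted (commonly subverted) processes, skip whitelisted
    (typically benign) ones, and cap the total to respect endpoint resources."""
    picked = [p for p in running if p in blacklist]
    picked += [p for p in running if p not in blacklist and p not in whitelist]
    return picked[:budget]

monitored = select_monitored(
    running=["svc_update.exe", "editor.exe", "dump_tool.exe", "clock.exe"],
    whitelist={"clock.exe"},        # typically benign
    blacklist={"dump_tool.exe"},    # commonly subverted per threat intelligence
    budget=2)
```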
As described above, the token analysis system captures privilege escalations associated with the monitored processes and evaluates those changes to determine whether they are likely part of a privilege escalation attack. According to one embodiment of the disclosure, the first component may correspond to a software agent deployed within the user space of an electronic device (hereinafter, “user mode agent”), which is configured to determine whether a detected privilege escalation is part of a privilege escalation attack. Furthermore, the second component may correspond to a software driver deployed within the kernel space of the electronic device (hereinafter, “kernel mode driver”), which is configured to capture OS notifications directed to accesses of privileged resources and detect the privilege escalation.
For example, according to one embodiment of the disclosure, the kernel mode driver may receive one or more response messages (e.g., callback) directed to one or more requests (e.g., captured or hooked events) associated with monitored processes that are seeking access to a resource using privileges maintained within an access token. The user mode agent analyzes information associated with a privilege escalation detected by the kernel mode driver to determine a threat level of the privilege escalation (e.g., malicious, suspicious or benign). The user mode agent may be configured to issue an alert upon detecting a privileged escalation attack or request information associated with additional captured events produced by the monitored process involved in a suspicious privilege escalation in order to discern if the privilege escalation is malicious or benign.
To determine if a privilege escalation should be classified as malicious or not, the user mode agent may employ (run) heuristics (rules) with respect to information associated with the monitored process and/or event that attempted the privilege escalation. These privilege escalation rules may be tailored to the type of monitored process and further tailored to particular events associated with the monitored process (e.g., process create/terminate events, configuration change events, etc.). In some situations, the events may pertain to the monitored process and any “child” processes (at any tier) resulting therefrom. The heuristics may be used to validate events with monitored processes involved in the detected privilege escalation against a known set of benign and/or malicious events. If the heuristics indicate the events should be deemed malicious, the user mode agent may report a cyberattack is occurring and/or take remedial action. In some embodiments, the analysis of captured events may be conducted by a remote appliance, facility or cloud service rather than by the user mode agent on the endpoint.
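The three-way verdict produced by these heuristics might be sketched as below. The event names and rule sets are hypothetical placeholders for the tailored privilege escalation rules described above:

```python
def classify_escalation(captured_events, benign_events, malicious_events):
    """Heuristic verdict on a detected privilege escalation, validating the
    monitored process's captured events (including those of child processes)
    against known sets of benign and malicious events."""
    if any(e in malicious_events for e in captured_events):
        return "malicious"
    if captured_events and all(e in benign_events for e in captured_events):
        return "benign"
    return "suspicious"   # inconclusive: request additional captured events
```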
It is noted that the determination of whether the monitored process is malicious may be based on privilege changes alone, or on privilege changes in conjunction with additional suspicious captured events, such as opening a socket for outbound communication after a privilege escalation.
The monitored events may include software calls, such as Application Programming Interface (API) calls, system calls and the like. According to another embodiment, the user mode agent may capture (e.g., intercept or hook) any API calls issued by a monitored process, and the kernel mode driver may monitor OS notifications (in response to API calls) to the monitored process that may modify an access token, including privileges escalation. Hence, the user mode agent may determine whether a privilege escalation occurred based on events other than a legitimate API or software call, which may be indicative of maliciousness. In other words, while such APIs are often used to request, legitimately, a change in privilege, if monitoring of the token privileges identifies the change in privilege without use of such an API, the privilege escalation should be deemed at least suspicious of being for malicious purposes.
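In other words, the signal is the absence of a legitimate call alongside an observed token change. A sketch, using `AdjustTokenPrivileges` (a real Windows API commonly used for legitimate privilege changes) as an example; the function and variable names are otherwise hypothetical:

```python
LEGITIMATE_PRIVILEGE_APIS = {"AdjustTokenPrivileges"}  # example legitimate call

def escalation_suspicious(privilege_changed, observed_api_calls):
    """A privilege change observed without any preceding legitimate
    privilege-adjustment API call is deemed at least suspicious."""
    if not privilege_changed:
        return False
    return not any(c in LEGITIMATE_PRIVILEGE_APIS for c in observed_api_calls)
```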
II. Terminology

In the following description, certain terminology is used to describe various features of the invention. For example, each of the terms "logic" and "component" may be representative of hardware, firmware or software that is configured to perform one or more functions. As hardware, the logic (or component) may include circuitry having data processing and/or storage functionality. Examples of such circuitry may include, but are not limited or restricted to a hardware processor (e.g., microprocessor, one or more processor cores, a digital signal processor, a programmable gate array, a microcontroller, an application specific integrated circuit "ASIC", etc.), a semiconductor memory, or combinatorial elements.
Additionally, or in the alternative, the logic (or component) may include software such as one or more processes, one or more instances, Application Programming Interface(s) (API), subroutine(s), function(s), applet(s), servlet(s), routine(s), source code, object code, shared library/dynamic link library (dll), or even one or more instructions. This software may be stored in any type of a suitable non-transitory storage medium, or transitory storage medium (e.g., electrical, optical, acoustical or other form of propagated signals such as carrier waves, infrared signals, or digital signals). Examples of a non-transitory storage medium may include, but are not limited or restricted to a programmable circuit; non-persistent storage such as volatile memory (e.g., any type of random access memory “RAM”); or persistent storage such as non-volatile memory (e.g., read-only memory “ROM”, power-backed RAM, flash memory, phase-change memory, etc.), a solid-state drive, hard disk drive, an optical disc drive, or a portable memory device. As firmware, the logic (or component) may be stored in persistent storage.
Herein, a "communication" generally refers to related data that is received, transmitted, or exchanged within a communication session. This data may be received, transmitted, or exchanged in the form of a message, namely information in a prescribed format and transmitted in accordance with a suitable delivery protocol. A "message" may be in the form of one or more packets, frames, or any other series of bytes or bits having the prescribed format.
The term “computerized” generally represents that any corresponding operations are conducted by hardware in combination with software and/or firmware.
The term “agent” generally refers to a module of software installed on a target system (e.g., an endpoint) that enables a user (e.g., a human such as an operator of an electronic device, an administrator or an external computer system) to monitor and interact with the target system. Agents allow users to gather information about multiple aspects of the target system. In some embodiments, agents also permit users to remotely retrieve the captured events and select other contents of the target system's memory or hard drive, and could potentially be configured to modify its security rules, configuration information and select other content. The agent may be configured to either communicate over a computer network, or to read and write all relevant configuration information and acquired data to a computer storage medium, such as a hard drive or removable read/write media (USB key, etc.). In one embodiment, the agent may be built in a modular fashion. The ability to gather a particular piece of data from a target system (e.g. a list of running processes on the target system) is implemented as a discrete module of software and loaded by the agent. This allows for easy adaptation of the agent to different environments that have specific requirements for data collection.
According to one embodiment of the disclosure, the term “malware” may be broadly construed as any code, communication or activity that initiates or furthers a cyber-attack. Malware may prompt or cause unauthorized, anomalous, unintended and/or unwanted behaviors or operations constituting a security compromise of information infrastructure. For instance, malware may correspond to a type of malicious computer code that, as an illustrative example, executes an exploit to take advantage of a vulnerability in a network, electronic device or software, for example, to gain unauthorized access, harm or co-opt operation of an electronic device or misappropriate, modify or delete data. Alternatively, as another illustrative example, malware may correspond to information (e.g., executable code, script(s), data, command(s), etc.) that is designed to cause an electronic device to experience anomalous (unexpected or undesirable) behaviors. The anomalous behaviors may include a communication-based anomaly or an execution-based anomaly, which, for example, could (1) alter the functionality of an electronic device executing application software in an atypical manner; (2) alter the functionality of the electronic device executing that application software without any malicious intent; and/or (3) provide unwanted functionality which may be generally acceptable in another context.
The term “electronic device” may be construed as any computing system with the capability of processing data and/or connecting to a network. Such a network may be a public network such as the Internet or a private network such as a wireless data telecommunication network, wide area network, a type of local area network (LAN), or a combination of networks. Examples of an electronic device may include, but are not limited or restricted to, an endpoint (e.g., a laptop, a mobile phone, a tablet, a computer, a wearable such as a smartwatch, Google® Glass, health monitoring device, or the like), a standalone appliance, a server, a video game console, a set top box, a smart (networked) home appliance, a router or other intermediary communication device, a firewall, etc.
The term “transmission medium” may be construed as a physical or logical communication path between two or more electronic devices or between components within an electronic device. For instance, as a physical communication path, wired and/or wireless interconnects in the form of electrical wiring, optical fiber, cable, bus trace, or a wireless channel using radio frequency (RF) or infrared (IR), may be used. A logical communication path may simply represent a communication path between two or more electronic devices or between components within an electronic device.
The term "privilege level" refers to the delegated authority (permissions) of a user to cause (e.g., via one or more processes), an operation, task or an activity to be performed on an electronic device. A user obtains a grant of privileges by presenting credentials to a privilege-granting authority. This may be accomplished by the user logging onto a system with a username and password, and if the username and password supplied are approved, the user is granted a certain level of privileges. Such operations or tasks are tagged with a privilege level required for them to be permitted to be performed (demanded privilege). When a task tries to access a resource, or execute a privileged instruction, the processor determines whether the user making the request has the demanded privilege and, if so, access is permitted; otherwise, a "protection fault" interrupt is typically generated. Accordingly, for malware to succeed in gaining access to protected (privileged) resources or otherwise executing privileged instructions, the malware often requires escalated privilege.
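The demanded-privilege check can be illustrated with a ring-style model matching the four-level example given earlier (Level #0 highest, Level #3 lowest). This is a hedged sketch of the general concept, not processor-specific behavior:

```python
PROTECTION_FAULT = "protection fault"

def check_access(user_level, demanded_level):
    """Ring-style model: Level 0 is the most privileged, Level 3 the least.
    The request is permitted only if the user's level is at least as
    privileged (numerically <=) as the level the operation demands;
    otherwise a protection fault interrupt would typically be generated."""
    return "permitted" if user_level <= demanded_level else PROTECTION_FAULT
```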
Finally, the terms “or” and “and/or” as used herein are to be interpreted as inclusive or meaning any one or any combination. Therefore, “A, B or C” or “A, B and/or C” mean “any of the following: A; B; C; A and B; A and C; B and C; A, B and C.” An exception to this definition will occur only when a combination of elements, functions, steps or acts are in some way inherently mutually exclusive.
As this invention is susceptible to embodiments of many different forms, it is intended that the present disclosure is to be considered as an example of the principles of the invention and not intended to limit the invention to the specific embodiments shown and described.
III. General Architecture

Referring now to
As shown in
When a user-mode application (e.g., application 1601) is launched, the processor 120 creates a process 180 (instance) for that application 1601. During run-time, the process 180 performs certain events. As described above, an “event” generally refers to a task or an activity that is performed by a process initiated by a software component, such as application 1601 for example, running on the endpoint (virtual or real). In some situations, the event may be undesired or unexpected, indicating a potential cyberattack is being attempted. Examples of such events may include, but are not limited or restricted to a file operation (e.g., open file, close file, etc.), registry operation (e.g., change registry key, change registry value, etc.), thread creation, or the like.
For certain processes being monitored by the kernel mode driver 150, content associated with events produced by the monitored processes, namely current state information associated with an access token (i.e., current token state information), may be evaluated (e.g., compared) to previously stored (cached) state information associated with the access token (i.e., stored token state information) to determine whether the access token has been impermissibly altered. As described above, the stored token state information is then available as a baseline for comparison purposes, gathered during launching of the process (as described below) or during a pre-launch state (e.g., response to a log-on event, etc.). The current token state information may include an address pointer to a structure associated with the access token, a user identifier identifying the user associated with the access token, and/or access privilege parameters associated with the user.
The endpoint 100 includes one or more interfaces 180, which may include a network interface 185 and/or input/output (I/O) interfaces 190. According to this embodiment of the disclosure, these components are connected by a transmission medium 195, such as any type of interconnect (e.g., bus), and are at least partially encased in a housing made entirely or partially of a rigid material (e.g., hardened plastic, metal, glass, composite, or any combination thereof). The housing protects these components from environmental conditions.
Referring to
Initially, according to one embodiment of the disclosure, the token analysis system 110 may be configured to monitor selected processes as to whether such processes are involved in a privilege escalation attack. As shown, the user mode agent 140 includes analysis logic 210, which is communicatively coupled to a data store 220 that maintains a monitored process listing 225. Herein, the data store 220 may be implemented as part of the user mode agent 140 or may be external from the user mode agent 140 and, for example, within or remote to (depending on the embodiment) the endpoint 100. According to this embodiment, the monitored process listing 225 identifies one or more processes that are selected to be monitored by the kernel mode driver 150 for privilege escalation. Herein, the processes within the monitored process listing 225 may be identified by specific process name, although other metadata may be used in identifying such processes (e.g., pointer to data structure of the process, etc.).
The monitored process listing 225 may be pre-loaded at installation of the token analysis system 110, and thereafter, the monitored processes within the listing 225 may be altered (e.g., added, removed, updated, etc.) from time to time. Hence, the data store 220 is communicatively coupled to receive information for updating the monitored process listing 225 via a network interface 185 or an I/O interface 190 as shown in
More specifically, as shown in
Thereafter, the kernel mode driver 150 subscribes to the Operating System (OS) of the electronic device 100 to receive information associated with events for each of the monitored processes of the process listing 225 (operation 3). In particular, the kernel mode driver 150 initiates a request message 235 via an Application Programming Interface (API) to the OS of the electronic device 100, which sets the OS to generate a response message (e.g., callback) to the kernel mode driver 150 in response to a monitored process being launched (operation 4) and certain subsequent events being performed while the monitored process is active (operations 8, 12 or 16). An example of one of the APIs may include PsSetCreateProcessNotifyRoutine. As a result, response messages (callbacks) for events performed by one or more monitored processes may be further received in response to selected events being attempted by the monitored process, such as a file operation, a registry operation, a thread creation, or another selected event being conducted during the monitored process.
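The subscription flow of operations 3-4 can be sketched with a small user-mode simulation. The real driver would register a kernel callback such as PsSetCreateProcessNotifyRoutine; the MiniOS and MiniDriver classes and their method names here are hypothetical stand-ins:

```python
class MiniOS:
    """Toy stand-in for the OS notification facility: callers register a
    callback once and thereafter receive every process event raised."""
    def __init__(self):
        self._callbacks = []

    def register_process_notify(self, callback):
        self._callbacks.append(callback)

    def raise_event(self, process_name, event_type):
        for cb in self._callbacks:
            cb(process_name, event_type)

class MiniDriver:
    """Toy kernel-mode driver: subscribes once, then filters callbacks
    down to the processes named in its monitored-process listing."""
    def __init__(self, os_, monitored):
        self.monitored = set(monitored)
        self.received = []
        os_.register_process_notify(self._on_event)

    def _on_event(self, process_name, event_type):
        if process_name in self.monitored:
            self.received.append((process_name, event_type))

os_ = MiniOS()
driver = MiniDriver(os_, monitored=["winlogon.exe"])
os_.raise_event("winlogon.exe", "launch")
os_.raise_event("notepad.exe", "launch")   # not monitored: ignored
assert driver.received == [("winlogon.exe", "launch")]
```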
For instance, as shown in
More specifically, the stored token state information 320 operates as a “baseline token snapshot” to capture a trusted copy of contents of the access token. For one embodiment, the stored token state information (baseline token snapshot) 320 is generated during launch of the monitored process. The stored token state information 320 is considered to be “trusted” because the access token has not yet been used in accessing resources. It is noted that, for some operating environments, the token analysis system 110 may maintain separate copies of stored token state information 320 on a per-process basis. Alternatively, the token analysis system 110 may maintain a single copy of the stored token state information 320 on a per-token basis, which may be referenced by pointers utilized by processes in other operating environments. However, for another embodiment, in lieu of capturing the stored token state information during launching of the monitored process, the stored token state information 320 may be generated before such launching. For example, the stored token state information 320 (baseline token snapshot) may be generated during or in response to a condition in which the user gains control of an electronic device, such as during a user log-on process.
Independent of when the “trusted” baseline token snapshot 320 is captured, subsequent (substitute or revised) baseline token snapshots may be generated periodically or upon the occurrence of certain captured events. One such event may be a legitimate change to access privileges being requested or granted. Hence, access token privileges may be analyzed periodically or aperiodically during execution of a process, and the stored token state information 320 may be updated accordingly.
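The baseline-refresh decision above reduces to a small rule; in this sketch the token states are opaque placeholder values and the change_is_legitimate flag stands in for whatever legitimacy check the driver applies (both are assumptions, not the actual implementation):

```python
def process_token_event(baseline, current, change_is_legitimate):
    """Return (new_baseline, escalation_detected). A legitimate change
    replaces the stored snapshot; an unexplained change is flagged and
    the trusted baseline is kept as-is."""
    if current == baseline:
        return baseline, False
    if change_is_legitimate:
        return current, False   # revise the stored token state information
    return baseline, True       # unexplained difference: flag for escalation

assert process_token_event("v1", "v1", False) == ("v1", False)  # unchanged
assert process_token_event("v1", "v2", True) == ("v2", False)   # baseline revised
assert process_token_event("v1", "v2", False) == ("v1", True)   # flagged
```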
Referring now to
Referring back to
As an illustrative example, as shown in
More specifically, in response to the content of the current pointer 274 being altered and different from the content of the pointer 300 of
Referring back to
Referring now to
Upon receipt of the detection message 400, the analysis logic 210 of the user mode agent 140 extracts the information 410, 420 and 430 and determines a threat level of the detected privilege escalation in accordance with the privilege escalation rules 460 that control operability of the analysis logic 210 (operation 21). According to one embodiment of the disclosure, the analysis logic 210 applies portions of the extracted information 410, 420 and/or 430 to the privilege escalation rules 460 to determine a threat level for the monitored process (operation 20). The threat level may be categorized as (1) “benign” (e.g., a confirmed legitimate privilege escalation); (2) “malicious” (e.g., a confirmed unauthorized privilege escalation associated with a privilege escalation attack); or (3) “suspicious” (e.g., an unauthorized privilege escalation but indeterminate of malicious intent).
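The three-way categorization might be sketched as a rule applied to the fields of the detection message; the predicates below are invented placeholders for illustration, not the actual privilege escalation rules 460:

```python
def classify(change, allow_listed_event):
    """Map a reported token-state change onto the three threat levels:
    benign, malicious, or suspicious. Predicates are illustrative only."""
    if not change:
        return "benign"
    if allow_listed_event:
        return "benign"        # confirmed legitimate privilege escalation
    if "token_pointer" in change or "user_sid" in change:
        return "malicious"     # token swap or user-identity change
    return "suspicious"        # privilege delta of indeterminate intent

assert classify({}, False) == "benign"
assert classify({"token_pointer": (0x1000, 0x2000)}, False) == "malicious"
assert classify({"privileges": ("old", "new")}, False) == "suspicious"
assert classify({"privileges": ("old", "new")}, True) == "benign"
```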
Referring still to
Once the analysis logic 210, operating in accordance with the privilege escalation rules 460, determines that a detected privilege escalation represented by the detection message 400 is malicious, the user mode agent 140 may initiate an alert 470 to the endpoint user or an administrator as to the detection of a privilege escalation attack (operation 22). Furthermore, besides the alert 470, the user mode agent 140 may terminate and/or quarantine the malicious monitored process.
In contrast, where the analysis logic 210 determines that the detected privilege escalation is suspicious, the user mode agent 140 may initiate an event acquisition message 480 to the kernel mode driver 150 to acquire additional events associated with the monitored process for evaluation (operation 23). As the kernel mode driver 150 publishes the subsequent events (such as, for example, access using the escalated privilege to a privileged and highly sensitive file or attempted outbound transfer of data (exfiltration) from that file) to the user mode agent 140, the user mode agent 140, operating in accordance with the privilege escalation rules, may identify further suspicious or malicious activity. The suspicious determination (described above) along with these additional results may be weighted, and collectively, may prompt the analysis logic 210 of the user mode agent 140 to determine a malicious verdict or a benign verdict.
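The weighting of a suspicious determination with subsequently acquired events might be aggregated as in this sketch; the weights and threshold are arbitrary examples, not values taken from the privilege escalation rules:

```python
def final_verdict(initial_score, event_scores, threshold=1.0):
    """Combine the suspicious determination's weight with the weights of
    additionally acquired events; crossing the threshold yields a
    malicious verdict, otherwise benign."""
    total = initial_score + sum(event_scores)
    return "malicious" if total >= threshold else "benign"

# A suspicious escalation (0.5) followed by access to a sensitive file
# (0.3) and an attempted exfiltration (0.4) tips the score over 1.0.
assert final_verdict(0.5, [0.3, 0.4]) == "malicious"
assert final_verdict(0.5, [0.1]) == "benign"
```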
Referring to
In the event that the monitored process is a first process (i.e., a “parent” process), the kernel mode driver analyzes the current token state information associated with the access token for the created, monitored parent process (operations 525 and 550). However, where the monitored process is a “child” process, namely a secondary process created from another process, the kernel mode driver retrieves and analyzes the current token state information associated with the access token for the parent process (operations 530 and 540), which, by inheritance, is also associated with the child process. If a privilege escalation is detected for the parent process, the detected privilege escalation of the child process is reported to the user mode agent (operation 545). However, if no privilege escalation is detected for the parent process, the kernel mode driver analyzes the current token state information associated with the access token for the created (child) process (operations 540 and 550), since the access token for the child process may have been changed by a user after creation. If a privilege escalation is detected for the child process, the detected privilege escalation is reported to the user mode agent (operations 545 and 560). However, if no privilege escalation is detected for the monitored process, the kernel mode driver refrains from reporting information associated with the monitored process to the user mode agent (operations 560 and 570).
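The parent/child branching above can be summarized in a short sketch; the boolean escalation flags stand in for the token-state comparison described earlier, and the function name is hypothetical:

```python
def analyze_process(own_escalated, parent_escalated=None):
    """Mirror the flow above. parent_escalated is None for a parent
    (top-level) process. For a child process, the inherited parent token
    is checked first; the child's own token is checked only when the
    parent token is clean, since it may have been changed after creation."""
    if parent_escalated:            # child process with tainted parent token
        return "report"
    return "report" if own_escalated else "no report"

assert analyze_process(True) == "report"                           # parent escalated
assert analyze_process(False) == "no report"                       # clean parent
assert analyze_process(False, parent_escalated=True) == "report"   # inherited
assert analyze_process(True, parent_escalated=False) == "report"   # child's own token
```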
In the foregoing description, the invention is described with reference to specific exemplary embodiments thereof. It will, however, be evident that various modifications and changes may be made thereto without departing from the broader spirit and scope of the invention as set forth in the appended claims.
Claims
1. A computerized method for detecting and mitigating a privilege escalation attack on an electronic device, the method comprising:
- storing metadata for an access token, by a kernel mode driver, in response to an event occurring before or during a launch of a process being monitored, the stored metadata comprises stored token state information including content to provide access to at least a first set of privileges for a user associated with the access token;
- responsive to detecting an event associated with the process being monitored, extracting at least a portion of current token state information for the access token by the kernel mode driver and comparing at least the portion of the current token state information to at least a portion of the stored token state information;
- determining whether the access token has been changed legitimately by a software function that, upon execution, is permitted to change the access token;
- detecting a privilege escalation attack in response to a variation between at least the portion of the current token state information and at least the portion of the stored token state information when the access token has not been changed legitimately, wherein the detecting of the privilege escalation attack being conducted by a user mode agent in response to determining a prescribed threat level of a detected privilege escalation based on (i) a difference between a pointer to a data structure of the access token included in the current token state information and a pointer to the data structure of the access token included in the stored token state information, or (ii) a difference between a user identifier included in the current token state information and a user identifier included in the stored token state information, or (iii) a difference between a set of privileges associated with the access token obtained via the pointer to the data structure of the access token included in the current token state information and the first set of privileges; and
- revising the stored token state information when the access token has been changed legitimately.
2. The computerized method of claim 1, wherein the storing of the stored token state information is in response to the launch of the process being monitored.
3. The computerized method of claim 2, wherein prior to detecting the launch of the process, the method further comprising:
- receiving a listing of monitored processes by the kernel mode driver.
4. The computerized method of claim 3, wherein the listing of monitored processes is provided from the user mode agent to the kernel mode driver, the listing of monitored processes being modifiable depending on a current threat landscape to which the electronic device is exposed.
5. The computerized method of claim 3, wherein prior to detecting the launch of the process and after receiving the listing of monitored processes, the method further comprising:
- subscribing, by the kernel mode driver, to an operating system of the electronic device to receive information associated with events for one or more processes within the listing of the monitored processes.
6. The computerized method of claim 5, wherein prior to detecting the launch of the process and after receiving the listing of monitored processes, the method further comprising:
- issuing, by the kernel mode driver, a request message via an Application Programming Interface (API) to an operating system of the electronic device to receive information associated with events for a first process within the listing of the monitored processes; and
- receiving a response message from the operating system in response to the first process being launched.
7. The computerized method of claim 6, wherein the request message includes the stored token state information providing access to the first set of privileges for the user.
8. The computerized method of claim 6, wherein the response message includes the stored token state information that comprises a pointer to a data structure associated with the access token including the first set of privileges and an identifier of the user associated with the access token.
9. The computerized method of claim 8, wherein the detecting of the privilege escalation attack is based, at least in part, on detecting the pointer to the data structure associated with the access token being different from a pointer to a data structure for the access token provided by the extracted portion of the current token state information.
10. The computerized method of claim 8, wherein the detecting of the privilege escalation attack is based, at least in part, on detecting the identifier of the user included in the data structure associated with the access token being different from a user identifier within the data structure for the access token provided by the extracted portion of the current token state information.
11. The computerized method of claim 7, wherein the detecting of the privilege escalation attack is based on detected changes between (i) parameter values of the access token obtained from the portion of current token state information for the access token and (ii) corresponding parameter values from the first set of privileges for the user associated with the access token.
12. The computerized method of claim 1, wherein the user mode agent corresponds to a software agent deployed within a user space of the electronic device.
13. The computerized method of claim 1, wherein the detecting of the privilege escalation attack being conducted by the user mode agent in response to analysis of information within a message provided to the user mode agent by the kernel mode driver, the message includes (i) information that identifies the process being monitored, (ii) information that identifies the event, and (iii) information that identifies a change between the portion of the stored token state information and the portion of the current token state information.
14. The computerized method of claim 1, wherein the information that identifies the change between the portion of the stored token state information and the portion of the current token state information comprises (i) any difference between the pointer to the data structure of the access token included in the current token state information and the pointer to the data structure of the access token included in the stored token state information, (ii) any difference between the user identifier included in the current token state information and the user identifier included in the stored token state information, and (iii) any difference between the set of privileges associated with the access token obtained via the pointer to the data structure of the access token included in the current token state information and the first set of privileges.
15. The electronic device of claim 14, wherein the kernel mode driver, when executed by the processor, to detect the privilege escalation attack being based, at least in part, on detecting the pointer to the data structure associated with the access token being different than the pointer to the data structure for the access token provided by the extracted portion of the current token state information.
16. The electronic device of claim 14, wherein the kernel mode driver, when executed by the processor, to detect the privilege escalation attack is based, at least in part, on detecting the identifier of the user included in the data structure associated with the access token being different from a user identifier within a data structure for the access token provided by the extracted portion of the current token state information.
17. The electronic device of claim 14, wherein the user mode agent to detect the privilege escalation attack by at least detecting changes between (i) values of the access token obtained from the extracted portion of the current token state information and (ii) corresponding values from the stored token state information.
18. The electronic device of claim 14, wherein the user mode agent to detect the privilege escalation attack in response to at least determining a prescribed threat level of a detected privilege escalation based on (i) a difference between the pointer to a data structure of the access token included in the current state information and the pointer to the data structure of the access token included in the stored token state information, or (ii) a difference between the user identifier included in the current token state information and the user identifier included in the stored token state information.
19. The electronic device of claim 14, wherein the information that identifies the variation between at least the portion of the current token state information and at least the portion of the stored token state information comprises (i) any difference between the pointer to the data structure of the access token included in the current token state information and the pointer to the data structure of the access token included in the stored token state information, (ii) any difference between the user identifier included in the current token state information and the user identifier included in the stored token state information, and (iii) any difference between the set of privileges associated with the access token obtained via the pointer to the data structure of the access token included in the current token state information and the first set of privileges.
20. An electronic device, comprising:
- a processor; and
- a memory communicatively coupled to the processor, the memory includes a user mode agent and a kernel mode driver, wherein the kernel mode driver, when executed by the processor, to (i) store metadata for an access token prior to or during a launch of a process being monitored, the stored metadata comprises stored token state information including content to provide access to at least a first set of privileges for a user associated with the access token, (ii) extract at least a portion of current token state information for the access token in response to detecting an event associated with the process being monitored, (iii) compare at least the portion of the current token state information to at least a portion of the stored token state information, (iv) when the comparison identifies a change to at least the portion of the current token state information, determine whether the access token has been changed legitimately by a software function that, upon execution, is permitted to change the access token and revise the stored token state information for the access token if the access token has been changed legitimately, and the user mode agent, when executed by the processor, to detect a privilege escalation attack in response to a variation between at least the portion of the current token state information and at least the portion of the stored token state information, wherein the variation includes at least (i) a difference between a pointer to a data structure of the access token included in the current token state information and a pointer to the data structure of the access token included in the stored token state information, or (ii) a difference between a user identifier included in the current token state information and a user identifier included in the stored token state information, or (iii) a difference between a set of privileges associated with the access token obtained via the pointer to the data structure of the access token included in the current token state information and the first set of privileges.
21. The electronic device of claim 20, wherein the kernel mode driver being configured to receive a listing of monitored processes by the kernel mode driver.
22. The electronic device of claim 21, wherein the user mode agent being configured to provide the listing of monitored processes to the kernel mode driver, the listing of monitored processes being modifiable depending on a current threat landscape to which the electronic device is exposed.
23. The electronic device of claim 21, wherein the kernel mode driver to subscribe to an operating system of the electronic device to receive information associated with events for one or more processes within the listing of the monitored processes.
24. The electronic device of claim 23, wherein the kernel mode driver to issue a request message via an Application Programming Interface (API) to an operating system of the electronic device to receive information associated with events for a first process within the listing of the monitored processes and receive a response message from the operating system in response to the first process being launched.
25. The electronic device of claim 20, wherein the request message issued by the kernel mode driver includes the stored token state information providing access to the first set of privileges for the user.
26. The electronic device of claim 25, wherein the response message received by the kernel mode driver includes the stored token state information that comprises the pointer to the data structure associated with the access token including the first set of privileges and an identifier of the user associated with the access token.
27. A non-transitory computer readable medium including software that, when executed by a processor, performs operations comprising:
- storing token state information associated with an access token in response to an event occurring before or during a launch of a process being monitored, the stored token state information including content to provide access to at least a first set of privileges for a user associated with the access token;
- responsive to detecting an event associated with the process being monitored, extracting at least a portion of current token state information for the access token and comparing at least the portion of the current token state information to at least a portion of the stored token state information;
- determining whether the access token has been changed legitimately by a software function that, upon execution, is permitted to change the access token;
- revising the stored token state information when the access token has been changed legitimately; and
- detecting a privilege escalation attack in response to a variation between at least the portion of the current token state information and at least the portion of the stored token state information when the access token has not been changed legitimately, wherein the variation includes at least (i) a difference between a pointer to a data structure of the access token included in the current token state information and a pointer to the data structure of the access token included in the stored token state information, or (ii) a difference between a user identifier included in the current token state information and a user identifier included in the stored token state information, or (iii) a difference between a set of privileges associated with the access token obtained via the pointer to the data structure of the access token included in the current token state information and the first set of privileges.
9432389 | August 30, 2016 | Khalid et al. |
9438613 | September 6, 2016 | Paithane et al. |
9438622 | September 6, 2016 | Staniford et al. |
9438623 | September 6, 2016 | Thioux et al. |
9459901 | October 4, 2016 | Jung et al. |
9467460 | October 11, 2016 | Otvagin et al. |
9483644 | November 1, 2016 | Paithane et al. |
9495180 | November 15, 2016 | Ismael |
9497213 | November 15, 2016 | Thompson et al. |
9507935 | November 29, 2016 | Ismael et al. |
9516057 | December 6, 2016 | Aziz |
9519782 | December 13, 2016 | Aziz et al. |
9536091 | January 3, 2017 | Paithane et al. |
9537972 | January 3, 2017 | Edwards et al. |
9560059 | January 31, 2017 | Islam |
9565202 | February 7, 2017 | Kindlund et al. |
9591015 | March 7, 2017 | Amin et al. |
9591020 | March 7, 2017 | Aziz |
9594904 | March 14, 2017 | Jain et al. |
9594905 | March 14, 2017 | Ismael et al. |
9594912 | March 14, 2017 | Thioux et al. |
9609007 | March 28, 2017 | Rivlin et al. |
9626509 | April 18, 2017 | Khalid et al. |
9628498 | April 18, 2017 | Aziz et al. |
9628507 | April 18, 2017 | Haq et al. |
9633134 | April 25, 2017 | Ross |
9635039 | April 25, 2017 | Islam et al. |
9641546 | May 2, 2017 | Manni et al. |
9654485 | May 16, 2017 | Neumann |
9661009 | May 23, 2017 | Karandikar et al. |
9661018 | May 23, 2017 | Aziz |
9674298 | June 6, 2017 | Edwards et al. |
9680862 | June 13, 2017 | Ismael et al. |
9690606 | June 27, 2017 | Ha et al. |
9690933 | June 27, 2017 | Singh et al. |
9690935 | June 27, 2017 | Shiffer et al. |
9690936 | June 27, 2017 | Malik et al. |
9736179 | August 15, 2017 | Ismael |
9740857 | August 22, 2017 | Ismael et al. |
9747446 | August 29, 2017 | Pidathala et al. |
9756074 | September 5, 2017 | Aziz et al. |
9773112 | September 26, 2017 | Rathor et al. |
9781144 | October 3, 2017 | Otvagin et al. |
9787700 | October 10, 2017 | Amin et al. |
9787706 | October 10, 2017 | Otvagin et al. |
9792196 | October 17, 2017 | Ismael et al. |
9824209 | November 21, 2017 | Ismael et al. |
9824211 | November 21, 2017 | Wilson |
9824216 | November 21, 2017 | Khalid et al. |
9825976 | November 21, 2017 | Gomez et al. |
9825989 | November 21, 2017 | Mehra et al. |
9838408 | December 5, 2017 | Karandikar et al. |
9838411 | December 5, 2017 | Aziz |
9838416 | December 5, 2017 | Aziz |
9838417 | December 5, 2017 | Khalid et al. |
9846776 | December 19, 2017 | Paithane et al. |
9876701 | January 23, 2018 | Caldejon et al. |
9888016 | February 6, 2018 | Amin et al. |
9888019 | February 6, 2018 | Pidathala et al. |
9910988 | March 6, 2018 | Vincent et al. |
9912644 | March 6, 2018 | Cunningham |
9912681 | March 6, 2018 | Ismael et al. |
9912684 | March 6, 2018 | Aziz et al. |
9912691 | March 6, 2018 | Mesdaq et al. |
9912698 | March 6, 2018 | Thioux et al. |
9916440 | March 13, 2018 | Paithane et al. |
9921978 | March 20, 2018 | Chan et al. |
9934376 | April 3, 2018 | Ismael |
9934381 | April 3, 2018 | Kindlund et al. |
9946568 | April 17, 2018 | Ismael et al. |
9954890 | April 24, 2018 | Staniford et al. |
9973531 | May 15, 2018 | Thioux |
10002252 | June 19, 2018 | Ismael et al. |
10019338 | July 10, 2018 | Goradia et al. |
10019573 | July 10, 2018 | Silberman et al. |
10025691 | July 17, 2018 | Ismael et al. |
10025927 | July 17, 2018 | Khalid et al. |
10027689 | July 17, 2018 | Rathor et al. |
10027690 | July 17, 2018 | Aziz et al. |
10027696 | July 17, 2018 | Rivlin et al. |
10033747 | July 24, 2018 | Paithane et al. |
10033748 | July 24, 2018 | Cunningham et al. |
10033753 | July 24, 2018 | Islam et al. |
10033759 | July 24, 2018 | Kabra et al. |
10050998 | August 14, 2018 | Singh |
10068091 | September 4, 2018 | Aziz et al. |
10075455 | September 11, 2018 | Zafar et al. |
10083302 | September 25, 2018 | Paithane et al. |
10084813 | September 25, 2018 | Eyada |
10089461 | October 2, 2018 | Ha et al. |
10097573 | October 9, 2018 | Aziz |
10104102 | October 16, 2018 | Neumann |
10108446 | October 23, 2018 | Steinberg et al. |
10121000 | November 6, 2018 | Rivlin et al. |
10122746 | November 6, 2018 | Manni et al. |
10133863 | November 20, 2018 | Bu et al. |
10133866 | November 20, 2018 | Kumar et al. |
10146810 | December 4, 2018 | Shiffer et al. |
10148693 | December 4, 2018 | Singh et al. |
10165000 | December 25, 2018 | Aziz et al. |
10169585 | January 1, 2019 | Pilipenko et al. |
10176321 | January 8, 2019 | Abbasi et al. |
10181029 | January 15, 2019 | Ismael et al. |
10191861 | January 29, 2019 | Steinberg et al. |
10192052 | January 29, 2019 | Singh et al. |
10198574 | February 5, 2019 | Thioux et al. |
10200384 | February 5, 2019 | Mushtaq et al. |
10210329 | February 19, 2019 | Malik et al. |
10216927 | February 26, 2019 | Steinberg |
10218740 | February 26, 2019 | Mesdaq et al. |
10242185 | March 26, 2019 | Goradia |
20010005889 | June 28, 2001 | Albrecht |
20010047326 | November 29, 2001 | Broadbent et al. |
20020018903 | February 14, 2002 | Kokubo et al. |
20020038430 | March 28, 2002 | Edwards et al. |
20020091819 | July 11, 2002 | Melchione et al. |
20020095607 | July 18, 2002 | Lin-Hendel |
20020116627 | August 22, 2002 | Tarbotton et al. |
20020144156 | October 3, 2002 | Copeland |
20020162015 | October 31, 2002 | Fang |
20020166063 | November 7, 2002 | Lachman et al. |
20020169952 | November 14, 2002 | DiSanto et al. |
20020184528 | December 5, 2002 | Shevenell et al. |
20020188887 | December 12, 2002 | Largman et al. |
20020194490 | December 19, 2002 | Halperin et al. |
20030021728 | January 30, 2003 | Sharpe et al. |
20030074578 | April 17, 2003 | Ford et al. |
20030084318 | May 1, 2003 | Schertz |
20030101381 | May 29, 2003 | Mateev et al. |
20030115483 | June 19, 2003 | Liang |
20030188190 | October 2, 2003 | Aaron et al. |
20030191957 | October 9, 2003 | Hypponen et al. |
20030200460 | October 23, 2003 | Morota et al. |
20030212902 | November 13, 2003 | van der Made |
20030229801 | December 11, 2003 | Kouznetsov et al. |
20030237000 | December 25, 2003 | Denton et al. |
20040003323 | January 1, 2004 | Bennett et al. |
20040006473 | January 8, 2004 | Mills et al. |
20040015712 | January 22, 2004 | Szor |
20040019832 | January 29, 2004 | Arnold et al. |
20040047356 | March 11, 2004 | Bauer |
20040083408 | April 29, 2004 | Spiegel et al. |
20040088581 | May 6, 2004 | Brawn et al. |
20040093513 | May 13, 2004 | Cantrell et al. |
20040111531 | June 10, 2004 | Staniford et al. |
20040117478 | June 17, 2004 | Triulzi et al. |
20040117624 | June 17, 2004 | Brandt et al. |
20040128355 | July 1, 2004 | Chao et al. |
20040165588 | August 26, 2004 | Pandya |
20040236963 | November 25, 2004 | Danford et al. |
20040243349 | December 2, 2004 | Greifeneder et al. |
20040249911 | December 9, 2004 | Alkhatib et al. |
20040255161 | December 16, 2004 | Cavanaugh |
20040268147 | December 30, 2004 | Wiederin et al. |
20050005159 | January 6, 2005 | Oliphant |
20050021740 | January 27, 2005 | Bar et al. |
20050033960 | February 10, 2005 | Vialen et al. |
20050033989 | February 10, 2005 | Poletto et al. |
20050050148 | March 3, 2005 | Mohammadioun et al. |
20050086523 | April 21, 2005 | Zimmer et al. |
20050091513 | April 28, 2005 | Mitomo et al. |
20050091533 | April 28, 2005 | Omote et al. |
20050091652 | April 28, 2005 | Ross et al. |
20050108562 | May 19, 2005 | Khazan et al. |
20050114663 | May 26, 2005 | Cornell et al. |
20050125195 | June 9, 2005 | Brendel |
20050149726 | July 7, 2005 | Joshi et al. |
20050157662 | July 21, 2005 | Bingham et al. |
20050183143 | August 18, 2005 | Anderholm et al. |
20050201297 | September 15, 2005 | Peikari |
20050210533 | September 22, 2005 | Copeland et al. |
20050238005 | October 27, 2005 | Chen et al. |
20050240781 | October 27, 2005 | Gassoway |
20050262562 | November 24, 2005 | Gassoway |
20050265331 | December 1, 2005 | Stolfo |
20050283839 | December 22, 2005 | Cowburn |
20060010495 | January 12, 2006 | Cohen et al. |
20060015416 | January 19, 2006 | Hoffman et al. |
20060015715 | January 19, 2006 | Anderson |
20060015747 | January 19, 2006 | Van de Ven |
20060021029 | January 26, 2006 | Brickell et al. |
20060021054 | January 26, 2006 | Costa et al. |
20060031476 | February 9, 2006 | Mathes et al. |
20060047665 | March 2, 2006 | Neil |
20060070130 | March 30, 2006 | Costea et al. |
20060075496 | April 6, 2006 | Carpenter et al. |
20060095968 | May 4, 2006 | Portolani et al. |
20060101516 | May 11, 2006 | Sudaharan et al. |
20060101517 | May 11, 2006 | Banzhof et al. |
20060117385 | June 1, 2006 | Mester et al. |
20060123477 | June 8, 2006 | Raghavan et al. |
20060143709 | June 29, 2006 | Brooks et al. |
20060150249 | July 6, 2006 | Gassen et al. |
20060161983 | July 20, 2006 | Cothrell et al. |
20060161987 | July 20, 2006 | Levy-Yurista |
20060161989 | July 20, 2006 | Reshef et al. |
20060164199 | July 27, 2006 | Gilde et al. |
20060173992 | August 3, 2006 | Weber et al. |
20060179147 | August 10, 2006 | Tran et al. |
20060184632 | August 17, 2006 | Marino et al. |
20060191010 | August 24, 2006 | Benjamin |
20060221956 | October 5, 2006 | Narayan et al. |
20060236393 | October 19, 2006 | Kramer et al. |
20060242709 | October 26, 2006 | Seinfeld et al. |
20060248519 | November 2, 2006 | Jaeger et al. |
20060248582 | November 2, 2006 | Panjwani et al. |
20060251104 | November 9, 2006 | Koga |
20060288417 | December 21, 2006 | Bookbinder et al. |
20070006288 | January 4, 2007 | Mayfield et al. |
20070006313 | January 4, 2007 | Porras et al. |
20070011174 | January 11, 2007 | Takaragi et al. |
20070016951 | January 18, 2007 | Piccard et al. |
20070019286 | January 25, 2007 | Kikuchi |
20070033645 | February 8, 2007 | Jones |
20070038943 | February 15, 2007 | FitzGerald et al. |
20070064689 | March 22, 2007 | Shin et al. |
20070074169 | March 29, 2007 | Chess et al. |
20070094730 | April 26, 2007 | Bhikkaji et al. |
20070101435 | May 3, 2007 | Konanka et al. |
20070128855 | June 7, 2007 | Cho et al. |
20070142030 | June 21, 2007 | Sinha et al. |
20070143827 | June 21, 2007 | Nicodemus et al. |
20070156895 | July 5, 2007 | Vuong |
20070157180 | July 5, 2007 | Tillmann et al. |
20070157306 | July 5, 2007 | Elrod et al. |
20070168988 | July 19, 2007 | Eisner et al. |
20070171824 | July 26, 2007 | Ruello et al. |
20070174915 | July 26, 2007 | Gribble et al. |
20070192500 | August 16, 2007 | Lum |
20070192858 | August 16, 2007 | Lum |
20070198275 | August 23, 2007 | Malden et al. |
20070208822 | September 6, 2007 | Wang et al. |
20070220607 | September 20, 2007 | Sprosts et al. |
20070240218 | October 11, 2007 | Tuvell et al. |
20070240219 | October 11, 2007 | Tuvell et al. |
20070240220 | October 11, 2007 | Tuvell et al. |
20070240222 | October 11, 2007 | Tuvell et al. |
20070250930 | October 25, 2007 | Aziz et al. |
20070256132 | November 1, 2007 | Oliphant |
20070271446 | November 22, 2007 | Nakamura |
20080005782 | January 3, 2008 | Aziz |
20080018122 | January 24, 2008 | Zierler et al. |
20080028463 | January 31, 2008 | Dagon et al. |
20080040710 | February 14, 2008 | Chiriac |
20080046781 | February 21, 2008 | Childs et al. |
20080066179 | March 13, 2008 | Liu |
20080072326 | March 20, 2008 | Danford et al. |
20080077793 | March 27, 2008 | Tan et al. |
20080080518 | April 3, 2008 | Hoeflin et al. |
20080086720 | April 10, 2008 | Lekel |
20080098476 | April 24, 2008 | Syversen |
20080120722 | May 22, 2008 | Sima et al. |
20080134178 | June 5, 2008 | Fitzgerald et al. |
20080134334 | June 5, 2008 | Kim et al. |
20080141376 | June 12, 2008 | Clausen et al. |
20080184367 | July 31, 2008 | McMillan et al. |
20080184373 | July 31, 2008 | Traut et al. |
20080189787 | August 7, 2008 | Arnold et al. |
20080201778 | August 21, 2008 | Guo et al. |
20080209557 | August 28, 2008 | Herley et al. |
20080215742 | September 4, 2008 | Goldszmidt et al. |
20080222729 | September 11, 2008 | Chen et al. |
20080263665 | October 23, 2008 | Ma et al. |
20080295172 | November 27, 2008 | Bohacek |
20080301810 | December 4, 2008 | Lehane et al. |
20080307524 | December 11, 2008 | Singh et al. |
20080313738 | December 18, 2008 | Enderby |
20080320594 | December 25, 2008 | Jiang |
20090003317 | January 1, 2009 | Kasralikar et al. |
20090007100 | January 1, 2009 | Field et al. |
20090013408 | January 8, 2009 | Schipka |
20090031423 | January 29, 2009 | Liu et al. |
20090036111 | February 5, 2009 | Danford et al. |
20090037835 | February 5, 2009 | Goldman |
20090044024 | February 12, 2009 | Oberheide et al. |
20090044274 | February 12, 2009 | Budko et al. |
20090064332 | March 5, 2009 | Porras et al. |
20090077666 | March 19, 2009 | Chen et al. |
20090083369 | March 26, 2009 | Marmor |
20090083855 | March 26, 2009 | Apap et al. |
20090089879 | April 2, 2009 | Wang et al. |
20090094697 | April 9, 2009 | Provos et al. |
20090113425 | April 30, 2009 | Ports et al. |
20090125976 | May 14, 2009 | Wassermann et al. |
20090126015 | May 14, 2009 | Monastyrsky et al. |
20090126016 | May 14, 2009 | Sobko et al. |
20090133125 | May 21, 2009 | Choi et al. |
20090144823 | June 4, 2009 | Lamastra et al. |
20090158430 | June 18, 2009 | Borders |
20090172815 | July 2, 2009 | Gu et al. |
20090187992 | July 23, 2009 | Poston |
20090193293 | July 30, 2009 | Stolfo et al. |
20090198651 | August 6, 2009 | Shiffer et al. |
20090198670 | August 6, 2009 | Shiffer et al. |
20090198689 | August 6, 2009 | Frazier et al. |
20090199274 | August 6, 2009 | Frazier et al. |
20090199296 | August 6, 2009 | Xie et al. |
20090228233 | September 10, 2009 | Anderson et al. |
20090241187 | September 24, 2009 | Troyansky |
20090241190 | September 24, 2009 | Todd et al. |
20090265692 | October 22, 2009 | Godefroid et al. |
20090271867 | October 29, 2009 | Zhang |
20090300415 | December 3, 2009 | Zhang et al. |
20090300761 | December 3, 2009 | Park et al. |
20090328185 | December 31, 2009 | Berg et al. |
20090328221 | December 31, 2009 | Blumfield et al. |
20100005146 | January 7, 2010 | Drako et al. |
20100011205 | January 14, 2010 | McKenna |
20100017546 | January 21, 2010 | Poo et al. |
20100030996 | February 4, 2010 | Butler, II |
20100031353 | February 4, 2010 | Thomas et al. |
20100037314 | February 11, 2010 | Perdisci et al. |
20100043073 | February 18, 2010 | Kuwamura |
20100054278 | March 4, 2010 | Stolfo et al. |
20100058474 | March 4, 2010 | Hicks |
20100064044 | March 11, 2010 | Nonoyama |
20100077481 | March 25, 2010 | Polyakov et al. |
20100083376 | April 1, 2010 | Pereira et al. |
20100115621 | May 6, 2010 | Staniford et al. |
20100132038 | May 27, 2010 | Zaitsev |
20100154056 | June 17, 2010 | Smith et al. |
20100180344 | July 15, 2010 | Malyshev et al. |
20100192223 | July 29, 2010 | Ismael et al. |
20100220863 | September 2, 2010 | Dupaquis et al. |
20100235831 | September 16, 2010 | Dittmer |
20100251104 | September 30, 2010 | Massand |
20100281102 | November 4, 2010 | Chinta et al. |
20100281541 | November 4, 2010 | Stolfo et al. |
20100281542 | November 4, 2010 | Stolfo et al. |
20100287260 | November 11, 2010 | Peterson et al. |
20100299754 | November 25, 2010 | Amit et al. |
20100306173 | December 2, 2010 | Frank |
20110004737 | January 6, 2011 | Greenebaum |
20110025504 | February 3, 2011 | Lyon et al. |
20110041179 | February 17, 2011 | Ståhlberg |
20110047594 | February 24, 2011 | Mahaffey et al. |
20110047620 | February 24, 2011 | Mahaffey et al. |
20110055907 | March 3, 2011 | Narasimhan et al. |
20110078794 | March 31, 2011 | Manni et al. |
20110093951 | April 21, 2011 | Aziz |
20110099620 | April 28, 2011 | Stavrou et al. |
20110099633 | April 28, 2011 | Aziz |
20110099635 | April 28, 2011 | Silberman et al. |
20110113231 | May 12, 2011 | Kaminsky |
20110145918 | June 16, 2011 | Jung et al. |
20110145920 | June 16, 2011 | Mahaffey et al. |
20110145934 | June 16, 2011 | Abramovici et al. |
20110167493 | July 7, 2011 | Song et al. |
20110167494 | July 7, 2011 | Bowen et al. |
20110173213 | July 14, 2011 | Frazier et al. |
20110173460 | July 14, 2011 | Ito et al. |
20110219449 | September 8, 2011 | St. Neitzel et al. |
20110219450 | September 8, 2011 | McDougal et al. |
20110225624 | September 15, 2011 | Sawhney et al. |
20110225655 | September 15, 2011 | Niemela et al. |
20110247072 | October 6, 2011 | Staniford et al. |
20110265182 | October 27, 2011 | Peinado et al. |
20110289582 | November 24, 2011 | Kejriwal et al. |
20110302587 | December 8, 2011 | Nishikawa et al. |
20110307954 | December 15, 2011 | Melnik et al. |
20110307955 | December 15, 2011 | Kaplan et al. |
20110307956 | December 15, 2011 | Yermakov et al. |
20110314546 | December 22, 2011 | Aziz et al. |
20120023593 | January 26, 2012 | Puder et al. |
20120054869 | March 1, 2012 | Yen et al. |
20120066698 | March 15, 2012 | Yanoo |
20120079596 | March 29, 2012 | Thomas et al. |
20120084859 | April 5, 2012 | Radinsky et al. |
20120096553 | April 19, 2012 | Srivastava et al. |
20120110667 | May 3, 2012 | Zubrilin et al. |
20120117652 | May 10, 2012 | Manni et al. |
20120121154 | May 17, 2012 | Xue et al. |
20120124426 | May 17, 2012 | Maybee et al. |
20120174186 | July 5, 2012 | Aziz et al. |
20120174196 | July 5, 2012 | Bhogavilli et al. |
20120174218 | July 5, 2012 | McCoy et al. |
20120198279 | August 2, 2012 | Schroeder |
20120210423 | August 16, 2012 | Friedrichs et al. |
20120222121 | August 30, 2012 | Staniford et al. |
20120255015 | October 4, 2012 | Sahita et al. |
20120255017 | October 4, 2012 | Sallam |
20120260342 | October 11, 2012 | Dube et al. |
20120266244 | October 18, 2012 | Green et al. |
20120278886 | November 1, 2012 | Luna |
20120297489 | November 22, 2012 | Dequevy |
20120330801 | December 27, 2012 | McDougal et al. |
20120331553 | December 27, 2012 | Aziz et al. |
20130014259 | January 10, 2013 | Gribble et al. |
20130036472 | February 7, 2013 | Aziz |
20130047257 | February 21, 2013 | Aziz |
20130074185 | March 21, 2013 | McDougal et al. |
20130086684 | April 4, 2013 | Mohler |
20130097699 | April 18, 2013 | Balupari et al. |
20130097706 | April 18, 2013 | Titonis et al. |
20130111587 | May 2, 2013 | Goel et al. |
20130117852 | May 9, 2013 | Stute |
20130117855 | May 9, 2013 | Kim et al. |
20130139264 | May 30, 2013 | Brinkley et al. |
20130160125 | June 20, 2013 | Likhachev et al. |
20130160127 | June 20, 2013 | Jeong et al. |
20130160130 | June 20, 2013 | Mendelev et al. |
20130160131 | June 20, 2013 | Madou et al. |
20130167236 | June 27, 2013 | Sick |
20130174214 | July 4, 2013 | Duncan |
20130185789 | July 18, 2013 | Hagiwara et al. |
20130185795 | July 18, 2013 | Winn et al. |
20130185798 | July 18, 2013 | Saunders et al. |
20130191915 | July 25, 2013 | Antonakakis et al. |
20130196649 | August 1, 2013 | Paddon et al. |
20130227691 | August 29, 2013 | Aziz et al. |
20130246370 | September 19, 2013 | Bartram et al. |
20130247186 | September 19, 2013 | LeMasters |
20130263260 | October 3, 2013 | Mahaffey et al. |
20130291109 | October 31, 2013 | Staniford et al. |
20130298243 | November 7, 2013 | Kumar et al. |
20130318038 | November 28, 2013 | Shiffer et al. |
20130318073 | November 28, 2013 | Shiffer et al. |
20130325791 | December 5, 2013 | Shiffer et al. |
20130325792 | December 5, 2013 | Shiffer et al. |
20130325871 | December 5, 2013 | Shiffer et al. |
20130325872 | December 5, 2013 | Shiffer et al. |
20140032875 | January 30, 2014 | Butler |
20140053260 | February 20, 2014 | Gupta et al. |
20140053261 | February 20, 2014 | Gupta et al. |
20140130158 | May 8, 2014 | Wang et al. |
20140137180 | May 15, 2014 | Lukacs et al. |
20140169762 | June 19, 2014 | Ryu |
20140179360 | June 26, 2014 | Jackson et al. |
20140181131 | June 26, 2014 | Ross |
20140189687 | July 3, 2014 | Jung et al. |
20140189866 | July 3, 2014 | Shiffer et al. |
20140189882 | July 3, 2014 | Jung et al. |
20140237600 | August 21, 2014 | Silberman et al. |
20140280245 | September 18, 2014 | Wilson |
20140283037 | September 18, 2014 | Sikorski et al. |
20140283063 | September 18, 2014 | Thompson et al. |
20140328204 | November 6, 2014 | Klotsche et al. |
20140337836 | November 13, 2014 | Ismael |
20140344926 | November 20, 2014 | Cunningham et al. |
20140351930 | November 27, 2014 | Sun |
20140351935 | November 27, 2014 | Shao et al. |
20140380473 | December 25, 2014 | Bu et al. |
20140380474 | December 25, 2014 | Paithane et al. |
20150007312 | January 1, 2015 | Pidathala et al. |
20150096022 | April 2, 2015 | Vincent et al. |
20150096023 | April 2, 2015 | Mesdaq et al. |
20150096024 | April 2, 2015 | Haq et al. |
20150096025 | April 2, 2015 | Ismael |
20150180886 | June 25, 2015 | Staniford et al. |
20150186645 | July 2, 2015 | Aziz et al. |
20150199513 | July 16, 2015 | Ismael et al. |
20150199531 | July 16, 2015 | Ismael et al. |
20150199532 | July 16, 2015 | Ismael et al. |
20150220735 | August 6, 2015 | Paithane et al. |
20150372980 | December 24, 2015 | Eyada |
20160004869 | January 7, 2016 | Ismael et al. |
20160006756 | January 7, 2016 | Ismael et al. |
20160044000 | February 11, 2016 | Cunningham |
20160127393 | May 5, 2016 | Aziz et al. |
20160191547 | June 30, 2016 | Zafar et al. |
20160191550 | June 30, 2016 | Ismael et al. |
20160261612 | September 8, 2016 | Mesdaq et al. |
20160285914 | September 29, 2016 | Singh et al. |
20160301703 | October 13, 2016 | Aziz |
20160335110 | November 17, 2016 | Paithane et al. |
20170083703 | March 23, 2017 | Abbasi et al. |
20180013770 | January 11, 2018 | Ismael |
20180048660 | February 15, 2018 | Paithane et al. |
20180121316 | May 3, 2018 | Ismael et al. |
20180288077 | October 4, 2018 | Siddiqui et al. |
20190268152 | August 29, 2019 | Sandoval |
2439806 | January 2008 | GB |
2490431 | October 2012 | GB |
0206928 | January 2002 | WO |
02/23805 | March 2002 | WO |
2007117636 | October 2007 | WO |
2008/041950 | April 2008 | WO |
2011/084431 | July 2011 | WO |
2011/112348 | September 2011 | WO |
2012/075336 | June 2012 | WO |
2012/145066 | October 2012 | WO |
2013/067505 | May 2013 | WO |
- Venezia, Paul, “NetDetector Captures Intrusions”, InfoWorld Issue 27, (“Venezia”), (Jul. 14, 2003).
- Vladimir Getov: “Security as a Service in Smart Clouds—Opportunities and Concerns”, Computer Software and Applications Conference (COMPSAC), 2012 IEEE 36th Annual, IEEE, Jul. 16, 2012 (Jul. 16, 2012).
- Wahid et al., Characterising the Evolution in Scanning Activity of Suspicious Hosts, Oct. 2009, Third International Conference on Network and System Security, pp. 344-350.
- Whyte, et al., “DNS-Based Detection of Scanning Worms in an Enterprise Network”, Proceedings of the 12th Annual Network and Distributed System Security Symposium, (Feb. 2005), 15 pages.
- Williamson, Matthew M., “Throttling Viruses: Restricting Propagation to Defeat Malicious Mobile Code”, ACSAC Conference, Las Vegas, NV, USA, (Dec. 2002), pp. 1-9.
- Yuhei Kawakoya et al: “Memory behavior-based automatic malware unpacking in stealth debugging environment”, Malicious and Unwanted Software (Malware), 2010 5th International Conference on, IEEE, Piscataway, NJ, USA, Oct. 19, 2010, pp. 39-46, XP031833827, ISBN:978-1-4244-8-9353-1.
- Zhang et al., The Effects of Threading, Infection Time, and Multiple-Attacker Collaboration on Malware Propagation, Sep. 2009, IEEE 28th International Symposium on Reliable Distributed Systems, pp. 73-82.
- “Mining Specification of Malicious Behavior”—Jha et al., UCSB, Sep. 2007, https://www.cs.ucsb.edu/~chris/research/doc/esec07_mining.pdf.
- “Network Security: NetDetector—Network Intrusion Forensic System (NIFS) Whitepaper”, (“NetDetector Whitepaper”), (2003).
- “When Virtual is Better Than Real”, IEEEXplore Digital Library, available at, http://ieeexplore.ieee.org/xpl/articleDetails.isp?reload=true&arnumbe- r=990073, (Dec. 7, 2013).
- Abdullah, et al., Visualizing Network Data for Intrusion Detection, 2005 IEEE Workshop on Information Assurance and Security, pp. 100-108.
- Adetoye, Adedayo , et al., “Network Intrusion Detection & Response System”, (“Adetoye”), (Sep. 2003).
- Apostolopoulos, George; Hassapis, Constantinos; “V-eM: A cluster of Virtual Machines for Robust, Detailed, and High-Performance Network Emulation”, 14th IEEE International Symposium on Modeling, Analysis, and Simulation of Computer and Telecommunication Systems, Sep. 11-14, 2006, pp. 117-126.
- Aura, Tuomas, “Scanning electronic documents for personally identifiable information”, Proceedings of the 5th ACM workshop on Privacy in electronic society. ACM, 2006.
- Baecher, “The Nepenthes Platform: An Efficient Approach to collect Malware”, Springer-verlag Berlin Heidelberg, (2006), pp. 165-184.
- Bayer, et al., “Dynamic Analysis of Malicious Code”, J Comput Virol, Springer-Verlag, France., (2006), pp. 67-77.
- Boubalos, Chris , “extracting syslog data out of raw pcap dumps, seclists.org, Honeypots mailing list archives”, available at http://seclists.org/honeypots/2003/q2/319 (“Boubalos”), (Jun. 5, 2003).
- Chaudet, C., et al., “Optimal Positioning of Active and Passive Monitoring Devices”, International Conference on Emerging Networking Experiments and Technologies, Proceedings of the 2005 ACM Conference on Emerging Network Experiment and Technology, CoNEXT '05, Toulouse, France, (Oct. 2005), pp. 71-82.
- Chen, P. M. and Noble, B. D., “When Virtual is Better Than Real, Department of Electrical Engineering and Computer Science”, University of Michigan (“Chen”) (2001).
- Cisco “Intrusion Prevention for the Cisco ASA 5500-x Series” Data Sheet (2012).
- Cohen, M.I., “PyFlag—An advanced network forensic framework”, Digital Investigation 5, Elsevier, (2008), pp. S112-S120.
- Costa, M. , et al., “Vigilante: End-to-End Containment of Internet Worms”, SOSP '05, Association for Computing Machinery, Inc., Brighton U.K., (Oct. 23-26, 2005).
- Didier Stevens, “Malicious PDF Documents Explained”, Security & Privacy, IEEE, IEEE Service Center, Los Alamitos, CA, US, vol. 9, No. 1, Jan. 1, 2011, pp. 80-82, XP011329453, Issn: 1540-7993, DOI: 10.1109/MSP.2011.14.
- Distler, “Malware Analysis: An Introduction”, SANS Institute InfoSec Reading Room, SANS Institute, (2007).
- Dunlap, George W., et al., “ReVirt: Enabling Intrusion Analysis through Virtual-Machine Logging and Replay”, Proceeding of the 5th Symposium on Operating Systems Design and Implementation, USENIX Association, (“Dunlap”), (Dec. 9, 2002).
- FireEye Malware Analysis & Exchange Network, Malware Protection System, FireEye Inc., 2010.
- FireEye Malware Analysis, Modern Malware Forensics, FireEye Inc., 2010.
- FireEye v.6.0 Security Target, pp. 1-35, Version 1.1, FireEye Inc., May 2011.
- Goel, et al., Reconstructing System State for Intrusion Analysis, Apr. 2008 SIGOPS Operating Systems Review, vol. 42 Issue 3, pp. 21-28.
- Gregg Keizer: “Microsoft's HoneyMonkeys Show Patching Windows Works”, Aug. 8, 2005, XP055143386, Retrieved from the Internet: URL:http://www.informationweek.com/microsofts-honeymonkeys-show-patching-windows-works/d/d-id/1035069? [retrieved on Jun. 1, 2016].
- Heng Yin et al, Panorama: Capturing System-Wide Information Flow for Malware Detection and Analysis, Research Showcase @ CMU, Carnegie Mellon University, 2007.
- Hiroshi Shinotsuka, Malware Authors Using New Techniques to Evade Automated Threat Analysis Systems, Oct. 26, 2012, http://www.symantec.com/connect/blogs/, pp. 1-4.
- Idika et al., A-Survey-of-Malware-Detection-Techniques, Feb. 2, 2007, Department of Computer Science, Purdue University.
- Isohara, Takamasa, Keisuke Takemori, and Ayumu Kubota. “Kernel-based behavior analysis for android malware detection.” Computational intelligence and Security (CIS), 2011 Seventh International Conference on. IEEE, 2011.
- Kaeo, Merike , “Designing Network Security”, (“Kaeo”), (Nov. 2003).
- Kevin A Roundy et al: “Hybrid Analysis and Control of Malware”, Sep. 15, 2010, Recent Advances in Intrusion Detection, Springer Berlin Heidelberg, Berlin, Heidelberg, pp. 317-338, XP019150454 ISBN:978-3-642-15511-6.
- Khaled Salah et al: “Using Cloud Computing to Implement a Security Overlay Network”, Security & Privacy, IEEE, IEEE Service Center, Los Alamitos, CA, US, vol. 11, No. 1, Jan. 1, 2013 (Jan. 1, 2013).
- Kim, H. , et al., “Autograph: Toward Automated, Distributed Worm Signature Detection”, Proceedings of the 13th Usenix Security Symposium (Security 2004), San Diego, (Aug. 2004), pp. 271-286.
- King, Samuel T., et al., “Operating System Support for Virtual Machines”, (“King”), (2003).
- Kreibich, C., et al., “Honeycomb—Creating Intrusion Detection Signatures Using Honeypots”, 2nd Workshop on Hot Topics in Networks (HotNets-II), Boston, USA, (2003).
- Kristoff, J. , “Botnets, Detection and Mitigation: DNS-Based Techniques”, NU Security Day, (2005), 23 pages.
- Lastline Labs, The Threat of Evasive Malware, Feb. 25, 2013, Lastline Labs, pp. 1-8.
- Li et al., A VMM-Based System Call Interposition Framework for Program Monitoring, Dec. 2010, IEEE 16th International Conference on Parallel and Distributed Systems, pp. 706-711.
- Lindorfer, Martina, Clemens Kolbitsch, and Paolo Milani Comparetti. “Detecting environment-sensitive malware.” Recent Advances in Intrusion Detection. Springer Berlin Heidelberg, 2011.
- Marchette, David J., “Computer Intrusion Detection and Network Monitoring: A Statistical Viewpoint”, (“Marchette”), (2001).
- Moore, D., et al., “Internet Quarantine: Requirements for Containing Self-Propagating Code”, INFOCOM, vol. 3, (Mar. 30-Apr. 3, 2003), pp. 1901-1910.
- Morales, Jose A., et al., “Analyzing and exploiting network behaviors of malware”, Security and Privacy in Communication Networks, Springer Berlin Heidelberg, 2010, pp. 20-34.
- Mori, Detecting Unknown Computer Viruses, 2004, Springer-Verlag Berlin Heidelberg.
- Natvig, Kurt, “SANDBOXII: Internet”, Virus Bulletin Conference, (“Natvig”), (Sep. 2002).
- NetBIOS Working Group. Protocol Standard for a NetBIOS Service on a TCP/UDP transport: Concepts and Methods. STD 19, RFC 1001, Mar. 1987.
- Newsome, J. , et al., “Dynamic Taint Analysis for Automatic Detection, Analysis, and Signature Generation of Exploits on Commodity Software”, In Proceedings of the 12th Annual Network and Distributed System Security, Symposium (NDSS '05), (Feb. 2005).
- Nojiri, D. , et al., “Cooperation Response Strategies for Large Scale Attack Mitigation”, DARPA Information Survivability Conference and Exposition, vol. 1, (Apr. 22-24, 2003), pp. 293-302.
- Oberheide et al., CloudAV: N-Version Antivirus in the Network Cloud, 17th USENIX Security Symposium (USENIX Security '08), Jul. 28-Aug. 1, 2008, San Jose, CA.
- Reiner Sailer, Enriquillo Valdez, Trent Jaeger, Ronald Perez, Leendert van Doorn, John Linwood Griffin, Stefan Berger, sHype: Secure Hypervisor Approach to Trusted Virtualized Systems (Feb. 2, 2005) (“Sailer”).
- Silicon Defense, “Worm Containment in the Internal Network”, (Mar. 2003), pp. 1-25.
- Singh, S. , et al., “Automated Worm Fingerprinting”, Proceedings of the ACM/USENIX Symposium on Operating System Design and Implementation, San Francisco, California, (Dec. 2004).
- Thomas H. Ptacek, and Timothy N. Newsham , “Insertion, Evasion, and Denial of Service: Eluding Network Intrusion Detection”, Secure Networks, (“Ptacek”), (Jan. 1998).
Type: Grant
Filed: Mar 14, 2019
Date of Patent: Apr 26, 2022
Assignee: FireEye Security Holdings, Inc. (Milpitas, CA)
Inventors: Japneet Singh (Bangalore), Ratnesh Pandey (Allahabad), Atul Kabra (Bangalore)
Primary Examiner: Jeffrey C Pwu
Assistant Examiner: Nega Woldemariam
Application Number: 16/353,984
International Classification: H04L 9/32 (20060101); G06F 21/55 (20130101); G06F 21/56 (20130101); G06F 21/60 (20130101); H04L 29/06 (20060101); G06F 21/62 (20130101);