Virtual system and method with threat protection

- FireEye, Inc.

A computing device is described that comprises one or more hardware processors and a memory communicatively coupled to the one or more hardware processors. The memory comprises software that supports a software virtualization architecture, including (i) a virtual machine operating in a guest environment and including a process that is configured to monitor behaviors of data under analysis within the virtual machine and (ii) a threat protection component operating in a host environment. The threat protection component is configured to classify the data under analysis as malicious or non-malicious based on the monitored behaviors.

Description
CROSS REFERENCE TO RELATED APPLICATIONS

This application is based upon and claims the benefit of priority from U.S. Provisional Patent Application No. 62/187,100 filed Jun. 30, 2015, the entire contents of which are incorporated herein by reference.

FIELD

Embodiments of the disclosure relate to the field of malware detection. More specifically, one embodiment of the disclosure relates to a hypervisor-based, malware detection architecture with OS-specific threat protection positioned within the host environment.

GENERAL BACKGROUND

In general, virtualization is a technique for hosting different guest operating systems concurrently on the same computing platform. With the emergence of hardware support for full virtualization in an increased number of hardware processor architectures, new virtualization software architectures have emerged. One such virtualization architecture involves adding a software abstraction layer, sometimes referred to as a virtualization layer, between the physical hardware and a virtual machine (referred to as “VM”).

A VM is a software abstraction that operates like a physical (real) computing device having a particular operating system. A VM typically features pass-through physical and/or emulated virtual system hardware, and guest system software. The virtual system hardware is implemented by software components in the host (e.g., virtual central processing unit “vCPU” or virtual disk) that are configured to operate in a similar manner as corresponding physical components (e.g., physical CPU or hard disk). The guest system software, when executed, controls execution and allocation of virtual resources so that the VM operates in a manner consistent with the operations of the physical computing device. As a result, the virtualization software architecture allows a computing device, which may be running one type of “host” operating system (OS), to support a VM that operates like another computing device running another OS type.

In some malware detection systems, malware detection components and malware classification components are deployed as part of the same binary (executable). This poses significant issues. Firstly, this deployment offers no safeguards for detecting whether any portion of the malware detection component or malware classification component has become infected with malware. Secondly, when placed in the same binary, the malware detection component is not logically isolated from a malware classification component. As a result, remediation of injected malicious code that has migrated into the malware detection component would also require analysis of the integrity of the malware classification component. This increases the overall amount of time (and cost) for remediation.

Lastly, an attacker may often exploit a user process and then proceed to attack the guest OS kernel. Advanced attackers may attack the guest OS kernel directly. In some cases, threat protection in the guest user mode (e.g., guest process) or the guest kernel mode (e.g., inside the guest OS kernel) is not secure because the attacker may operate at the same privilege level as threat protection logic. Another threat protection component is needed to enhance security of a computing device.

BRIEF DESCRIPTION OF THE DRAWINGS

Embodiments of the disclosure are illustrated by way of example and not by way of limitation in the figures of the accompanying drawings, in which like references indicate similar elements and in which:

FIG. 1A is an exemplary block diagram of a system network that may be utilized by a computing device configured to support virtualization with enhanced security.

FIG. 1B is a high-level exemplary block diagram of a logical representation of the computing device of FIG. 1A.

FIG. 2 is an exemplary block diagram of a physical representation of the endpoint device of FIG. 1B.

FIG. 3 is an exemplary embodiment of the virtualized architecture of the endpoint device of FIG. 1B.

FIG. 4 is an exemplary block diagram of a physical representation of a malware detection system (MDS) appliance of FIG. 1B.

FIG. 5 is an exemplary embodiment of the virtualized architecture of the MDS appliance of FIG. 1B.

FIG. 6 is a flowchart of the operations associated with a virtualized environment of a computing device with threat protection functionality deployed within the host environment.

FIG. 7 is a flowchart of exemplary operations of the virtualization software architecture pertaining to a computing device with threat protection functionality deployed within the host environment.

DETAILED DESCRIPTION

Various embodiments of the disclosure are directed to an improved architecture for detecting and classifying malware within a computing device, especially computing devices that support virtualization. For an endpoint device, the virtualization architecture features a guest environment and a host environment. The guest environment includes one or more virtual machines (VMs) controlled by a guest operating system (OS). The host environment includes a threat detection system that comprises a guest monitor process and a threat protection process. The guest monitor process is configured to manage execution of the guest OS within the VM and to receive information from the virtualization hardware about monitored events in the guest OS for processing within the host environment. Additionally, a guest agent process, instrumented within the VM, is configured to monitor, perhaps on a continuous basis, for the presence of malicious activity inside the virtual machine. The presence of malicious activity may be detected from events that occur inside the virtual machine during its execution. The threat protection process is configured, based on receipt of events detected during VM execution, to classify the object (or event) under analysis within the VM as benign, suspicious, or malicious.

More specifically, the threat detection system is deployed within memory of the computing device as part of a virtualization layer. The virtualization layer is a logical representation of at least a portion of the host environment, which includes a light-weight hypervisor (sometimes referred to herein as a “micro-hypervisor”) operating at the highest privilege level (e.g., ring “0”). In general, the micro-hypervisor operates in a manner similar to a host kernel. The host environment further includes a plurality of software components, which generally operate as user-level virtual machine monitors (VMMs) by providing host functionality but operating at a lower privilege level (e.g., privilege ring “3”) than the micro-hypervisor.

For this architecture, according to one embodiment of the disclosure, a software component (sometimes referred to as a “guest agent”) may be instrumented as part of or for operation in conjunction with an application running in the VM or the guest OS (e.g., guest OS kernel). While an object is virtually processed within the VM, a guest agent process monitors events that are occurring and stores metadata associated with these events. The metadata includes information that enhances the understanding of a particular operation being conducted and particular aspects of certain data processed during this operation (e.g., origin of the data, relationship with other data, operation type, etc.).

Provided to the threat detection system (e.g., at least the guest monitor process and the threat protection process) in real time or subsequently after detection of the events, the metadata assists the threat protection process in (1) determining that anomalous behaviors associated with the detected events received from the guest agent process indicate a presence of malware within the object and (2) classifying the object associated with the detected events as being part of a malicious attack (and perhaps identifying that the object is associated with a particular malware type).

As another embodiment of the disclosure, multiple (two or more) virtual machines may be deployed to operate within a computing device (e.g., MDS appliance) for detecting malware. Each of the virtual machines may be controlled by an operating system (OS) of a same or different OS-type and/or version type. For instance, a first virtual machine may be controlled by a WINDOWS® based OS while a second virtual machine may be controlled by a LINUX® based OS.

Corresponding to the number of VMs, multiple (two or more) threat detection systems may be deployed within the host environment. Each threat detection system operates in concert with a corresponding guest monitor and a corresponding guest agent process for detecting malware. According to one embodiment of the disclosure, one instance of the guest monitor component and one instance of the threat protection component are implemented for each VM.

As described below, a different threat protection process (and perhaps the same or a different guest monitor process) may uniquely correspond to one of the multiple VMs. To maintain isolation between software components associated with different processes running in the virtualization software architecture, each software component associated with one of the multiple threat protection processes may be assigned a memory address space that is isolated from and different from the memory address space assigned to the software component of another one of the threat protection processes. Additionally, each software component associated with one of the multiple guest monitor processes may be assigned a memory address space that is isolated from and different from the memory address space assigned to the software component of another one of the guest monitor processes or another threat protection software component.

I. Terminology

In the following description, certain terminology is used to describe features of the invention. For example, in certain situations, the terms “component” and “logic” are representative of hardware, firmware or software that is configured to perform one or more functions. As hardware, a component (or logic) may include circuitry having data processing or storage functionality. Examples of such circuitry may include, but are not limited or restricted to, a hardware processor (e.g., a microprocessor with one or more processor cores, a digital signal processor, a programmable gate array, a microcontroller, an application specific integrated circuit “ASIC”, etc.), a semiconductor memory, or combinatorial elements.

A component (or logic) may be software in the form of a process or one or more software modules, such as executable code in the form of an executable application, an API, a subroutine, a function, a procedure, an applet, a servlet, a routine, source code, object code, a shared library/dynamic load library, or one or more instructions. These software modules may be stored in any type of a suitable non-transitory storage medium, or transitory storage medium (e.g., electrical, optical, acoustical or other form of propagated signals such as carrier waves, infrared signals, or digital signals). Examples of a non-transitory storage medium may include, but are not limited or restricted to, a programmable circuit; semiconductor memory; non-persistent storage such as volatile memory (e.g., any type of random access memory “RAM”); or persistent storage such as non-volatile memory (e.g., read-only memory “ROM”, power-backed RAM, flash memory, phase-change memory, etc.), a solid-state drive, a hard disk drive, an optical disc drive, or a portable memory device. As firmware, the executable code may be stored in persistent storage. Upon execution of an instance of a software component or a software module, a “process” performs operations as coded by that software component or software module.

The term “object” generally refers to a collection of data, whether in transit (e.g., over a network) or at rest (e.g., stored), often having a logical structure or organization that enables it to be classified for purposes of analysis for malware. During analysis, for example, the object may exhibit certain expected characteristics (e.g., expected internal content such as bit patterns, data structures, etc.) and, during processing, a set of expected behaviors. The object may also exhibit unexpected characteristics and a set of unexpected behaviors that may offer evidence of the presence of malware and potentially allow the object to be classified as part of a malicious attack.

Examples of objects may include one or more flows or a self-contained element within a flow itself. A “flow” generally refers to related packets that are received, transmitted, or exchanged within a communication session. For convenience, a packet is broadly referred to as a series of bits or bytes having a prescribed format, which may, according to one embodiment, include packets, frames, or cells. Further, an “object” may also refer to an individual packet or a number of packets carrying related payloads, e.g., a single webpage received over a network. Moreover, an object may be a file retrieved from a storage location over an interconnect.

As a self-contained element, the object may be an executable (e.g., an application, program, segment of code, dynamically link library “DLL”, etc.) or a non-executable. Examples of non-executables may include a document (e.g., a Portable Document Format “PDF” document, Microsoft® Office® document, Microsoft® Excel® spreadsheet, etc.), an electronic mail (email), downloaded web page, or the like.

The term “event” should be generally construed as an activity that is conducted by a software component running on the computing device. An event may cause an undesired action to occur, such as overwriting a buffer, disabling a certain protective feature in the guest environment, or a guest OS anomaly such as a guest OS kernel trying to execute from a user page. Generically, an object or event may be referred to as “data under analysis”.

The term “computing device” should be generally construed as electronics with the data processing capability and/or a capability of connecting to any type of network, such as a public network (e.g., Internet), a private network (e.g., a wireless data telecommunication network, a local area network “LAN”, etc.), or a combination of networks. Examples of a computing device may include, but are not limited or restricted to, the following: an endpoint device (e.g., a laptop, a smartphone, a tablet, a desktop computer, a netbook, a medical device, or any general-purpose or special-purpose, user-controlled electronic device configured to support virtualization); a server; a mainframe; a router; or a security appliance that includes any system or subsystem configured to perform functions associated with malware detection and may be communicatively coupled to a network to intercept data routed to or from an endpoint device.

The term “malware” may be broadly construed as information, in the form of software, data, or one or more commands, that is intended to cause an undesired behavior upon execution, where the behavior is deemed to be “undesired” based on customer-specific rules, manufacturer-based rules, or any other type of rules formulated by public opinion or a particular governmental or commercial entity. This undesired behavior may operate as an exploit or may feature a communication-based anomaly or an execution-based anomaly that would (1) alter the functionality of an electronic device executing application software in a malicious manner; (2) alter the functionality of an electronic device executing that application software without any malicious intent; and/or (3) provide an unwanted functionality which is generally acceptable in another context.

The term “interconnect” may be construed as a physical or logical communication path between two or more computing platforms. For instance, the communication path may include wired and/or wireless transmission mediums. Examples of wired and/or wireless transmission mediums may include electrical wiring, optical fiber, cable, bus trace, a radio unit that supports radio frequency (RF) signaling, or any other wired/wireless signal transfer mechanism.

The term “computerized” generally represents that any corresponding operations are conducted by hardware in combination with software and/or firmware. Also, the term “agent” should be interpreted as a software component that instantiates a process running in a virtual machine. The agent may be instrumented into part of an operating system (e.g., guest OS) or part of an application (e.g., guest software application). The agent is configured to provide metadata to a portion of the virtualization layer, namely software that virtualizes certain functionality supported by the computing device.

Lastly, the terms “or” and “and/or” as used herein are to be interpreted as inclusive or meaning any one or any combination. Therefore, “A, B or C” or “A, B and/or C” mean “any of the following: A; B; C; A and B; A and C; B and C; A, B and C.” An exception to this definition will occur only when a combination of elements, functions, steps or acts are in some way inherently mutually exclusive.

II. General Network Architecture

Referring to FIG. 1A, an exemplary block diagram of a system network 100 that may be utilized by a computing device that is configured to support virtualization is described herein. The system network 100 may be organized as a plurality of networks, such as a public network 110 and/or a private network 120 (e.g., an organization or enterprise network). According to this embodiment of system network 100, the public network 110 and the private network 120 are communicatively coupled via network interconnects 130 and intermediary computing devices 140₁, such as network switches, routers and/or one or more malware detection system (MDS) appliances (e.g., intermediary computing device 140₂) as described in co-pending U.S. Patent Application entitled “Microvisor-Based Malware Detection Appliance Architecture” (U.S. patent application Ser. No. 14/962,497), the entire contents of which are incorporated herein by reference. The network interconnects 130 and intermediary computing devices 140₁, inter alia, provide connectivity between the private network 120 and a computing device 140₃, which may be operating as an endpoint device, for example.

The computing devices 140₁-140₃ illustratively communicate by exchanging messages (e.g., packets or data in a prescribed format) according to a predefined set of protocols, such as the Transmission Control Protocol/Internet Protocol (TCP/IP). However, it should be noted that other protocols, such as the HyperText Transfer Protocol Secure (HTTPS) for example, may be used with the inventive aspects described herein. In the case of private network 120, the intermediary computing device 140₁ may include a firewall or other computing device configured to limit or block certain network traffic in an attempt to protect the endpoint device 140₃ from unauthorized users.

As illustrated in FIG. 1B in greater detail, a computing device features a virtualization software architecture 150 that comprises a guest environment 160 and a host environment 180. As shown, the guest environment 160 comprises at least one virtual machine 170 (referred to herein as “VM 170”). Certain components operating within the VM 170, which are sometimes referred to collectively as a “guest agent” 175, may be configured to monitor and store metadata (e.g., state information, memory accesses, process names, etc.) and subsequently provide the metadata to a virtualization layer 185 deployed within the host environment 180.

The virtualization layer 185 features a micro-hypervisor with access to physical hardware 190 and one or more host applications with processes running in the user space (not shown). Some of the processes, operating in concert with the guest agent 175, are responsible for determining, based on results from static, heuristic and dynamic analysis of an object in the VM 170, whether the object should be classified as malware or not. Additionally, the guest agent 175, when configured to monitor operations within the guest operating system (OS) as well as operate in concert with malware detection modules or drivers in the guest OS kernel, may determine whether an event in the guest OS should be classified as malicious or not. Thereafter, the classification results would be reported to the intermediary network device 140₂ or another computing device (e.g., an endpoint device controlled by a network administrator).

It is noted that the virtualization software architecture of the MDS appliance 140₂ of FIG. 1A, which is not shown in detail but represented by an ellipsis, features a guest environment and a host environment as described above. One difference, however, is that the guest environment 160 comprises a plurality of virtual machines, where each virtual machine (VM) executes a guest operating system (OS) kernel of a different OS type, which is responsible for controlling emulation or pass-through hardware of the VM 170. For example, a first VM may feature a guest (WINDOWS®) OS kernel while a second VM may feature a guest (LINUX®) OS kernel. As another example, the first VM may feature a guest (WINDOWS® version “x”) OS kernel while the second VM may feature a guest (WINDOWS® version “y”) OS kernel, where WINDOWS® OS version “y” features major code changes from WINDOWS® OS version “x”.

As deployed within the MDS appliance 140₂, the virtualization layer 185 features a micro-hypervisor with access to the physical hardware 190 and one or more host applications running in the user space as described above. However, some of the host applications, namely the guest monitor component and the threat protection component, are implemented with multiple versions so that there exists a threat protection process (and perhaps a guest monitor process) for each OS type or each OS instance. For example, where a first VM features a guest (WINDOWS®) OS kernel and a second VM features a guest (LINUX®) OS kernel, the host environment may be implemented with (1) a first threat detection system (e.g., first guest monitor process and first threat protection process) that is responsible for classifying data under analysis (object/event) within the first VM and (2) a second threat detection system (e.g., second guest monitor process and second threat protection process) that is responsible for classifying data under analysis (object or event) within the second VM. The second threat protection process is different from the first threat protection process.
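The per-VM pairing described above can be sketched as a simple registry that dispatches each monitored event to the threat detection system corresponding to the VM in which the event was observed. This is a minimal illustrative sketch, not the patent's implementation; all identifiers (`threat_detection_systems`, `classify`, the process names) are hypothetical.

```python
# Hypothetical registry: each VM has its own guest monitor / threat
# protection process, keyed by OS type, so a WINDOWS(R)-specific system
# never classifies events observed in a LINUX(R) guest.
threat_detection_systems = {
    "vm1": {"guest_os": "windows", "guest_monitor": "gm_win",
            "threat_protection": "tp_win"},
    "vm2": {"guest_os": "linux", "guest_monitor": "gm_linux",
            "threat_protection": "tp_linux"},
}

def classify(vm_id, event):
    # Dispatch the event to the threat protection process that uniquely
    # corresponds to the VM in which the event occurred.
    system = threat_detection_systems[vm_id]
    return f"{system['threat_protection']} classifies {event}"

print(classify("vm2", "suspicious syscall"))  # tp_linux classifies suspicious syscall
```

In a real deployment each entry would also carry its own isolated memory address space, consistent with the isolation requirements described earlier.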

III. General Endpoint Architecture

Referring now to FIG. 2, an exemplary block diagram of a physical representation of the endpoint device 140₃ that supports virtualization is shown. Herein, the endpoint device 140₃ illustratively includes at least one hardware processor 210, a memory 220, one or more network interfaces (referred to as “network interface(s)”) 230, and one or more network devices (referred to as “network device(s)”) 240 connected by a system interconnect 250, such as a bus. These components are at least partially encased in a housing 200, which is made entirely or partially of a rigid material (e.g., hardened plastic, metal, glass, composite, or any combination thereof) that protects these components from atmospheric conditions.

The hardware processor 210 is a multipurpose, programmable device that accepts digital data as input, processes the input data according to instructions stored in its memory, and provides results as output. One example of the hardware processor 210 may include an Intel® x86 central processing unit (CPU) with an instruction set architecture. Alternatively, the hardware processor 210 may include another type of CPU, a digital signal processor (DSP), an ASIC, or the like.

According to one implementation, the hardware processor 210 may include one or more control registers, including a “CR3” control register in accordance with x86 processor architectures. Herein, the CR3 register 212 may be context-switched between host mode and guest mode. Hence, when the hardware processor 210 is executing in guest mode, a pointer value within the CR3 register identifies an address location for active guest page tables, namely guest page tables associated with a currently running process that is under control of the guest OS (e.g., WINDOWS®-based process). The guest page tables are accessed as part of a two-step memory address translation to load/store requested data from/into actual physical memory.
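The two-step translation described above can be illustrated with a toy model: the guest page tables (located through the context-switched CR3 pointer) map a guest-virtual address to a guest-physical address, and the nested page tables maintained by the virtualization layer map that guest-physical address to a host-physical address. This sketch is an assumption for illustration only; real x86 paging uses multi-level tables, and the flat dictionaries and names below are hypothetical.

```python
PAGE_SIZE = 0x1000  # 4 KiB pages (assumed)

def translate(addr, page_table):
    """Translate an address through a single (toy, one-level) page table."""
    page, offset = addr // PAGE_SIZE, addr % PAGE_SIZE
    frame = page_table.get(page)
    if frame is None:
        raise LookupError(f"page fault at {addr:#x}")
    return frame * PAGE_SIZE + offset

def guest_to_host_physical(guest_virtual, cr3_tables, nested_tables):
    # Step 1: guest-virtual -> guest-physical via the guest page tables
    # located through the CR3 pointer value of the running guest process.
    guest_physical = translate(guest_virtual, cr3_tables)
    # Step 2: guest-physical -> host-physical via the nested page tables
    # maintained by the virtualization layer.
    return translate(guest_physical, nested_tables)

# Example: guest page 0x2 maps to guest frame 0x5; the nested tables map
# guest frame 0x5 to host frame 0x9.
cr3_tables = {0x2: 0x5}
nested_tables = {0x5: 0x9}
print(hex(guest_to_host_physical(0x2ABC, cr3_tables, nested_tables)))  # 0x9abc
```

The page offset (0xABC here) is preserved across both steps; only the page-to-frame mappings change.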

The network device(s) 240 may include various input/output (I/O) or peripheral devices, such as a storage device for example. One type of storage device may include a solid state drive (SSD) embodied as a flash storage device or other non-volatile, solid-state electronic device (e.g., drives based on storage class memory components). Another type of storage device may include a hard disk drive (HDD). Each network interface 230 may include one or more network ports containing the mechanical, electrical and/or signaling circuitry needed to connect the endpoint device 1403 to the network 120 to thereby facilitate communications over the system network 110. To that end, the network interface(s) 230 may be configured to transmit and/or receive messages using a variety of communication protocols including, inter alia, TCP/IP and HTTPS.

The memory 220 may include a plurality of locations that are addressable by the hardware processor 210 and the network interface(s) 230 for storing software (including software applications) and data structures associated with such software. Examples of the stored software include a guest operating system (OS) kernel 301, guest software (applications and/or agent) 320, a micro-hypervisor 360 and host software 370, as shown in FIG. 2.

Herein, the host software 370 may include a component (e.g., instances of user-space applications operating as user-level VMMs) which, when executed, produces processes running in the host environment 180 (sometimes referred to as “hyper-processes”). The different components of the host software 370 are isolated from each other and run in separate (host) address spaces. In communication with the micro-hypervisor 360, the resulting hyper-processes are responsible for controlling operability of the endpoint device 140₃, including policy and resource allocation decisions, maintaining logs of monitored events for subsequent analysis, managing virtual machine (VM) execution, and managing malware detection and classification. The management of malware detection and classification may be accomplished through certain host software 370 (e.g., guest monitor and threat protection components).

The micro-hypervisor 360 is disposed or layered beneath the guest OS kernel(s) 301 of the endpoint device 140₃ and is the only component that runs in the most privileged processor mode (host mode, ring-0). As part of a trusted computing base of most components in the computing platform, the micro-hypervisor 360 is configured as a light-weight hypervisor (e.g., less than 10K lines of code), thereby avoiding inclusion of potentially exploitable x86 virtualization code.

The micro-hypervisor 360 generally operates as the host kernel that is devoid of policy enforcement; rather, the micro-hypervisor 360 provides a plurality of mechanisms that may be used by the hyper-processes, namely processes produced by execution of certain host software 370 for controlling operability of the virtualization architecture. These mechanisms may be configured to control communications between separate protection domains (e.g., between two different hyper-processes), coordinate thread processing within the hyper-processes and virtual CPU (vCPU) processing within the VM 170, delegate and/or revoke hardware resources, and control interrupt delivery and DMA, as described below.
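The delegate/revoke mechanism mentioned above can be sketched as a capability table: the micro-hypervisor records which protection domain holds which permissions on which resource, and hyper-processes invoke it to grant or withdraw access. This is an illustrative sketch under assumed semantics, not the patent's implementation; the class and method names are hypothetical.

```python
class CapabilityTable:
    """Toy model of micro-hypervisor resource delegation and revocation."""

    def __init__(self):
        self._caps = {}  # (domain, resource) -> set of permissions

    def delegate(self, domain, resource, perms):
        # Grant a protection domain a set of permissions on a resource.
        self._caps.setdefault((domain, resource), set()).update(perms)

    def revoke(self, domain, resource):
        # Withdraw all permissions the domain holds on the resource.
        self._caps.pop((domain, resource), None)

    def check(self, domain, resource, perm):
        # Mechanism only: the micro-hypervisor answers "is this allowed?";
        # policy (who should be allowed) lives in the hyper-processes.
        return perm in self._caps.get((domain, resource), set())

caps = CapabilityTable()
caps.delegate("guest_monitor", "vcpu0", {"run", "inspect"})
assert caps.check("guest_monitor", "vcpu0", "inspect")
caps.revoke("guest_monitor", "vcpu0")
assert not caps.check("guest_monitor", "vcpu0", "run")
```

The split shown in `check` mirrors the text: the micro-hypervisor supplies mechanisms, while policy enforcement is left to the hyper-processes.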

The guest OS 300, portions of which are resident in memory 220 and executed by the hardware processor 210, functionally organizes the endpoint device 140₃ by, inter alia, invoking operations in support of guest applications executing on the endpoint device 140₃. An exemplary guest OS 300 may include, but is not limited or restricted to, the following: (1) a version of a WINDOWS® series of operating system; (2) a version of a MAC OS® or an IOS® series of operating system; (3) a version of a LINUX® operating system; or (4) a version of an ANDROID® operating system, among others.

Guest software 320 may constitute instances of one or more guest applications running in their separate guest address spaces (sometimes referred to as “user mode processes”). Examples of these guest applications may include a Portable Document Format (PDF) reader application such as ADOBE® READER® or a data processing application such as the MICROSOFT® WORD® program. Events (monitored behaviors) of an object that is processed by particular guest software 320 are monitored by a guest agent process instrumented as part of the guest OS 300 or as one of the separate guest applications. The guest agent process provides metadata to at least one of the hyper-processes and the micro-hypervisor 360 for use in malware detection.

IV. Virtualized Architecture—Endpoint Device

Referring now to FIG. 3, an exemplary embodiment of the virtualization software architecture 150 of the endpoint device 140₃ with enhanced security of processes and/or components residing in a virtual machine is shown. The virtualization software architecture 150 comprises guest environment 160 and host environment 180, both of which may be configured in accordance with a protection ring architecture as shown. While the protection ring architecture is shown for illustrative purposes, it is contemplated that other architectures that establish hierarchical privilege levels for virtualized software components may be utilized.

A. Guest Environment

The guest environment 160 comprises a virtual machine 170, which includes software components that are configured to (i) determine whether the object 335, which is part of a guest software application or guest OS kernel, may include malware, or (ii) determine whether events that occur during operation of the guest OS are malicious. Herein, as shown, the virtual machine 170 comprises a guest OS 300 that features a guest OS kernel 301 running in the most privileged level (Ring-0 305), along with one or more processes 322, namely instance(s) of one or more guest OS applications and/or one or more instances of a guest software application (hereinafter “guest software 320”), running in a lesser privileged level (Ring-3 325). It is contemplated that malware detection on the endpoint device 140₃ may be conducted by one or more processes running in the virtual machine 170. These processes include a static analysis process 330, a heuristics process 332 and a dynamic analysis process 334, which collectively operate to detect suspicious and/or malicious behaviors by the object 335 during execution within the virtual machine 170. Notably, the endpoint device 140₃ may perform (implement) malware detection as background processing (i.e., minor use of endpoint resources) with data processing being implemented as its primary processing (e.g., in the foreground, having majority use of endpoint resources).

Although not shown, it is contemplated that the object 335, which may include malware, may be processed in the guest OS kernel 301 in lieu of one of the guest processes 322. For instance, the same functionality that is provided by the different malware detection processes may also be provided by malware detection modules or drivers in the guest OS kernel 301. Hence, malware detection within the guest OS kernel 301 and inter-communications with the threat protection process 376 may be conducted.

As used herein, the object 335 may include, for example, a web page, email, email attachment, file or uniform resource locator (URL). Static analysis may conduct a brief examination of characteristics (internal content) of the object 335 to determine whether it is suspicious, while dynamic analysis may analyze behaviors associated with events that occur during virtual execution of the object 335. One of these anomalous behaviors, for example, may be detected via an extended page table (EPT) violation, where the object 335 attempts an operation that is not permitted on a memory page protected by the nested page tables (i.e., the page tables accessed in the second stage of the two-step memory address translation). Examples include writing data to a page that is "write protected" (namely a page without write "w" permission) or executing from a page that is marked as non-executable in the nested page tables. These events are further made available to the threat protection process 376, as described below.
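The permission check described above can be sketched as follows. This is an illustrative simulation, not the patented implementation: the class and function names (`EPTViolation`, `check_access`) and the page-table representation are hypothetical, standing in for the hardware-enforced nested page table lookup that triggers a VM Exit.

```python
# Hypothetical sketch: a guest access that conflicts with the permissions
# recorded in the nested page tables surfaces as an EPT violation.

class EPTViolation(Exception):
    """Raised when a guest access conflicts with nested page table permissions."""

def check_access(nested_page_table, guest_physical_page, access):
    # Each entry maps a guest-physical page to its permission set, e.g. {"r", "x"}.
    permissions = nested_page_table.get(guest_physical_page, set())
    if access not in permissions:
        # In hardware this triggers a VM Exit to the host; here we raise instead.
        raise EPTViolation(f"{access!r} access to page {guest_physical_page:#x} denied")
    return True

nested_page_table = {0x1000: {"r", "x"},    # write-protected code page
                     0x2000: {"r", "w"}}    # non-executable data page

check_access(nested_page_table, 0x2000, "w")      # permitted
try:
    check_access(nested_page_table, 0x1000, "w")  # write to a write-protected page
except EPTViolation as violation:
    print(violation)
```

In the architecture of FIG. 3, the raised "violation" corresponds to the VM Exit event whose metadata is forwarded to the threat protection process 376.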

According to one embodiment of the disclosure, the static analysis process 330 and the heuristics process 332 may conduct a first examination of the object 335 to determine whether it is suspicious and/or malicious. The static analysis process 330 and the heuristics process 332 may employ statistical analysis techniques, including the use of vulnerability/exploit signatures and heuristics, to perform non-behavioral analysis in order to detect anomalous characteristics (i.e., suspiciousness and/or maliciousness) without execution (i.e., without monitoring run-time behavior) of the object 335. For example, the static analysis process 330 may employ signatures (referred to as vulnerability or exploit "indicators") to match content (e.g., bit patterns) of the object 335 with patterns of the indicators in order to gather information that may be indicative of suspiciousness and/or maliciousness. The heuristics process 332 may apply rules and/or policies to detect anomalous characteristics of the object 335 in order to identify whether the object 335 is suspect and deserving of further analysis or whether it is non-suspect (i.e., benign) and not in need of further analysis. These statistical analysis techniques may produce static analysis results (e.g., identification of communication protocol anomalies and/or suspect source addresses of known malicious servers) that may be provided to the threat protection process 376 and/or reporting module 336.

More specifically, the static analysis process 330 may be configured to compare a bit pattern of the object 335 content with a "blacklist" of suspicious exploit indicator patterns. For example, a simple indicator check (e.g., hash) against the hashes of the blacklist (i.e., exploit indicators of objects deemed suspicious) may reveal a match, where a score may be subsequently generated (based on the content) by the threat protection process 376 to identify that the object may include malware. In addition to, or as an alternative to, a blacklist of suspicious objects, bit patterns of the object 335 may be compared with a "whitelist" of permitted bit patterns.
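A minimal sketch of this indicator check follows. The function name, the use of SHA-256, and the three-way verdict labels are assumptions for illustration; the embodiment does not prescribe a particular hash function or return convention.

```python
import hashlib

# Illustrative sketch: hash the object's content and compare the digest
# against a blacklist of known-suspicious indicator hashes and, optionally,
# a whitelist of permitted ones.

def indicator_check(content: bytes, blacklist: set, whitelist: set) -> str:
    digest = hashlib.sha256(content).hexdigest()
    if digest in blacklist:
        return "suspicious"   # match against exploit indicators
    if digest in whitelist:
        return "permitted"
    return "unknown"          # static indicators alone yield no verdict

known_bad = hashlib.sha256(b"malicious payload").hexdigest()
blacklist, whitelist = {known_bad}, set()
print(indicator_check(b"malicious payload", blacklist, whitelist))  # suspicious
print(indicator_check(b"ordinary document", blacklist, whitelist))  # unknown
```

An "unknown" result would leave the object a candidate for the dynamic analysis described below, while a blacklist match contributes to the score generated by the threat protection process 376.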

The dynamic analysis process 334 may conduct an analysis of the object 335 during processing by a guest application process 338, where the guest agent process 175 monitors and captures the run-time behaviors of the object 335. The behaviors (events) are stored within a ring buffer 340 of the guest agent process 175 for subsequent analysis by the threat protection process 376, as described below. In an embodiment, the dynamic analysis process 334 may operate concurrently with the static analysis process 330 and/or the heuristics process 332 instead of waiting for results from the static analysis process 330 and/or the heuristics process 332. During processing of the object 335, certain events may trigger page table violations that result in a VM Exit to the host environment 180 for further analysis by the threat protection process 376.

For instance, the dynamic analysis process 334 may examine whether any behaviors associated with a detected event that occur during processing of the analyzed object 335 are suspicious and/or malicious. A finding of “suspicious” denotes that the behaviors signify a first probability range of the analyzed object 335 being associated with malware while a finding of “malicious” denotes that the behaviors signify a higher second probability of the analyzed object 335 being associated with malware. The dynamic analysis results (and/or events caused by the processing of the object 335) may also be provided to reporting module 336.

Based on analysis by the threat protection process 376 of the static analysis results and/or the dynamic analysis results, the reporting module 336 may be configured to generate a report (result data in a particular format) or an alert (a message advising of the detection of suspicious or malicious events) for transmission to a remotely located computing device, such as the MDS appliance 1402 or another type of computing device.

In addition to or in lieu of analysis of the object 335, it is contemplated that a guest OS anomaly may be detected by malware detection processes 302 or malware detection modules/drivers 310 in the guest OS kernel 301 and reported to the host environment 180 (e.g., guest monitor component 374 and/or threat protection component 376) and/or the reporting module 336.

1. Guest OS

In general, the guest OS 300 manages certain operability of the virtual machine 170, where some of these operations are directed to the execution and allocation of virtual resources involving network connectivity, memory translation, and interrupt service delivery and handling. More specifically, the guest OS 300 may receive an input/output (I/O) request from the object 335 being processed by one or more guest process(es) 322, and in some cases, translates the I/O request into instructions. These instructions may be used, at least in part, by virtual system hardware (e.g., vCPU 303) to drive one or more network devices, such as a network adapter 304 for establishing communications with other network devices. Upon establishing connectivity with the private network 120 and/or the public network 110 of FIG. 1A and in response to detection of the object 335 (or monitored event) being malicious, the endpoint device 1403 may initiate alert messages 342 via the reporting module 336 and the network adapter 304. The alerts may be in any prescribed message format (e.g., a Short Message Service "SMS" message, Extended Message Service "EMS" message, Multimedia Messaging Service "MMS" message, Email, etc.) or any other prescribed wired or wireless transmission format. Additionally, the guest OS 300 may receive software updates 344 from administrators via the private network 120 of FIG. 1A or from a third party provider via the public network 110 of FIG. 1A.

Another operation supported by the guest OS 300 involves the management of guest page tables, which are used as part of the two-step memory address translation where a guest-linear address (GLA) is translated to a guest-physical address (GPA).
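The two steps of the translation can be sketched as follows. The page size, table contents, and function name are illustrative assumptions; in hardware both steps are performed by the MMU, with the nested page tables under control of the virtualization layer 185.

```python
# Minimal sketch of the two-step translation: guest page tables map a
# guest-linear address (GLA) to a guest-physical address (GPA), and the
# nested page tables map the GPA to a host-physical address.

PAGE = 0x1000  # assume 4 KB pages

def translate(address, table):
    page, offset = address & ~(PAGE - 1), address & (PAGE - 1)
    if page not in table:
        raise KeyError(f"no mapping for page {page:#x}")
    return table[page] | offset

guest_page_table  = {0x0000: 0x5000}   # GLA page -> GPA page (step 1, guest OS)
nested_page_table = {0x5000: 0x9000}   # GPA page -> host-physical page (step 2, host)

gla = 0x0042
gpa = translate(gla, guest_page_table)   # 0x5042
hpa = translate(gpa, nested_page_table)  # 0x9042
```

The separation matters for the architecture: the guest OS 300 only ever manages the first-stage tables, while the second-stage (nested) tables, and therefore the effective permissions, remain under host control.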

Lastly, the guest OS kernel 301 is configured with an Interrupt Service Routine (ISR) 315 that supports one or more different types of interrupts, including network-based interrupts, graphics-based interrupts and kernel services interrupts. Since the guest agent process 175 may be turned off or halted through malicious attack, the kernel services interrupts are invoked by the guest monitor process 374, as described below, to ensure processing of the guest agent process 175 within the VM 170.

Issued by the guest monitor process 374, a kernel services interrupt represents a virtual interrupt that causes the guest OS kernel 301 to conduct a plurality of checks. One of these checks is directed to an analysis of the operating state of the guest agent process 175 (i.e., halted, disabled, in operation, etc.). Another check may involve an evaluation of data structures associated with the guest agent process 175 or other software components within the VM 170 to determine whether such data structures have been tampered with. Another check involves an evaluation of the system call table (not shown) to determine if entry points for any of the system calls have been maliciously changed.
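These three checks can be sketched together. Everything here is hypothetical illustration: the function name, the use of SHA-256 baselines for tamper detection, and the representation of the system call table as a name-to-entry-point mapping are assumptions, not the patented mechanism.

```python
import hashlib

# Illustrative sketch of the checks a kernel services interrupt triggers:
# (1) is the guest agent still running, (2) do monitored data structures
# still match known-good baselines, (3) are system call entry points intact.

def integrity_check(agent_state, data_structures, baseline_hashes,
                    syscall_table, baseline_syscalls):
    findings = []
    if agent_state != "running":
        findings.append("guest agent halted or disabled")
    for name, blob in data_structures.items():
        if hashlib.sha256(blob).hexdigest() != baseline_hashes[name]:
            findings.append(f"{name} tampered")
    for syscall, entry in syscall_table.items():
        if entry != baseline_syscalls[syscall]:
            findings.append(f"system call {syscall} entry point changed")
    return findings

baseline_hashes = {"agent_hooks": hashlib.sha256(b"hooks-v1").hexdigest()}
clean = integrity_check("running", {"agent_hooks": b"hooks-v1"},
                        baseline_hashes, {"open": 0x1000}, {"open": 0x1000})
compromised = integrity_check("halted", {"agent_hooks": b"patched"},
                              baseline_hashes, {"open": 0x2000}, {"open": 0x1000})
# `clean` is empty; `compromised` reports all three anomalies.
```

A non-empty findings list would be the kind of guest OS anomaly reported to the host environment 180 as described above.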

2. Guest Agent

According to one embodiment of the disclosure, the guest agent process 175 is based on a software component configured to provide the virtualization layer 185 with metadata that may assist in the handling of malware detection. Instrumented in either a guest application 320 or the guest OS 300, or operating as a separate module, the guest agent process 175 is configured to provide metadata to the virtualization layer 185 in response to at least one selected event.

Herein, the guest agent process 175 utilizes one or more ring buffers 340 (e.g., queue, FIFO, buffer, shared memory, and/or registers), which record certain events that may be considered of interest for malware detection. Examples of these events may include information associated with a newly created process (e.g., process identifier, time of creation, originating source for creation of the new process, etc.), information associated with an access to a certain restricted port or memory address, or the like. The recovery of the information associated with the stored events may occur through a "pull" or "push" recovery scheme, where the guest agent process 175 may be configured to provide the metadata periodically or aperiodically (e.g., when the ring buffer 340 exceeds a certain storage level or in response to a request). The request may originate from the threat protection process 376 and is generated by the guest monitor process 374.
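The push side of this scheme can be sketched with a bounded buffer. The class name, capacity, threshold, and event shapes are illustrative assumptions; the actual ring buffer 340 may be shared memory drained by the host rather than a Python object.

```python
from collections import deque

# Hypothetical sketch of the guest agent's event ring buffer: events of
# interest are recorded, and metadata is drained either when a storage
# threshold is exceeded ("push") or on request ("pull").

class EventRingBuffer:
    def __init__(self, capacity, threshold):
        self.events = deque(maxlen=capacity)  # oldest events are overwritten
        self.threshold = threshold

    def record(self, event):
        self.events.append(event)
        return len(self.events) >= self.threshold  # True -> push metadata now

    def drain(self):
        # Hand off everything recorded so far and reset the buffer.
        drained, self.events = list(self.events), deque(maxlen=self.events.maxlen)
        return drained

buffer = EventRingBuffer(capacity=8, threshold=3)
buffer.record({"type": "process_created", "pid": 4242})
buffer.record({"type": "port_access", "port": 445})
push_now = buffer.record({"type": "process_created", "pid": 4243})
# push_now is True: the threshold was reached, so the metadata would be
# pushed toward the threat protection process; drain() models the hand-off.
metadata = buffer.drain()
```

In the pull variant, `drain()` would instead be invoked in response to a request generated by the guest monitor process 374.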

B. Host Environment

As further shown in FIG. 3, the host environment 180 features a protection ring architecture that is arranged with a privilege hierarchy from the most privileged level 350 (Ring-0) to a lesser privilege level 352 (Ring-3). Positioned at the most privileged level 350 (Ring-0), the micro-hypervisor 360 is configured to directly interact with the physical hardware platform and its resources, such as hardware processor 210 or memory 220 of FIG. 2.

Running on top of the micro-hypervisor 360 in Ring-3 352, a plurality of processes being instances of certain host software 370 (referred to as "hyper-processes") communicate with the micro-hypervisor 360. Some of these hyper-processes 370 include the master controller process 372, guest monitor process 374 and threat protection process 376. Each of these hyper-processes 372, 374 and 376 represents a separate software component with different functionality and runs in a separate address space. As the software components associated with the hyper-processes are isolated from each other (i.e., not in the same binary), inter-process communications between the hyper-processes are handled by the micro-hypervisor 360, but regulated through policy protection by the master controller process 372.

1. Micro-Hypervisor

The micro-hypervisor 360 may be configured as a light-weight hypervisor (e.g., less than 10K lines of code) that operates as a host OS kernel. The micro-hypervisor 360 features logic (mechanisms) for controlling operability of the computing device, such as endpoint device 1403 as shown. The mechanisms include inter-process communication (IPC) logic 362, resource allocation logic 364, scheduling logic 366 and interrupt delivery control logic 368, where all of these mechanisms are based, at least in part, on a plurality of kernel features—protection domains, execution contexts, scheduling contexts, portals, and semaphores (hereinafter collectively as “kernel features 369”) as partially described in a co-pending U.S. Patent Application entitled “Microvisor-Based Malware Detection Endpoint Architecture” (U.S. patent application Ser. No. 14/929,821), the entire contents of which are incorporated herein by reference.

More specifically, a first kernel feature is referred to as "protection domains," which correspond to containers where certain resources for the hyper-processes can be assigned, such as various data structures (e.g., execution contexts, scheduling contexts, etc.). Given that each hyper-process 370 corresponds to a different protection domain, code and/or data structures associated with a first hyper-process (e.g., master controller process 372) are spatially isolated (within the address space) from a second (different) hyper-process (e.g., guest monitor process 374). Furthermore, code and/or data structures associated with any of the hyper-processes are spatially isolated from the virtual machine 170 as well.

A second kernel feature is referred to as an “execution context,” which features thread level activities within one of the hyper-processes (e.g., master controller process 372). These activities may include, inter alia, (i) contents of hardware registers, (ii) pointers/values on a stack, (iii) a program counter, and/or (iv) allocation of memory via, e.g., memory pages. The execution context is thus a static view of the state of a thread of execution.

Accordingly, the thread executes within a protection domain associated with that hyper-process of which the thread is a part. For the thread to execute on a hardware processor 210, its execution context may be tightly linked to a scheduling context (third kernel feature), which may be configured to provide information for scheduling the execution context for execution on the hardware processor 210. Illustratively, the scheduling context may include a priority and a quantum time for execution of its linked execution context on the hardware processor 210.

Hence, besides the spatial isolation provided by protection domains, the micro-hypervisor 360 enforces temporal separation through the scheduling context, which is used for scheduling the processing of the execution context as described above. Such scheduling by the micro-hypervisor 360 may involve defining which hardware processor may process the execution context, what priority is assigned to the execution context, and the duration of such execution.

Communications between protection domains are governed by portals, which represent a fourth kernel feature that is relied upon for generation of the IPC logic 362. Each portal represents a dedicated entry point into a corresponding protection domain. As a result, if one protection domain creates the portal, another protection domain may be configured to call the portal and establish a cross-domain communication channel.

Lastly, of the kernel features 369, semaphores facilitate synchronization between execution contexts on the same or on different hardware processors. The micro-hypervisor 360 uses the semaphores to signal the occurrence of hardware interrupts to the user applications.

The micro-hypervisor 360 utilizes one or more of these kernel features to formulate mechanisms for controlling operability of the endpoint device 200. One of these mechanisms is the IPC logic 362, which supports communications between separate protection domains (e.g., between two different hyper-processes). Thus, under the control of the IPC logic 362, in order for a first hyper-process to communicate with another hyper-process, the first hyper-process needs to route a message to the micro-hypervisor 360. In response, the micro-hypervisor 360 switches from a first protection domain (e.g., first hyper-process 372) to a second protection domain (e.g., second hyper-process 374) and copies the message from an address space associated with the first hyper-process 372 to a different address space associated with the second hyper-process 374.
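The domain switch and message copy described above can be sketched as follows. The class names and the list standing in for a private address space are hypothetical; the point illustrated is that the message is copied by the kernel across isolated address spaces rather than shared by reference.

```python
# Illustrative sketch of the IPC path: a hyper-process hands a message to
# the micro-hypervisor, which switches protection domains and copies the
# message into the receiver's (otherwise inaccessible) address space.

class ProtectionDomain:
    def __init__(self, name):
        self.name = name
        self.address_space = []   # stands in for a private address space

class MicroHypervisor:
    def __init__(self):
        self.current_domain = None

    def ipc_send(self, sender, receiver, message):
        # Messages never flow directly between domains; the kernel mediates.
        self.current_domain = receiver                # protection-domain switch
        receiver.address_space.append(dict(message))  # a copy, not a shared reference

hv = MicroHypervisor()
master_controller = ProtectionDomain("master_controller")
guest_monitor = ProtectionDomain("guest_monitor")
hv.ipc_send(master_controller, guest_monitor,
            {"event": "vm_exit", "reason": "EPT"})
```

Because the receiver only ever sees a copy placed in its own address space, a compromised hyper-process cannot reach back into the sender's memory through the channel.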

Another mechanism provided by the micro-hypervisor 360 is resource allocation logic 364. The resource allocation logic 364 enables a first software component to share one or more memory pages with a second software component under the control of the micro-hypervisor 360. Being aware of the location of one or more memory pages, the micro-hypervisor 360 provides the protection domain associated with the second software component access to the memory location(s) associated with the one or more memory pages.

Also, the micro-hypervisor 360 contains scheduling logic 366 that, when invoked, selects the highest-priority scheduling context and dispatches the execution context associated with the scheduling context. As a result, the scheduling logic 366 ensures that, at some point in time, all of the software components can run on the hardware processor 210 as defined by the scheduling context. Also, the scheduling logic 366 ensures that no software component can monopolize the hardware processor 210 for longer than defined by the scheduling context.
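The selection step can be sketched with a priority queue. The run-queue representation, process names, priorities, and quantum values are illustrative assumptions; the sketch only shows the "highest priority first, bounded by a quantum" behavior described above.

```python
import heapq

# Minimal sketch: the scheduler always dispatches the runnable execution
# context whose scheduling context has the highest priority, and runs it
# for at most its quantum.

def dispatch(run_queue):
    # run_queue holds (negated priority, name, quantum_ms) tuples so that
    # heapq's min-heap yields the highest-priority scheduling context first.
    neg_priority, name, quantum_ms = heapq.heappop(run_queue)
    return name, quantum_ms  # run `name` for at most quantum_ms milliseconds

run_queue = []
heapq.heappush(run_queue, (-1, "master_controller", 10))
heapq.heappush(run_queue, (-5, "guest_monitor", 5))
heapq.heappush(run_queue, (-3, "threat_protection", 5))

name, quantum = dispatch(run_queue)   # "guest_monitor", the highest priority
```

The quantum bound is what prevents any single hyper-process from monopolizing the hardware processor, matching the temporal separation discussed above.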

Lastly, the micro-hypervisor 360 contains interrupt delivery control logic 368, which ensures that any interrupts that occur are delivered to the micro-hypervisor 360.

2. Master Controller

Referring still to FIG. 3, generally operating as a root task, the master controller process 372 is responsible for enforcing policy rules directed to operations of the virtualization software architecture 150. This responsibility is in contrast to the micro-hypervisor 360, which provides mechanisms for inter-process communications and resource allocation, but does not dictate how and when such functions occur. For instance, the master controller process 372 may be configured to conduct a number of policy decisions, including some or all of the following: (1) memory allocation (e.g., distinct physical address space assigned to different software components); (2) execution time allotment (e.g., scheduling and duration of execution time allotted on a selected granular (e.g., per-process) basis); (3) virtual machine creation (e.g., number of VMs, OS type, etc.); and/or (4) inter-process communications (e.g., which processes are permitted to communicate with which processes, etc.).

Additionally, the master controller process 372 is responsible for the allocation of resources. Initially, the master controller process 372 receives access to most of the physical resources, except for access to security critical resources that should be driven by high privileged (Ring-0) components, not user space (Ring-3) software components such as hyper-processes. For instance, while precluded from access to the memory management unit (MMU) or the interrupt controller, the master controller process 372 may be configured to control the selection of which software components are responsible for driving certain network devices.

The master controller process 372 is platform agnostic. Thus, the master controller process 372 may be configured to enumerate what hardware is available to a particular process (or software component) and to configure the state of the hardware (e.g., activate, place into sleep state, etc.).

By separating the master controller process 372 from the micro-hypervisor 360, a number of benefits are achieved. One inherent benefit is increased security. When the functionality is placed into a single binary, which is running in host mode, any vulnerability may place the entire computing device at risk. In contrast, each of the software components within the host mode is running in its own separate address space.

3. Guest Monitor

Referring still to FIG. 3, the guest monitor process 374 is a running instance of a user space application that is responsible for managing the execution of the virtual machine 170, which includes operating in concert with the threat protection process 376 to determine whether or not certain events, detected by the guest monitor process 374 during processing of the object 335 within the VM 170, are malicious. As an example, in response to an extended page table (EPT) violation, the virtualization hardware causes a VM Exit to the virtualization layer 185. The guest monitor process 374 identifies the EPT violation as an unpermitted attempt to access a memory page protected by the nested page tables. The occurrence of the VM Exit may prompt the guest monitor process 374 to obtain and forward metadata associated with the EPT violation (as monitored by the guest agent 175) to the threat protection process 376. Based on the metadata, the threat protection process 376 determines if the event was suspicious, malicious or benign.

As an illustrative example, it is noted that there are certain events that cause a transition of control flow from the guest mode to the host mode. The guest monitor process 374 can configure, on an event basis, which events should trigger a transition from the guest mode to the host mode. One event may involve the execution of a privileged processor instruction by the vCPU 303 within the virtual machine 170. In response to execution by the vCPU 303 of a privileged instruction, the micro-hypervisor 360 gains execution control of the endpoint device 1403 and generates a message to the guest monitor process 374, which is responsible for handling the event.

The guest monitor process 374 also manages permissions of the nested page tables under control of the virtualization layer 185. More specifically, the micro-hypervisor 360 includes a mechanism (i.e., paging control logic 365) to populate the nested page tables. The guest monitor process 374 features permission adjustment logic 375 that alters the page permissions. One technique for altering the page permissions may involve selecting a particular nested page table among multiple nested page tables, which provides the same memory address translation but is set with page permissions for the targeted memory pages that differ from page permissions for other nested page tables. Some of the functionality of the permission adjustment logic 375 may be based, at least in part, on functionality within paging control logic 365 that is accessible via an API (not shown).
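The table-selection technique can be sketched as follows. The table names ("default", "monitor"), the single-page translation, and the permission sets are illustrative assumptions; the sketch shows only the key idea that permissions change by switching which nested page table is active, not by rewriting entries.

```python
# Hypothetical sketch: several nested page tables provide the same
# GPA -> host-physical translation but differ in permissions, so
# "altering" permissions amounts to switching the active table.

TRANSLATION = {0x5000: 0x9000}  # identical mapping in every table

nested_tables = {
    "default": {page: {"r", "w", "x"} for page in TRANSLATION},
    "monitor": {page: {"r"} for page in TRANSLATION},  # traps writes/executes
}

active_table = "default"

def adjust_permissions(table_name):
    # Switching the active table changes effective permissions in one step.
    global active_table
    active_table = table_name

def is_permitted(page, access):
    return access in nested_tables[active_table].get(page, set())

adjust_permissions("monitor")
# A write to page 0x5000 now faults (EPT violation -> VM Exit), even though
# the address translation itself is unchanged.
```

Switching tables rather than editing entries keeps the adjustment atomic from the guest's perspective, which is one plausible reason for the multi-table design described above.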

The guest monitor process 374 also includes interrupt logic 377, which is responsible for injecting virtual interrupts to the ISR agent 315 within the guest OS kernel 301. The virtual interrupts are intended for the ISR agent 315 to assume control over certain operations of the virtual machine 170.

4. Threat Protection Component

As described above and shown in FIG. 3, detection of a suspicious and/or malicious object 335 may be performed by static and dynamic analysis of the object 335 within the virtual machine 170. Events associated with the process are monitored and stored by the guest agent process 175. Operating in concert with the guest agent process 175, the threat protection process 376 is responsible for further malware detection on the endpoint device 1403 based on an analysis of events received from the guest agent process 175 running in the virtual machine 170. It is contemplated, however, that detection of suspicious/malicious activity may also be conducted completely outside the guest environment 160, such as solely within the threat protection logic 376 of the host environment 180. The threat protection logic 376 may rely on an interaction with the guest agent process 175 when it needs to receive semantic information from inside the guest OS that the host environment 180 could not otherwise obtain itself. Examples of semantic information may identify whether malicious activity conducted within a certain page of memory is associated with a particular file, segment of code, or another data type.

After analysis, the detected events are correlated and classified as benign (i.e., determination of the analyzed object 335 being malicious is less than a first level of probability); suspicious (i.e., determination of the analyzed object 335 being malicious is between the first level and a second level of probability); or malicious (i.e., determination of the analyzed object 335 being malicious is greater than the second level of probability). The correlation and classification operations may be accomplished by a behavioral analysis logic 380 and a classifier 385. The behavioral analysis logic 380 and classifier 385 may cooperate to analyze and classify certain observed behaviors of the object (based on events) as indicative of malware. In particular, the observed run-time behaviors by the guest agent 175 are provided to the behavioral analysis logic 380 as dynamic analysis results. These events may include metadata and other information associated with an EPT violation or any other VM Exit. Additionally, static analysis results and dynamic analysis results may be provided to the threat protection process 376 via the guest monitor process 374.

More specifically, the static analysis results and dynamic analysis results may be stored in memory 220, along with any additional metadata from the guest agent process 175. These results may be provided via coordinated IPC-based communication to the behavioral analysis logic 380. Alternatively, the results and/or events may be provided or reported via the network adapter 304 for the transmission to the MDS appliance 1402 for correlation. The behavioral analysis logic 380 may be embodied as rules-based correlation logic illustratively executing as an isolated process (software component) that communicates with the guest environment 160 via the guest monitor process 374.

In an embodiment, the behavioral analysis logic 380 may be configured to operate on correlation rules that define, among other things, patterns (e.g., sequences) of known malicious events (if-then statements with respect to, e.g., attempts by a process to change memory in a certain way that is known to be malicious). The events may collectively correlate to malicious behavior. The dynamic analysis results, as well as the static analysis results, may then be correlated against the rules of the behavioral analysis logic 380 to generate correlation information pertaining to, for example, a level of risk or a numerical score used to arrive at a decision of maliciousness.

The classifier 385 may be configured to use the correlation information provided by the behavioral analysis logic 380 to render a decision as to whether the object 335 (or monitored events from a guest process 320 or guest OS kernel 301) is malicious. Illustratively, the classifier 385 may be configured to classify the correlation information, including monitored behaviors (expected and unexpected/anomalous) and access violations, relative to those of known malware and benign content.
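The correlation-and-classification flow can be sketched end to end. The rule weights, event names, and the first/second probability thresholds are hypothetical stand-ins; only the structure (rules produce a score, two thresholds partition the score into benign/suspicious/malicious, as described above) is taken from the text.

```python
# Illustrative sketch: correlation rules score the observed events, and the
# classifier maps the score to one of three verdicts using the first and
# second probability levels described above.

CORRELATION_RULES = {            # event pattern -> weight toward maliciousness
    "ept_write_violation": 0.4,
    "spawned_process": 0.2,
    "restricted_port_access": 0.3,
}

FIRST_LEVEL, SECOND_LEVEL = 0.3, 0.7   # hypothetical probability thresholds

def correlate(events):
    score = sum(CORRELATION_RULES.get(event, 0.0) for event in events)
    return min(score, 1.0)

def classify(score):
    if score < FIRST_LEVEL:
        return "benign"           # below the first level of probability
    if score <= SECOND_LEVEL:
        return "suspicious"       # between the first and second levels
    return "malicious"            # above the second level

events = ["ept_write_violation", "spawned_process", "restricted_port_access"]
verdict = classify(correlate(events))   # "malicious"
```

Pushing updated correlation rules, as described below, would correspond here to replacing entries in `CORRELATION_RULES` without changing the classifier itself.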

Periodically, via the guest OS kernel 301, rules may be pushed from the MDS appliance 1402 or another computing device to the endpoint 1403 to update the behavioral analysis logic 380, where the rules may be embodied as different (updated) behaviors to monitor. For example, the correlation rules pushed to the behavioral analysis logic 380 may include, e.g., whether a running process or application program has spawned processes, requests to use certain network ports that are not ordinarily used by the application program, and/or attempts to access data in memory locations not allocated to the guest application running the object. Alternatively, the correlation rules may be pulled based on a request from an endpoint device 1403 to determine whether new rules are available, and in response, the new rules are downloaded.

Illustratively, the behavioral analysis logic 380 and classifier 385 may be implemented as separate modules although, in the alternative, the behavioral analysis logic 380 and classifier 385 may be implemented as a single module disposed over (i.e., running on top of) the micro-hypervisor 360. The behavioral analysis logic 380 may be configured to correlate observed behaviors (e.g., results of static and dynamic analysis) with known malware and/or benign objects (embodied as defined rules) and generate an output (e.g., a level of risk or a numerical score associated with an object) that is provided to and used by the classifier 385 to render a decision of malware based on the risk level or score exceeding a probability threshold. The reporting module 336, which executes as a user mode process (perhaps within the guest OS kernel 301), is configured to generate an alert for transmission external to the endpoint device 1403 (e.g., to one or more other endpoint devices, a management appliance, or MDS appliance 1402) in accordance with "post-solution" activity.

V. General MDS Appliance Architecture

Referring now to FIG. 4, an exemplary block diagram of a physical representation of the MDS appliance 1402 that supports virtualization is shown. Herein, the MDS appliance 1402 illustratively includes one or more hardware processors 410 (referred to as “hardware processor(s)”), a memory 420, one or more network interfaces (referred to as “network interface(s)”) 430, and one or more network devices (referred to as “network device(s)”) 440 connected by a system interconnect 450, such as a bus. These components are at least partially encased in a housing 400, which is made entirely or partially of a rigid material (e.g., hardened plastic, metal, glass, composite, or any combination thereof) that protects these components from atmospheric conditions.

Herein, the network interface(s) 430 may feature a digital tap that is configured to copy and/or re-route data received over a network to the MDS appliance 1402. The network interface 430 may be implemented as part of the MDS appliance 1402 or as a separate standalone component operating in conjunction with the MDS appliance 1402. Alternatively, the MDS appliance 1402 may be placed in-line to process received data in real-time. It is contemplated that the MDS appliance 1402 may be a dedicated hardware device or dedicated software operating within a multi-operational device. Alternatively, the MDS appliance 1402 may be piggybacked on another network device like a firewall, switch, or gateway. As shown, the memory 420 stores a plurality of OS kernels 5001-500N (hereinafter referred to as "guest OS kernel(s)") and additional host software 570.

Herein, the host software 570 may include instances of user-space applications operating as user-level VMMs which, when executed, produce processes referred to as "hyper-processes." The different instances of host software 570 are isolated from each other and run in separate physical address spaces. In communication with the micro-hypervisor 560, the resulting hyper-processes are responsible for controlling operability of the MDS appliance 1402, including managing malware detection and classification. Such management of malware detection and classification may be accomplished through multiple hyper-processes, each of these hyper-processes operating as a threat detection system that is responsible for malware detection and classification for a particular OS type (e.g., featuring corresponding guest monitor and threat protection processes for each particular guest OS kernel).

The guest OSes 5001-500N (N>1), portions of which are resident in memory 420 and executed by the hardware processor 410, functionally organize the corresponding VMs 1701-170N by, inter alia, invoking operations in support of guest applications executing within the VMs 1701-170N. Examples of different OS types 5001-500N may include, but are not limited or restricted to, any of the following: (1) a version of the WINDOWS® operating system; (2) a version of the MAC OS® or IOS® operating systems; (3) a version of the LINUX® operating system; and/or (4) a version of the ANDROID® operating system.

VI. Virtualized Architecture—MDS Appliance

Referring now to FIG. 5, an exemplary embodiment of an improved virtualization software architecture 150 to detect and classify malware within the MDS appliance 1402 is shown. The virtualization software architecture 150 comprises guest environment 160 and host environment 180, both of which may be configured in accordance with a protection ring architecture as shown. While the protection ring architecture is shown for illustrative purposes, it is contemplated that other architectures that establish hierarchical privilege levels for virtualized software components may be utilized.

A. Guest Environment

As shown, the guest environment 160 comprises multiple (two or more) virtual machines 1701-170N (N>1), where a single virtual machine (VM) (e.g., VM 1701) or multiple (two or more) VMs (e.g., VMs 1701-1702) may analyze an object 335 for the presence of malware. As shown, a first virtual machine (VM) 1701 features a guest OS kernel 5001 of a first OS type (e.g., WINDOWS® OS) that is running in the most privileged level (Ring-0 505) along with one or more processes 522, some of which are instances of guest software 520 (hereinafter "guest application process(es)") that are running in a lesser privileged level (Ring-3 525).

As further shown, a second VM 1702 features a guest OS kernel 5002 of a second OS type (e.g., LINUX® OS) that is running in the most privileged level (Ring-0 506). Similar to process(es) 522 running on the first VM 1701, one or more processes 523 are running in a lesser privileged level (Ring-3 526) of the second VM 1702.

It is contemplated that malware detection on the MDS appliance 1402 may be conducted by one or more processes 522 running as part of the first VM 1701 and/or one or more processes 523 running as part of the second VM 1702. These processes 522 and 523 may operate in a similar manner, as described herein.

1. Guest OS

In general, the guest OS 5001 manages operability of the first VM 1701, where some of these operations involve network connectivity, memory translation, and interrupt service delivery along with the handling of incoming service requests. More specifically, the guest OS 5001 may receive an input/output (I/O) request from the object 335 being processed by one or more guest software process(es) 522, and in some cases, translates the I/O request into instructions. These instructions may be used, at least in part, by virtual system hardware (e.g., vCPU) to drive one or more network devices, such as a virtual network adapter (e.g., virtual network interface card "vNIC"), for establishing communications with other network devices. Upon establishing connectivity with the private network 120 and/or the public network 110 of FIG. 1A, the MDS appliance 1402 may initiate alert messages via a reporting module 536 and the NIC 502 in response to detection that the object 335 is malicious. Additionally, with network connectivity, the guest OS 5001 may receive software updates from administrators via the private network 120 of FIG. 1A or from a third party provider via the public network 110 of FIG. 1A.

Similarly, the guest OS kernel 5002 manages operability of the second VM 1702 in a manner similar to the management of the first VM 1701 by the guest OS kernel 5001.

2. Guest Agent

According to one embodiment of the disclosure, the guest agent process 175 is based on a software component configured to provide the virtualization layer 185 with metadata that may assist in the handling of malware detection. Instrumented into guest software 520, guest OS kernel 5001 or operating as a separate module as shown, the guest agent process 175 is configured to provide metadata to the virtualization layer 185 in response to at least one selected event. For the second VM 1702, it is contemplated that another guest agent process (not shown) will operate in a similar fashion.

Each of the guest agent processes may comprise one or more ring buffers, which record certain events that may be considered of interest for malware detection, as described above. Information associated with the stored events may be recovered and provided to a corresponding threat detection system 5741-574N (e.g., guest monitor process 5761 for the first VM 1701 or guest monitor process 5762 for the second VM 1702). For instance, the guest agent process 175 may be configured to download the metadata periodically or aperiodically (e.g., when the ring buffer 540 exceeds a certain storage level or in response to a request) to the guest monitor process 5761 for routing to the threat protection process 5781. Likewise, the guest agent (not shown) operating in the second VM 1702 may periodically or aperiodically download metadata to the guest monitor process 5762 for routing to the threat protection process 5782.
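
The buffer-then-download behavior just described can be sketched as follows. This is a minimal illustration under stated assumptions, not the appliance's implementation: the class name, the event shapes, and the 75% flush threshold are all invented for the example.

```python
from collections import deque

class EventRingBuffer:
    """Hypothetical sketch of a guest agent ring buffer: events of interest are
    recorded, and the buffered metadata is downloaded to the guest monitor
    process once the buffer exceeds a certain storage level."""

    def __init__(self, capacity=8, flush_at=0.75):
        self.buffer = deque(maxlen=capacity)  # oldest entries overwritten when full
        self.capacity = capacity
        self.flush_at = flush_at              # fill fraction that triggers a download
        self.downloads = []                   # stands in for routing to the guest monitor

    def record(self, event):
        self.buffer.append(event)
        if len(self.buffer) >= self.capacity * self.flush_at:
            self.download()

    def download(self):
        """Hand the buffered event metadata off (simulated) and clear the buffer."""
        batch = list(self.buffer)
        self.buffer.clear()
        self.downloads.append(batch)
        return batch

buf = EventRingBuffer(capacity=4, flush_at=0.75)
buf.record({"type": "file_write", "path": "C:\\tmp\\a"})
buf.record({"type": "registry_set", "key": "Run"})
buf.record({"type": "api_call", "name": "CreateRemoteThread"})  # 3/4 >= 0.75: flush
```

A request-driven download (the "in response to a request" case) would simply call `download()` directly.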

B. Host Environment

As further shown in FIG. 5, the host environment 180 features a protection ring architecture that is arranged with a privilege hierarchy from the most privileged level 550 (Ring-0) to a lesser privileged level 552 (Ring-3). Positioned at the most privileged level 550 (Ring-0), the micro-hypervisor 560 is configured to directly interact with the physical hardware, such as hardware processor 410 or memory 420 of FIG. 4.

Running on top of the micro-hypervisor 560 in Ring-3 552, a plurality of processes, being instances of certain host software 570, communicate with the micro-hypervisor 560. Some of these hyper-processes include the master controller component 572 and one or more threat detection systems 5741-574N. The number of threat detection systems 5741-574N corresponds to the number of virtual machines 1701-170N, providing a one-to-one correspondence.

Each threat detection system 5741-574N comprises a guest monitor component 5761-576N and a threat protection component 5781-578N. According to one embodiment, each of the master controller component 572, the guest monitor components 5761-576N and the threat protection components 5781-578N is isolated from the others (i.e., separate software components with different functionality, each running in a separate address space). It is contemplated, however, that the guest monitor components 5761-576N may be isolated from the master controller component 572 and the threat protection components 5781-578N, while the guest monitor components 5761-576N share the same binary and may not be isolated from each other.

1. Micro-Hypervisor

The micro-hypervisor 560 may be configured as a light-weight hypervisor that operates as a host OS kernel, similar to the micro-hypervisor 360 of FIG. 3. As described above, the micro-hypervisor 560 features logic (mechanisms) for controlling operability of the computing device, such as the MDS appliance 1402, where these mechanisms include inter-process communication (IPC) logic 562, resource allocation logic 564, scheduling logic 566, interrupt delivery control logic 568 and kernel features 569.

2. Master Controller

Referring still to FIG. 5, similar in operation to the master controller process 372 of FIG. 3, the master controller process 572 is responsible for enforcing policy rules directed to operations of the virtualization software architecture 150. For instance, the master controller component 572 may be configured to conduct a number of policy decisions, including some or all of the following: (1) memory allocation (e.g., distinct physical address space assigned to different software components); (2) execution time allotment (e.g., scheduling and duration of execution time allotted on a selected granular, per-process basis); (3) virtual machine creation (e.g., number of VMs, OS type, etc.); (4) inter-process communications (e.g., which processes are permitted to communicate with which processes, etc.); and/or (5) coordination of VM operations and interactions between the components of the VMs 1701-170N and components within the host environment 180 for conducting threat detection.
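
One of the policy decisions above, which processes are permitted to communicate with which, can be illustrated with a simple permission table. This is a hedged sketch: the process names and the table's contents are hypothetical, and a real master controller would enforce such rules through the micro-hypervisor's IPC logic rather than an in-process lookup.

```python
# Hypothetical IPC policy: the set of (source, destination) pairs that the
# master controller permits. Anything not listed is denied by default.
IPC_POLICY = {
    ("guest_monitor_1", "threat_protection_1"),
    ("threat_protection_1", "master_controller"),
}

def ipc_permitted(src: str, dst: str) -> bool:
    """Return True only if the (src, dst) pair appears in the policy table."""
    return (src, dst) in IPC_POLICY

# A guest monitor may forward metadata to its paired threat protection process,
# but may not message the master controller directly under this example policy.
allowed = ipc_permitted("guest_monitor_1", "threat_protection_1")
denied = ipc_permitted("guest_monitor_1", "master_controller")
```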

3. Guest Monitor(s)

Referring still to FIG. 5, each guest monitor process 5761-576N is based on a user space application that is responsible for managing the execution of a corresponding virtual machine (e.g. first VM 1701), which is operating in concert with a corresponding threat protection process (e.g., threat protection process 5781 of threat protection processes 5781-578N). Herein, the threat protection process 5781 is configured to determine whether or not certain events, received by that particular guest monitor process (e.g., guest monitor process 5761) during processing of the object 335 within the VM 1701, are malicious.

In response to an extended page table (EPT) violation, which causes the virtualization hardware to generate a VM Exit to the virtualization layer 185, the guest monitor process 5761 identifies that an unpermitted operation was attempted on a memory page associated with the nested page table. The occurrence of the VM Exit may prompt the guest monitor process 5761 to obtain and forward metadata associated with the EPT violation (as monitored by the guest agent process 175) to the threat protection process 5781. Based on the metadata, the threat protection process 5781 determines whether or not the event was malicious.

If the event was benign, although the page is access protected, the guest monitor process 5761 may be responsible for emulating the attempted access. For instance, for an EPT violation triggered for a write-protection violation that is determined to be benign, the guest monitor process 5761 would need to simulate the write access. Alternatively, the guest monitor process 5761 could relax the page permissions (to their original permissions) and resume the VM 1701. Then, the write access would be restarted and there would be no EPT violation anymore.
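
The two benign-handling options just described (emulating the single write versus relaxing the page permissions and resuming the VM so the write replays without faulting) can be sketched as follows. All names and the dictionary representation of a guest page are assumptions made for illustration; real handling operates on EPT entries and VM-exit state.

```python
def handle_ept_violation(page: dict, write_value: int, verdict: str,
                         emulate: bool = True) -> str:
    """Sketch of the guest monitor's response to a write-protection EPT
    violation, after the threat protection process has rendered a verdict."""
    if verdict == "malicious":
        return "terminate_analysis"      # escalate; do not complete the access
    if emulate:
        page["data"] = write_value       # option 1: emulate the attempted write
    else:
        page["writable"] = True          # option 2: restore original permissions ...
        page["data"] = write_value       # ... and let the resumed VM retry the write
    return "resume_vm"

page = {"data": 0, "writable": False}
result = handle_ept_violation(page, 42, "benign")  # benign: write is emulated
```

The trade-off is that emulation keeps the page protected (future writes still trap), while relaxing permissions avoids further exits at the cost of losing visibility into that page.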

4. Threat Protection Component

As described above and shown in FIG. 5, detection of a suspicious and/or malicious object 335 may be performed by static and dynamic analysis of the object 335 within the virtual machine 1701. Events associated with the process are monitored and stored by the guest agent process 175. Operating in concert with the guest agent process 175, the threat protection component 5781 is responsible for further malware detection associated with an object under analysis within the first VM 1701. This malware detection may invoke an analysis of events received from the guest agent process 175 running in the virtual machine 1701.

During analysis, the detected events are correlated and classified as benign (i.e., determination of the analyzed object 335 being malicious is less than a first level of probability); suspicious (i.e., determination of the analyzed object 335 being malicious is between the first level and a second level of probability); or malicious (i.e., determination of the analyzed object 335 being malicious is greater than the second level of probability). The correlation and classification operations may be accomplished by behavioral analysis logic and a classifier, similar to the operations described in FIG. 3.
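
The three-way verdict based on two probability levels can be sketched as a small classifier. The 0.3/0.7 cutoffs are illustrative assumptions; the description above leaves the first and second levels of probability unspecified.

```python
def classify(p_malicious: float, first_level: float = 0.3,
             second_level: float = 0.7) -> str:
    """Map a maliciousness probability to the benign/suspicious/malicious
    verdicts: below the first level is benign, between the levels is
    suspicious, above the second level is malicious."""
    if p_malicious < first_level:
        return "benign"
    if p_malicious <= second_level:
        return "suspicious"
    return "malicious"
```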

Operating in concert with a guest agent process (not shown), which is operating within the ring-3 privileged area 526 of the second VM 1702, the threat protection process 5782 is responsible for the malware detection associated with the object 335 under analysis within the second VM 1702. The object 335 may be processed in the second VM 1702 concurrently with the processing of the object 335 within the first VM 1701. Alternatively, only one of the first VM 1701 or the second VM 1702 may process the object 335. The operations of the guest monitor process 5762 and the threat protection process 5782 are consistent with the operations of the guest monitor process 3761 and the threat protection process 3781 of FIG. 3.

VII. Guest/Host Level Threat Protection Operability

Referring to FIG. 6, a flowchart of exemplary operations of the virtualization software architecture of a computing device with threat protection functionality deployed within the host environment is shown. First, an object is received by a virtual machine for processing to determine whether the object is associated with a malicious attack (block 600). The virtual machine is present in a guest environment of the virtualization software architecture. The object is subjected to static analysis and/or dynamic analysis.

In accordance with the static analysis, the object undergoes a non-behavioral analysis, which includes analysis of the content of the object in order to detect anomalous characteristics (block 610). Such analysis may involve the use of signatures to determine whether certain data patterns provided by the signature match content associated with the object, where a match indicates that the content is benign (e.g., match of an entry in a white list), suspicious (e.g., no or partial match of an entry in a black list) or malicious (e.g., match of an entry in the black list). The static analysis may further involve the use of heuristics, where certain rules and/or policies are applied to the content of the object to detect anomalous characteristics of the object. Such detection may identify that the object is suspicious and that further dynamic analysis (i.e., monitoring run-time behavior of the object) is warranted. This static analysis may produce static analysis results (e.g., identification of communication protocol anomalies and/or suspect source addresses of known malicious servers), which may be provided to the threat protection process located within the host environment (block 620).
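
A minimal sketch of the signature step, assuming hash-based white and black lists; the list contents below are fabricated placeholders, and real signatures would also cover data patterns within the content rather than whole-object hashes only.

```python
import hashlib

# Hypothetical lists, populated here from placeholder byte strings so the
# example is self-contained. In practice these hold digests of known content.
WHITELIST = {hashlib.sha256(b"known-good installer").hexdigest()}
BLACKLIST = {hashlib.sha256(b"known-bad payload").hexdigest()}

def signature_verdict(content: bytes) -> str:
    """Compare the object's digest against the lists: a white-list match is
    benign, a black-list match is malicious, and an unknown object is
    suspicious, warranting dynamic analysis."""
    digest = hashlib.sha256(content).hexdigest()
    if digest in WHITELIST:
        return "benign"
    if digest in BLACKLIST:
        return "malicious"
    return "suspicious"
```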

Additionally or in the alternative, a dynamic analysis of the object may be conducted, where results of the dynamic analysis are provided to the threat protection process of the host environment (block 630). The dynamic analysis results may include events, namely captured, run-time behaviors of the object. The threat protection process analyzes the static analysis results and/or the dynamic analysis results to determine a level of correlation with behavioral rules that define, among other things, patterns (e.g., sequences) of known malicious events (block 640). This level of correlation identifies a level of risk (or a numerical score) that may be used to arrive at a decision of maliciousness, where the evaluation of the level of correlation is conducted in a host environment that is different from the guest environment in which the virtual machine operates. The classifier utilizes the level of risk (or numerical score) to determine whether the object is malicious (block 650). It is contemplated that this separation between the virtual machine and the threat protection process is conducted to reduce the possibility of the threat protection process being compromised if the virtual machine becomes compromised.
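
The correlation of captured events against behavioral rules that define sequences of known malicious events might be sketched as ordered-subsequence matching, with each matched rule contributing a weight to the numerical score. The rules, event names, and weights below are invented for illustration.

```python
# Hypothetical behavioral rules: (ordered event pattern, weight). A pattern
# matches if its events appear in order within the monitored event stream,
# not necessarily consecutively.
RULES = [
    (("open_process", "write_memory", "create_remote_thread"), 0.8),  # injection-like
    (("read_address_book", "open_socket"), 0.5),                      # exfiltration-like
]

def is_ordered_subsequence(pattern, events) -> bool:
    """True if every step of pattern occurs in events, in order."""
    it = iter(events)
    return all(step in it for step in pattern)  # 'in' advances the iterator

def risk_score(events) -> float:
    """Sum the weights of all rules whose patterns match the event stream."""
    return sum(w for pattern, w in RULES if is_ordered_subsequence(pattern, events))

observed = ["open_process", "read_file", "write_memory", "create_remote_thread"]
score = risk_score(observed)  # only the injection-like rule matches
```

A classifier would then compare this score against thresholds (as in the benign/suspicious/malicious levels described above) to reach a verdict.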

Thereafter, the results of the analysis from the threat protection component are returned to the guest environment, namely the reporting module, for placement into a perceivable format that is provided to a user (or administrator) via a network adapter or a display adapter (where display of the results is available to the computing device) as set forth in block 660.

Referring to FIG. 7, another flowchart of exemplary operations of the virtualization software architecture pertaining to a computing device with threat protection functionality deployed within the host environment is shown. First, one or more events produced during operations of the guest OS, such as the guest OS kernel, are monitored (block 700). Thereafter, the content associated with the monitored events is analyzed in order to detect anomalous characteristics (block 710). Such analysis may involve attempts to match content associated with the monitored events with data that has already been determined to denote that the event is benign (e.g., match identifies expected content associated with the event) or malicious (e.g., match immediately identifies content denoting anomalous behavior such as an unexpected call, unexpected disabling of a particular feature, or an anomalous memory access). The analysis may further involve the use of heuristics, where certain rules and/or policies are applied to the content of the event to detect anomalous characteristics. Such detection may identify that the event is suspicious (neither malicious nor benign) and that further dynamic analysis (i.e., monitoring run-time behavior of the object) is warranted. Thereafter, results of the analysis may be provided to the threat protection process located within the host environment (block 720).

A secondary analysis of the event may be conducted, where an order or sequence of the event in connection with other monitored events may be analyzed and the results provided to the threat protection process of the host environment (block 730). Thereafter, the threat protection process analyzes the first analysis results and/or the second analysis results to determine a level of correlation with behavioral rules that identifies a level of risk (or a numerical score) that may be used to arrive at a decision of maliciousness (block 740), where as before, the evaluation of the level of correlation is conducted in the host environment. The classifier utilizes the level of risk (or numerical score) to determine whether the object is malicious (block 750).

Thereafter, the results of the analysis from the threat protection component are returned to the guest environment, namely the reporting module, for placement into a perceivable format that is provided to a user (or administrator) via a network adapter or a display adapter where display of the results is available to the computing device (block 760).

In the foregoing description, the invention is described with reference to specific exemplary embodiments thereof. It will, however, be evident that various modifications and changes may be made thereto without departing from the broader spirit and scope of the invention as set forth in the appended claims.

Claims

1. A computing device comprising:

one or more hardware processors; and
a memory coupled to the one or more processors, the memory comprises one or more software components that, when executed by the one or more hardware processors, provide a virtualization software architecture including (i) a virtual machine, (ii) a plurality of hyper-processes and (iii) a hypervisor, wherein
the virtual machine to operate in a guest environment and includes a process that is configured to monitor behaviors of data under analysis within the virtual machine,
the plurality of hyper-processes to operate in a host environment and isolated from each other within an address space of the memory, the plurality of hyper-processes include a threat protection process to classify the data under analysis as malicious or non-malicious based on the monitored behaviors and a guest monitor process configured to manage execution of the virtual machine and operate with the process to obtain and forward metadata associated with the monitored behaviors to the threat protection process, and
the hypervisor is configured to enforce temporal separation of the plurality of hyper-processes and enable inter-process communications between the plurality of hyper-processes.

2. The computing device of claim 1, wherein the process is a guest agent operating within the virtual machine and, when executed by the processor, monitors behaviors of the data under analysis that includes an object being processed by a guest application running in the guest environment.

3. The computing device of claim 1, wherein the process is a guest agent operating within a guest operating system (OS) of the virtual machine and, when executed by the processor, monitors behaviors of the data under analysis that includes one or more events based on operations by the guest OS during execution of the virtual machine.

4. The computing device of claim 3, wherein the one or more events are based on operations conducted by a guest OS kernel of the guest OS during processing of the data under analysis within the virtual machine.

5. The computing device of claim 1, wherein the threat protection process is further configured to determine whether the data under analysis is malicious or non-malicious completely outside the guest environment.

6. The computing device of claim 5, wherein the process is a guest agent operating within a guest operating system (OS) of the virtual machine and, when executed by the processor, monitors behaviors of the data under analysis that includes one or more events based on operations by the guest OS during execution of the virtual machine.

7. The computing device of claim 1, wherein the process is a guest agent operating within a guest operating system (OS) of the virtual machine and, when executed by the processor, communicates with the threat protection component to provide semantic information from inside the guest OS to the threat protection component.

8. The computing device of claim 7, wherein the semantic information from inside the guest OS is unavailable to the host environment, including the threat protection component, other than through the guest agent.

9. The computing device of claim 1, wherein the plurality of hyper-processes further includes a master controller process, the master controller process is configured to enforce policy rules directed to operations of the virtualization software architecture.

10. The computing device of claim 9, wherein a software component of the one or more software components comprises the hypervisor configured to enforce temporal separation of the plurality of hyper-processes.

11. The computing device of claim 1, wherein the plurality of hyper-processes operating in the host environment are based on code located in different binaries to isolate the plurality of hyper-processes from each other.

12. The computing device of claim 1, wherein the plurality of hyper-processes are isolated in which each hyper-process of the plurality of hyper-processes is running in its own separate address space.

13. The computing device of claim 1, wherein the host environment includes a master controller process, being a hyper-process operating separately from the plurality of hyper-processes.

14. The computing device of claim 1, wherein the hypervisor is configured to enforce temporal separation of all of the plurality of hyper-processes and enable inter-process communications between all of the plurality of hyper-processes.

15. The computing device of claim 1, wherein the plurality of hyper-processes operating in the host environment are isolated from each other when each of the plurality of hyper-processes is assigned a different memory address space within the address space of the memory.

16. The computing device of claim 1, wherein the hypervisor enforces separation through a scheduling context, which is used for scheduling thread level activities within each of the plurality of hyper-processes.

17. The computing device of claim 16, wherein the scheduling context includes a priority and a quantum time for execution of a thread within a protection domain associated with a first hyper-process of the plurality of hyper-processes.

18. A computerized method comprising:

configuring a virtualization software architecture with a guest environment and a host environment;
processing an object by a virtual machine operating in the guest environment, the virtual machine includes a process that monitors behaviors of the object during the processing of the object by the virtual machine;
classifying, by a plurality of hyper-processes operating in a host environment different from the guest environment, the object that undergoes processing by the virtual machine as malicious or non-malicious based at least on one or more of the monitored behaviors provided to a threat protection process being one of the plurality of hyper-processes; and
supporting inter-process communications between the plurality of hyper-processes by a hypervisor communicatively coupled to the plurality of hyper-processes,
wherein the plurality of hyper-processes include the threat protection process to classify the object as malicious or non-malicious based on the monitored behaviors and a guest monitor process configured to manage execution of the virtual machine and operate with the process to obtain and forward metadata associated with the monitored behaviors to the threat protection process that is isolated from the guest monitor process.

19. The computerized method of claim 18, wherein the process is a guest agent operating within the virtual machine and, when executed by a processor, monitors the behaviors of the object being processed by a guest application running in the virtual machine.

20. The computerized method of claim 18, wherein the process is a guest agent operating within a guest operating system (OS) of the virtual machine and, when executed by a processor, monitors behaviors of the object that includes one or more events based on operations by the guest OS during execution of the virtual machine.

21. The computerized method of claim 20, wherein the one or more events are based on operations conducted by a guest OS kernel of the guest OS during processing of the object within the virtual machine.

22. The computerized method of claim 18, wherein the threat protection process is further configured to determine whether the object is malicious or non-malicious completely outside the guest environment.

23. The computerized method of claim 18, wherein the process is a guest agent operating within a guest operating system (OS) of the virtual machine and, when executed by the processor, communicates with the threat protection process to provide semantic information from inside the guest OS to the threat protection process.

24. The computerized method of claim 23, wherein the semantic information from inside the guest OS is only made available to the threat protection process of the host environment from the guest agent.

25. The computerized method of claim 18, wherein the threat protection process is isolated from the guest monitor process in which code associated with the threat protection process is stored within an address space separate from an address space used to store code associated with the guest monitor process.

26. A non-transitory storage medium including software that, when executed by a processor, configures a virtualization software architecture with a guest environment and a host environment including a hypervisor, the medium comprising:

a virtual machine operating in the guest environment, the virtual machine to process an object and monitor behaviors of the object during processing of the object;
a plurality of hyper-processes operating in a host environment different from the guest environment, the plurality of hyper-processes including a threat protection process to classify the object as malicious or non-malicious based at least on one or more of the monitored behaviors provided to the threat protection process; and
supporting inter-process communications between the plurality of hyper-processes by the hypervisor communicatively coupled to the plurality of hyper-processes,
wherein the plurality of hyper-processes include the threat protection process to classify the object as malicious or non-malicious based on the monitored behaviors and a guest monitor process configured to manage execution of the virtual machine and operate with the process to obtain and forward metadata associated with the monitored behaviors to the threat protection process that is isolated from the guest monitor process.

27. The non-transitory storage medium of claim 26, wherein a guest agent is operating within the virtual machine to monitor the behaviors of the object being processed by a guest application running in the virtual machine.

28. The non-transitory storage medium of claim 26, wherein a guest agent is operating within a guest operating system (OS) of the virtual machine and, when executed by the processor, monitors behaviors of the object that includes one or more events based on operations by the guest OS during execution of the virtual machine.

29. The non-transitory storage medium of claim 28, wherein the one or more events are based on operations conducted by a guest OS kernel of the guest OS during processing of the object within the virtual machine.

30. The non-transitory storage medium of claim 26, wherein the threat protection process is further configured to determine whether the object is malicious or non-malicious completely outside the guest environment.

31. The non-transitory storage medium of claim 26, wherein a guest agent is operating within a guest operating system (OS) of the virtual machine and, when executed by the processor, communicates with the threat protection process to provide semantic information from inside the guest OS to the threat protection process.

32. The non-transitory storage medium of claim 31, wherein the semantic information from inside the guest OS is only made available to the threat protection process of the host environment from the guest agent.

33. The computerized method of claim 18, wherein the threat protection process is isolated from the guest monitor process as the threat protection process is assigned a memory address space that is separate from and different than a memory address space assigned to the guest monitor process.

34. The non-transitory storage medium of claim 26, wherein the threat protection process is isolated from the guest monitor process as the threat protection process is assigned a memory address space that is separate from and different than a memory address space assigned to the guest monitor process.

Referenced Cited
U.S. Patent Documents
5878560 March 9, 1999 Johnson
6013455 January 11, 2000 Bandman et al.
7409719 August 5, 2008 Armstrong
7424745 September 9, 2008 Cheston et al.
7937387 May 3, 2011 Frazier et al.
7958558 June 7, 2011 Leake et al.
7996836 August 9, 2011 McCorkendale
8006305 August 23, 2011 Aziz
8010667 August 30, 2011 Zhang et al.
8069484 November 29, 2011 McMillan et al.
8151263 April 3, 2012 Venkitachalam et al.
8171553 May 1, 2012 Aziz et al.
8201169 June 12, 2012 Venkitachalam et al.
8204984 June 19, 2012 Aziz et al.
8233882 July 31, 2012 Rogel
8266395 September 11, 2012 Li
8271978 September 18, 2012 Bennett et al.
8290912 October 16, 2012 Searls et al.
8291499 October 16, 2012 Aziz et al.
8332571 December 11, 2012 Edwards, Sr.
8347380 January 1, 2013 Satish
8353031 January 8, 2013 Rajan et al.
8375444 February 12, 2013 Aziz et al.
8387046 February 26, 2013 Montague et al.
8397306 March 12, 2013 Tormasov
8418230 April 9, 2013 Cornelius et al.
8479276 July 2, 2013 Vaystikh et al.
8479294 July 2, 2013 Li et al.
8510827 August 13, 2013 Leake et al.
8516593 August 20, 2013 Aziz
8522236 August 27, 2013 Zimmer et al.
8528086 September 3, 2013 Aziz
8539582 September 17, 2013 Aziz et al.
8549638 October 1, 2013 Aziz
8561177 October 15, 2013 Aziz et al.
8566476 October 22, 2013 Shiffer et al.
8566946 October 22, 2013 Aziz et al.
8584239 November 12, 2013 Aziz et al.
8612659 December 17, 2013 Serebrin et al.
8635696 January 21, 2014 Aziz
8689333 April 1, 2014 Aziz
8713681 April 29, 2014 Silberman et al.
8756696 June 17, 2014 Miller
8775715 July 8, 2014 Tsirkin et al.
8776180 July 8, 2014 Kumar et al.
8776229 July 8, 2014 Aziz
8793278 July 29, 2014 Frazier et al.
8793787 July 29, 2014 Ismael et al.
8799997 August 5, 2014 Spiers et al.
8805947 August 12, 2014 Kuzkin et al.
8832352 September 9, 2014 Tsirkin et al.
8832829 September 9, 2014 Manni et al.
8839245 September 16, 2014 Khajuria et al.
8850060 September 30, 2014 Beloussov et al.
8850571 September 30, 2014 Staniford et al.
8863279 October 14, 2014 McDougal et al.
8875295 October 28, 2014 Lutas
8881271 November 4, 2014 Butler, II
8881282 November 4, 2014 Aziz et al.
8898788 November 25, 2014 Aziz et al.
8910238 December 9, 2014 Lukacs et al.
8935779 January 13, 2015 Manni et al.
8949257 February 3, 2015 Shiffer et al.
8984478 March 17, 2015 Epstein
8984638 March 17, 2015 Aziz et al.
8990939 March 24, 2015 Staniford et al.
8990944 March 24, 2015 Singh et al.
8997219 March 31, 2015 Staniford et al.
9003402 April 7, 2015 Carbone et al.
9009822 April 14, 2015 Ismael et al.
9009823 April 14, 2015 Ismael et al.
9027125 May 5, 2015 Kumar et al.
9027135 May 5, 2015 Aziz
9071638 June 30, 2015 Aziz et al.
9087199 July 21, 2015 Sallam
9092616 July 28, 2015 Kumar et al.
9092625 July 28, 2015 Kashyap et al.
9104867 August 11, 2015 Thioux et al.
9106630 August 11, 2015 Frazier et al.
9106694 August 11, 2015 Aziz et al.
9117079 August 25, 2015 Huang et al.
9118715 August 25, 2015 Staniford et al.
9159035 October 13, 2015 Ismael et al.
9171160 October 27, 2015 Vincent et al.
9176843 November 3, 2015 Ismael et al.
9189627 November 17, 2015 Islam
9195829 November 24, 2015 Goradia et al.
9197664 November 24, 2015 Aziz et al.
9213651 December 15, 2015 Malyugin et al.
9223972 December 29, 2015 Vincent et al.
9225740 December 29, 2015 Ismael et al.
9241010 January 19, 2016 Bennett et al.
9251343 February 2, 2016 Vincent et al.
9262635 February 16, 2016 Paithane et al.
9268936 February 23, 2016 Butler
9275229 March 1, 2016 LeMasters
9282109 March 8, 2016 Aziz et al.
9292686 March 22, 2016 Ismael et al.
9294501 March 22, 2016 Mesdaq et al.
9300686 March 29, 2016 Pidathala et al.
9306960 April 5, 2016 Aziz
9306974 April 5, 2016 Aziz et al.
9311479 April 12, 2016 Manni et al.
9355247 May 31, 2016 Thioux et al.
9356944 May 31, 2016 Aziz
9363280 June 7, 2016 Rivlin et al.
9367681 June 14, 2016 Ismael et al.
9398028 July 19, 2016 Karandikar et al.
9413781 August 9, 2016 Cunningham et al.
9426071 August 23, 2016 Caldejon et al.
9430646 August 30, 2016 Mushtaq et al.
9432389 August 30, 2016 Khalid et al.
9438613 September 6, 2016 Paithane et al.
9438622 September 6, 2016 Staniford et al.
9438623 September 6, 2016 Thioux et al.
9459901 October 4, 2016 Jung et al.
9467460 October 11, 2016 Otvagin et al.
9483644 November 1, 2016 Paithane et al.
9495180 November 15, 2016 Ismael
9497213 November 15, 2016 Thompson et al.
9507935 November 29, 2016 Ismael et al.
9516057 December 6, 2016 Aziz
9519782 December 13, 2016 Aziz et al.
9536091 January 3, 2017 Paithane et al.
9537972 January 3, 2017 Edwards et al.
9560059 January 31, 2017 Islam
9563488 February 7, 2017 Fadel
9565202 February 7, 2017 Kindlund et al.
9591015 March 7, 2017 Amin et al.
9591020 March 7, 2017 Aziz
9594904 March 14, 2017 Jain et al.
9594905 March 14, 2017 Ismael et al.
9594912 March 14, 2017 Thioux et al.
9609007 March 28, 2017 Rivlin et al.
9626509 April 18, 2017 Khalid et al.
9628498 April 18, 2017 Aziz et al.
9628507 April 18, 2017 Haq et al.
9633134 April 25, 2017 Ross
9635039 April 25, 2017 Islam et al.
9641546 May 2, 2017 Manni et al.
9654485 May 16, 2017 Neumann
9661009 May 23, 2017 Karandikar et al.
9661018 May 23, 2017 Aziz
9674298 June 6, 2017 Edwards et al.
9680862 June 13, 2017 Ismael et al.
9690606 June 27, 2017 Ha et al.
9690933 June 27, 2017 Singh et al.
9690935 June 27, 2017 Shiffer et al.
9690936 June 27, 2017 Malik et al.
9736179 August 15, 2017 Ismael
9747446 August 29, 2017 Pidathala et al.
9756074 September 5, 2017 Aziz et al.
9773112 September 26, 2017 Rathor et al.
9781144 October 3, 2017 Otvagin et al.
9787700 October 10, 2017 Amin et al.
9787706 October 10, 2017 Otvagin et al.
9792196 October 17, 2017 Ismael et al.
9824209 November 21, 2017 Ismael et al.
9824211 November 21, 2017 Wilson
9824216 November 21, 2017 Khalid et al.
9825976 November 21, 2017 Gomez et al.
9825989 November 21, 2017 Mehra et al.
9838408 December 5, 2017 Karandikar et al.
9838411 December 5, 2017 Aziz
9838416 December 5, 2017 Aziz
9838417 December 5, 2017 Khalid et al.
9846776 December 19, 2017 Paithane et al.
9876701 January 23, 2018 Caldejon et al.
9888016 February 6, 2018 Amin et al.
9888019 February 6, 2018 Pidathala et al.
9910988 March 6, 2018 Vincent et al.
9912644 March 6, 2018 Cunningham
9912684 March 6, 2018 Aziz et al.
9912691 March 6, 2018 Mesdaq et al.
9912698 March 6, 2018 Thioux et al.
9916440 March 13, 2018 Paithane et al.
9921978 March 20, 2018 Chan et al.
10033759 July 24, 2018 Kabra et al.
10176095 January 8, 2019 Ferguson
10191858 January 29, 2019 Tsirkin
20020013802 January 31, 2002 Mori et al.
20040025016 February 5, 2004 Focke
20060075252 April 6, 2006 Kallahalla
20060112416 May 25, 2006 Ohta et al.
20060130060 June 15, 2006 Anderson et al.
20060236127 October 19, 2006 Kurien et al.
20060248528 November 2, 2006 Oney et al.
20070006226 January 4, 2007 Hendel
20070055837 March 8, 2007 Rajagopal
20070094676 April 26, 2007 Fresko
20070143565 June 21, 2007 Corrigan
20070250930 October 25, 2007 Aziz et al.
20070300227 December 27, 2007 Mall et al.
20080005782 January 3, 2008 Aziz
20080065854 March 13, 2008 Schoenberg et al.
20080123676 May 29, 2008 Cummings et al.
20080127348 May 29, 2008 Largman
20080184367 July 31, 2008 McMillan et al.
20080184373 July 31, 2008 Traut
20080222729 September 11, 2008 Chen
20080235793 September 25, 2008 Schunter et al.
20080244569 October 2, 2008 Challener et al.
20080294808 November 27, 2008 Mahalingam et al.
20080320594 December 25, 2008 Jiang
20090007100 January 1, 2009 Field et al.
20090036111 February 5, 2009 Danford et al.
20090044024 February 12, 2009 Oberheide et al.
20090044274 February 12, 2009 Budko
20090089860 April 2, 2009 Forrester et al.
20090089879 April 2, 2009 Wang
20090106754 April 23, 2009 Liu et al.
20090113425 April 30, 2009 Ports et al.
20090158432 June 18, 2009 Zheng et al.
20090172661 July 2, 2009 Zimmer et al.
20090198651 August 6, 2009 Shiffer et al.
20090198670 August 6, 2009 Shiffer et al.
20090198689 August 6, 2009 Frazier et al.
20090199274 August 6, 2009 Frazier et al.
20090204964 August 13, 2009 Foley
20090254990 October 8, 2009 McGee
20090276771 November 5, 2009 Nickolov et al.
20090320011 December 24, 2009 Chow et al.
20090328221 December 31, 2009 Blumfield et al.
20100030996 February 4, 2010 Butler, II
20100031360 February 4, 2010 Seshadri et al.
20100043073 February 18, 2010 Kuwamura
20100100718 April 22, 2010 Srinivasan
20100115621 May 6, 2010 Staniford et al.
20100191888 July 29, 2010 Serebrin et al.
20100192223 July 29, 2010 Ismael et al.
20100235647 September 16, 2010 Buer
20100254622 October 7, 2010 Kamay et al.
20100299665 November 25, 2010 Adams
20100306173 December 2, 2010 Frank
20100306560 December 2, 2010 Bozek et al.
20110004935 January 6, 2011 Mothe et al.
20110022695 January 27, 2011 Dalal et al.
20110047542 February 24, 2011 Dang et al.
20110047544 February 24, 2011 Yehuda et al.
20110060947 March 10, 2011 Song et al.
20110078794 March 31, 2011 Manni et al.
20110078797 March 31, 2011 Beachem et al.
20110082962 April 7, 2011 Horovitz et al.
20110093951 April 21, 2011 Aziz
20110099633 April 28, 2011 Aziz
20110099635 April 28, 2011 Silberman et al.
20110153909 June 23, 2011 Dong
20110167422 July 7, 2011 Eom et al.
20110173213 July 14, 2011 Frazier et al.
20110219450 September 8, 2011 McDougal et al.
20110225624 September 15, 2011 Sawhney et al.
20110247072 October 6, 2011 Staniford et al.
20110296412 December 1, 2011 Banga et al.
20110296440 December 1, 2011 Laurich et al.
20110299413 December 8, 2011 Chatwani et al.
20110314546 December 22, 2011 Aziz et al.
20110321040 December 29, 2011 Sobel et al.
20110321165 December 29, 2011 Capalik et al.
20110321166 December 29, 2011 Capalik et al.
20120011508 January 12, 2012 Ahmad
20120047576 February 23, 2012 Do et al.
20120117652 May 10, 2012 Manni et al.
20120131156 May 24, 2012 Brandt et al.
20120144489 June 7, 2012 Jarrett et al.
20120159454 June 21, 2012 Barham et al.
20120174186 July 5, 2012 Aziz et al.
20120174218 July 5, 2012 McCoy et al.
20120198514 August 2, 2012 McCune et al.
20120216046 August 23, 2012 McDougal et al.
20120216069 August 23, 2012 Bensinger
20120222114 August 30, 2012 Shanbhogue
20120222121 August 30, 2012 Staniford et al.
20120254993 October 4, 2012 Sallam
20120254995 October 4, 2012 Sallam
20120255002 October 4, 2012 Sallam
20120255003 October 4, 2012 Sallam
20120255012 October 4, 2012 Sallam
20120255015 October 4, 2012 Sahita et al.
20120255016 October 4, 2012 Sallam
20120255017 October 4, 2012 Sallam
20120255021 October 4, 2012 Sallam
20120260304 October 11, 2012 Morris
20120260345 October 11, 2012 Quinn et al.
20120265976 October 18, 2012 Spiers et al.
20120291029 November 15, 2012 Kidambi et al.
20120297057 November 22, 2012 Ghosh et al.
20120311708 December 6, 2012 Agarwal et al.
20120317566 December 13, 2012 Santos et al.
20120331553 December 27, 2012 Aziz et al.
20130007325 January 3, 2013 Sahita et al.
20130036470 February 7, 2013 Zhu et al.
20130036472 February 7, 2013 Aziz
20130047257 February 21, 2013 Aziz
20130055256 February 28, 2013 Banga et al.
20130086235 April 4, 2013 Ferris
20130086299 April 4, 2013 Epstein
20130091571 April 11, 2013 Lu
20130111593 May 2, 2013 Shankar et al.
20130117741 May 9, 2013 Prabhakaran et al.
20130117848 May 9, 2013 Golshan et al.
20130117849 May 9, 2013 Golshan et al.
20130159662 June 20, 2013 Iyigun et al.
20130179971 July 11, 2013 Harrison
20130191924 July 25, 2013 Tedesco et al.
20130227680 August 29, 2013 Pavlyushchik
20130227691 August 29, 2013 Aziz et al.
20130247186 September 19, 2013 LeMasters
20130282776 October 24, 2013 Durrant et al.
20130283370 October 24, 2013 Vipat et al.
20130291109 October 31, 2013 Staniford et al.
20130298243 November 7, 2013 Kumar et al.
20130298244 November 7, 2013 Kumar et al.
20130312098 November 21, 2013 Kapoor et al.
20130312099 November 21, 2013 Edwards et al.
20130318038 November 28, 2013 Shiffer et al.
20130318073 November 28, 2013 Shiffer et al.
20130325791 December 5, 2013 Shiffer et al.
20130325792 December 5, 2013 Shiffer et al.
20130325871 December 5, 2013 Shiffer et al.
20130325872 December 5, 2013 Shiffer et al.
20130326625 December 5, 2013 Anderson et al.
20130333033 December 12, 2013 Khesin
20130333040 December 12, 2013 Diehl et al.
20130347131 December 26, 2013 Mooring
20140006734 January 2, 2014 Li et al.
20140019963 January 16, 2014 Deng et al.
20140032875 January 30, 2014 Butler
20140075522 March 13, 2014 Paris et al.
20140089266 March 27, 2014 Une et al.
20140096134 April 3, 2014 Barak et al.
20140115578 April 24, 2014 Cooper et al.
20140115652 April 24, 2014 Kapoor et al.
20140130158 May 8, 2014 Wang et al.
20140137180 May 15, 2014 Lukacs et al.
20140157407 June 5, 2014 Krishnan et al.
20140181131 June 26, 2014 Ross
20140189687 July 3, 2014 Jung et al.
20140189866 July 3, 2014 Shiffer et al.
20140189882 July 3, 2014 Jung et al.
20140208123 July 24, 2014 Roth et al.
20140230024 August 14, 2014 Uehara et al.
20140237600 August 21, 2014 Silberman et al.
20140245423 August 28, 2014 Lee
20140259169 September 11, 2014 Harrison
20140280245 September 18, 2014 Wilson
20140283037 September 18, 2014 Sikorski et al.
20140283063 September 18, 2014 Thompson et al.
20140289105 September 25, 2014 Sirota et al.
20140304819 October 9, 2014 Ignatchenko et al.
20140310810 October 16, 2014 Brueckner et al.
20140325644 October 30, 2014 Oberg et al.
20140337836 November 13, 2014 Ismael
20140344926 November 20, 2014 Cunningham et al.
20140351810 November 27, 2014 Pratt et al.
20140359239 December 4, 2014 Hiremane et al.
20140380473 December 25, 2014 Bu et al.
20140380474 December 25, 2014 Paithane et al.
20150007312 January 1, 2015 Pidathala et al.
20150013008 January 8, 2015 Lukacs et al.
20150095661 April 2, 2015 Sell
20150096022 April 2, 2015 Vincent et al.
20150096023 April 2, 2015 Mesdaq et al.
20150096024 April 2, 2015 Haq et al.
20150096025 April 2, 2015 Ismael
20150121135 April 30, 2015 Pape
20150128266 May 7, 2015 Tosa
20150172300 June 18, 2015 Cochenour
20150180886 June 25, 2015 Staniford et al.
20150186645 July 2, 2015 Aziz et al.
20150199514 July 16, 2015 Tosa et al.
20150199531 July 16, 2015 Ismael et al.
20150199532 July 16, 2015 Ismael et al.
20150220735 August 6, 2015 Paithane et al.
20150244732 August 27, 2015 Golshan et al.
20150304716 October 22, 2015 Sanchez-Leighton
20150317495 November 5, 2015 Rodgers et al.
20150318986 November 5, 2015 Novak
20150372980 December 24, 2015 Eyada
20160004869 January 7, 2016 Ismael et al.
20160006756 January 7, 2016 Ismael et al.
20160044000 February 11, 2016 Cunningham
20160048680 February 18, 2016 Lutas et al.
20160057123 February 25, 2016 Jiang et al.
20160127393 May 5, 2016 Aziz et al.
20160191547 June 30, 2016 Zafar
20160191550 June 30, 2016 Ismael et al.
20160261612 September 8, 2016 Mesdaq et al.
20160285914 September 29, 2016 Singh et al.
20160301703 October 13, 2016 Aziz
20160335110 November 17, 2016 Paithane et al.
20160371105 December 22, 2016 Sieffert et al.
20170083703 March 23, 2017 Abbasi et al.
20170124326 May 4, 2017 Wailly
20170213030 July 27, 2017 Mooring et al.
20170344496 November 30, 2017 Chen et al.
20170364677 December 21, 2017 Soman et al.
20180013770 January 11, 2018 Ismael
20180048660 February 15, 2018 Paithane et al.
Foreign Patent Documents
2011/112348 September 2011 WO
2012/135192 October 2012 WO
2012/154664 November 2012 WO
2012/177464 December 2012 WO
2013/067505 May 2013 WO
2013/091221 June 2013 WO
2014/004747 January 2014 WO
Other references
  • U.S. Appl. No. 15/199,873, filed Jun. 30, 2016 Non-Final Office Action dated Feb. 9, 2018.
  • U.S. Appl. No. 15/199,876, filed Jun. 30, 2016 Non-Final Office Action dated Jan. 10, 2018.
  • U.S. Appl. No. 15/197,634, filed Jun. 29, 2016 Notice of Allowance dated Apr. 18, 2018.
  • U.S. Appl. No. 15/199,873, filed Jun. 30, 2016 Final Office Action dated Sep. 10, 2018.
  • U.S. Appl. No. 15/199,876, filed Jun. 30, 2016 Final Office Action dated Jul. 5, 2018.
  • U.S. Appl. No. 15/199,882, filed Jun. 30, 2016 Advisory Action dated Nov. 8, 2018.
  • U.S. Appl. No. 15/199,882, filed Jun. 30, 2016 Final Office Action dated Aug. 31, 2018.
  • U.S. Appl. No. 15/199,882, filed Jun. 30, 2016 Non-Final Office Action dated Apr. 5, 2018.
  • U.S. Appl. No. 15/199,882, filed Jun. 30, 2016 Non-Final Office Action dated Dec. 20, 2018.
  • U.S. Appl. No. 15/199,871, filed Jun. 30, 2016.
  • U.S. Appl. No. 15/199,873, filed Jun. 30, 2016.
  • U.S. Appl. No. 15/199,876, filed Jun. 30, 2016.
  • U.S. Appl. No. 15/199,882, filed Jun. 30, 2016.
Patent History
Patent number: 10395029
Type: Grant
Filed: Jun 30, 2016
Date of Patent: Aug 27, 2019
Assignee: FireEye, Inc. (Milpitas, CA)
Inventor: Udo Steinberg (Braunschweig)
Primary Examiner: Piotr Poltorak
Application Number: 15/199,871
Classifications
Current U.S. Class: Digital Data Processing System Initialization Or Configuration (e.g., Initializing, Set Up, Configuration, Or Resetting) (713/1)
International Classification: G06F 21/55 (20130101); G06F 21/56 (20130101); G06F 9/455 (20180101);