Method and Apparatus for Network Fraud Detection and Remediation Through Analytics
A system and method for assessing the identity fraud risk of an entity's (a user's, computer process's, or device's) behavior within a computer network and then taking appropriate action. The system uses real-time machine learning for its assessment. It records the entity's log-in behavior (conditions at log-in) and behavior once logged in to create an entity profile that helps identify behavior patterns. The system compares new entity behavior with the entity profile to determine a risk score and a confidence level for the behavior. If the risk score and confidence level indicate a credible identity fraud risk at log-in, the system can require more factors of authentication before log-in succeeds. If the system detects risky behavior after log-in, it can take remedial action such as ending the entity's session, curtailing the entity's privileges, or notifying a human administrator.
Authentication and authorization are critical to the security of any computer network. Authentication verifies the identity of an entity (person, process, or device) that wants to access a network and its devices and services. Authorization determines what privileges an entity, once authenticated, has on a network during the entity's session, which lasts from log-on through log-off. Some privileged entities may access all resources, while other entities are limited to resources where they may do little to no harm. Without successful authentication and authorization, a network and its resources are open to fraudulent intrusion.
Authentication requires a method to identify and verify an entity that is requesting access. There is a wide variety of methods to do this that range from a simple username/password combination to smart cards, fingerprint readers, retinal scans, passcode generators, and other techniques. Multifactor authentication combines two or more of these authentication methods for added security. All of these methods have evolved to meet the constant challenge of defeating increasingly sophisticated unauthorized access.
Authorization occurs after authentication, typically through a network directory service or identity management service that lists privileges for each entity that may log into the network. The privileges define what network resources—services, computers, and other devices—the logged-in entity may access. When an entity requests to use a network resource, the resource checks the entity's privileges and then either grants or denies access accordingly. If an entity does not have access to a resource, the network may allow the entity to log in again, providing a new identity and verification to gain additional privileges in a new session.
Problems with Prior Art
As authentication methods increase in effectiveness, they often get harder to use and may still have weaknesses that allow unauthorized access.
The standard authentication method of username and password is very easy to use—it requires only the simple text entry of two values—but it is also not very secure. It is often easy for an identity thief to guess or steal the username/password combination.
Requiring smart card insertion is more secure than a username/password combination, as is biometric authentication such as reading a fingerprint or scanning a retina. But these methods require special readers or scanners plugged into a log-in device that can be expensive or inconvenient to attach. They can also be defeated by a determined identity thief who can steal a card or copy a fingerprint or retina pattern. These authentication methods have an added disadvantage: they do not work for non-human entities such as a computer or a service.
Multifactor Authentication
Multifactor authentication (MFA) increases security by requiring two or more different authentication methods, such as a username/password combination followed by a smart-card insertion or a network request to the user's cell phone to confirm authentication. Authentication security increases substantially with each added factor.
That said, MFA is not foolproof. It is still possible to steal or phish for data and devices necessary to satisfy each factor. And MFA can be difficult to use for anyone trying to log into a network. Each added factor takes added time and effort before a session starts. It is especially time-consuming when a user has to find one or more devices like a cell phone that may require its own authentication and then confirmation for network authentication. If a user has to perform multiple authentications within a session (to gain extra authorization or use special services, for example), it compounds MFA's difficulty.
Adaptive MFA
Adaptive MFA is one solution to the difficulty of using MFA. A network with adaptive MFA can change its authentication requirements depending on detected conditions at log-in. If the conditions indicate a secure environment, the network can require minimal authentication factors to make authentication simple for the entity. If conditions indicate a possible security threat, the network can require additional factors.
For example, if an entity is logging in from a known IP address with a single correct attempt at a username/password combination, the network may require no more authentication than that. If the entity is logging in from an unknown IP address with multiple unsuccessful attempts at a username/password combination before getting it right, the network might require a smart card in addition before authentication is successful.
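The static rule described in this example can be sketched as follows. This is an illustrative sketch only; the trusted address prefixes, failure threshold, and function names are assumptions for illustration, not part of any particular product.

```python
# Hypothetical sketch of static, rule-based adaptive MFA. The trusted address
# prefixes and the failure threshold are illustrative assumptions.

KNOWN_IP_PREFIXES = ("10.1.", "192.168.5.")  # assumed trusted address ranges

def required_factors(source_ip: str, failed_attempts: int) -> list:
    """Return the authentication factors a static rule set might demand."""
    factors = ["password"]
    known_ip = source_ip.startswith(KNOWN_IP_PREFIXES)
    # Preset rule: an unknown address or repeated failures demands a second factor.
    if not known_ip or failed_attempts > 2:
        factors.append("smart_card")
    return factors

print(required_factors("10.1.4.7", failed_attempts=0))     # minimal factors
print(required_factors("203.0.113.9", failed_attempts=3))  # extra factor required
```

Note that the rule set is fixed at configuration time, which is exactly the limitation discussed next: the rules cannot adapt to an individual entity's history.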
Adaptive MFA is rule-based, though, which limits its effectiveness because those rules are static. Adaptive MFA can check conditions at log-in and change MFA requirements based on preset rules that grant access, but it is ignorant of an entity's history (its past behavior and whether that behavior was usual or unusual before the current login) and would not know how to act on that history even if it were known.
For example, if a user works a night shift and typically logs in from midnight to 8 a.m., preset rules might require additional authentication factors if login occurs outside the business hours of 9 a.m. to 5 p.m. The night-time user would always have to provide extra authentication even though his behavior is normal and predictable and there is no extra risk to his login.
Malicious Behavior within a Session
Once an entity has logged into a network, it can use any network resources for which it has privileges. If a malicious entity is fraudulently logged in, the entity may severely compromise confidential data, change network settings to aid further unauthorized intrusions, install and run malicious software, and cause significant damage to network resources.
This kind of malicious damage often remains undetected for long periods of time because entity behavior is usually unmonitored within a network. If behavior is monitored by security processes, those processes may follow rules meant to deter damaging actions, such as requiring further authentication, but the processes cannot judge whether the entity's overall actions are suspicious and cannot take remedial action.
Network security processes may notify a human administrator of a suspicious action such as an attempt to access extremely confidential information, but by the time a human can look into the entity's actions, it may be too late to take remedial action, especially if the entity is a computer process that acts swiftly. In many cases, malicious behavior remains undetected until a human administrator notices changes or until damage becomes so significant that it becomes readily apparent. By that time it is usually too late to take any kind of remedial action: the damage is done.
Machine Learning for Threat Analysis
Some network security systems may employ machine learning to analyze potential threats to a network. The machine learning may establish a risk assessment model that determines which entity events (such as log-in events) may pose a threat and which may not. An entity event or simply event is defined as any activity carried out by a user, device, or process that is detected and reported by network monitoring mechanisms. In addition to log-in events, other entity events detected and reported by network monitoring mechanisms include but are not limited to starting or ending applications, making requests of running applications, reading or writing files, changing an entity's authorization, monitoring network traffic, and logging out.
These systems typically work on historical event data collected after a user or a set of users gives permission for those events to be analyzed. The systems do not revise their risk assessment model with new events as they happen, so the model may be out-of-date and ineffective. The system may establish, for example, that logins from a specific location pose a threat, but if the network incorporates a branch office at that location, logins from that location may no longer pose a threat. The risk assessment model may not be updated in time to avoid a number of incorrectly evaluated login attempts.
Current machine learning systems may also require human supervision where humans evaluate and annotate a set of events before giving them to the machine learning system to “teach” the system which types of events are good and which are bad. This takes time and effort, and further ensures that the system's evaluation parameters will be out-of-date.
Current machine learning systems set up a risk assessment model for each event parameter (login time, for example, or location, or number of login attempts), evaluate each parameter of an event, and then aggregate the evaluations to determine total risk. Because the risk assessment does not consider an event's parameters in combination, it misses parameter combinations that might raise alarms even though the individual parameters may not seem risky—for example, a login time that does not seem risky in itself from a login location that also does not seem risky, but taken together raise alarms because logins do not typically occur at that time in that location.
Current machine learning systems typically apply their risk assessment model to a single network function, usually authentication. The model is not easily adapted to detect threats to other functions such as user activity within the network. Reworking the machine learning system to apply to other functions takes human administrative work that may prevent applying the system to those other functions.
SUMMARY OF THE INVENTION
Embodiments of this invention monitor and store in real time entity events such as network logins and significant user activities after login such as application and device use. An embodiment uses those events to learn standard entity behavior without human assistance, to detect possible risky behavior, and to take remedial action to prevent access or limit activity by a suspicious entity. Because embodiments do not require human assistance, they are much easier to use than prior art.
Each login event stored by an embodiment of the invention includes the conditions when the event occurred such as the device used to log in, location, date and time, and so on. Events after login may include the type of event, date and time, and any other data that might be pertinent to the event.
To analyze an entity's events, an embodiment of the invention builds an entity profile using an entity's live event stream and looks for patterns in the events to determine normal behavior patterns for the entity. An entity profile is a collection of an entity's past events graphed by their parameters in a multi-dimensional array as described below. Each time an embodiment of the invention notes a new entity event, it compares the event to the entity's behavior history as recorded in its entity profile to determine aberrant behavior and therefore increased risk.
When an embodiment of the invention compares an event to behavior history, the embodiment simultaneously considers multiple event parameters, which gives it more accuracy in determining aberrant behavior than prior art that considers one parameter at a time. An embodiment also uses each new entity event to keep the entity profile up-to-date, an improvement over prior art in which entity events are evaluated as historical data at infrequent intervals.
If an embodiment of the invention determines increased risk during or just prior to login (a new location, multiple password attempts, and/or an unusual time of day, for example), it can change login to require more authentication factors. It could, for example, normally require just a username and password for login, but if a login attempt looks risky based on behavior, it could also require using a smart card or even more for particularly risky attempts. If a login attempt looks unusually risky, it could even notify a human administrator.
If an embodiment of the invention notices unusual behavior after login, such as using unusual applications or accessing sensitive devices, it can take remedial action. It might, for example, terminate the entity's session, close access to certain resources, change the entity's authorization to limit it to a safe subset of privileges, or contact a human administrator. This is an improvement over prior art that focuses on policing a single type of entity activity, typically login.
An embodiment of the invention defines a set of rules that an administrator can easily adjust to set sensitivity to aberrant behavior and an embodiment's assessment of what it considers risky behavior. The administrator can also easily set the way an embodiment of the invention behaves when it detects risk—how it changes login requirements or authorization in risky situations, for example, or how it handles risky behavior within the network.
Prior Art Security
An access control service 11 authenticates entities and can change authentication factors at login and at other authentication events according to preset rules within the service. It checks entity authentication and authorization with a directory service 13, such as Active Directory®, that defines authentication requirements and authorization for each entity.
The access control service 11 reports login attempts to an event reporting agent 15. The agent 15 may collect other events within the network as described later. The agent 15 reports its collected events to an event logging service 17 that stores the details of the events for later retrieval.
A human administrator may use an admin web browser 19 as a console to manage and read data from the event logging service 17, the event reporting agent 15, the access control service 11, and the directory service 13.
Embodiments of the invention are illustrated by way of example and not by way of limitation in the figures of the accompanying drawings in which like references indicate similar elements. Note that references to “an” or “one” embodiment in this disclosure are not necessarily to the same embodiment, and such references mean “at least one.”
An embodiment of the invention operates within a computer network, web portal, or other computing environment that requires authentication and authorization to use the environment's resources.
Assisting Non-Invention Components 21
An event reporting agent 15 detects entity behavior and reports it to an embodiment of the invention as events, each event with a set of parameters. The event reporting agent 15 can be part of a larger identity service such as a commercially available product known as Centrify Server Suite® available from Centrify Corporation. Entity events typically come from an access control service 11 and can include:
a) Login events, which can include parameters such as the IP address of the device used, the type of device used, physical location, number of login attempts, date and time, and more.
b) Application access events, which can specify what application is used, application type, date and time of use, and more.
c) Privileged resource events such as launching an ssh session or an RDP session as an administrator.
d) Mobile device management events such as enrolling or un-enrolling a mobile device with an identity management service.
e) CLI command-use events such as UNIX commands or MS-DOS commands, which can specify the commands used, date and time of use, and more.
f) Authorization escalation events, such as logging in as a super-user in a UNIX environment, which can specify login parameters listed above.
g) Risk-based access feedback events, which report an embodiment of the invention's evaluations of the entity. For example, when the access control service 11 requests a risk evaluation from an embodiment of the invention at entity log-in, the action generates an event that contains the resulting evaluation and any resulting action based on the evaluation.
An access control service 11 authenticates entities and can change authentication factor requirements at login and at other authentication events. The access control service may be part of a larger identity service such as the Centrify Server Suite®.
A directory service 13 such as Active Directory® defines authentication requirements and authorization for each entity. The directory service may be part of a larger identity service such as the Centrify Server Suite®.
An admin web browser 19 provides a console that an administrator can use to control an embodiment of the invention.
Embodiment Components 22
An embodiment of the invention has five primary components. Four of these components reside in the embodiment's core 23 where they have secure access to each other:
The event ingestion service 25 accepts event data from the event reporting agent 15, filters out events that are malformed or irrelevant, deletes unnecessary event data, and converts event data into values that the risk assessment engine 27 can use.
The risk assessment engine 27 accepts entity events from the event ingestion service 25 and uses them to build an entity profile for each entity. Whenever requested, the risk assessment engine 27 can compare an event or attempted event to the entity's profile to determine a threat level for the event.
The streaming threat remediation engine 29 accepts a steady stream of events from the risk assessment engine 27. The streaming threat remediation engine 29 stores a rule queue. Each rule in the queue tests an incoming event and may take action if the rule detects certain conditions in the event. A rule may, for example, check the event type, contact the risk assessment engine 27 to determine risk for the event and, if fraud risk is high, require additional login or terminate an entity's session.
The risk assessment service 31 is a front end for the risk assessment engine 27. The service 31 allows components outside the embodiment core 23 to make authenticated connections to embodiment core components and then request service from the risk assessment engine 27. Service is typically something such as assessing risk for a provided event or for an attempted event such as log-in.
An embodiment of the invention has a fifth component that resides outside the embodiment core 23 where non-invention components 21 may easily access it:
The on-demand threat remediation engine 33 is very similar to the streaming threat remediation engine 29. It contains a rule queue. The rules here, though, test attempted events such as log-in requests or authorization changes that may require threat assessment before the requests are granted and the event takes place. An outside component such as the access control service 11 may contact the engine 33 with an attempted event. The engine 33 can request risk assessment from an embodiment of the invention through the risk assessment service 31.
The Event Ingestion Service 25
The event ingestion service 25 receives event data from the event reporting agent 15 through any of a variety of methods. It might, for example, subscribe to an event-reporting service maintained by the event reporting agent or query through an event-reporting API.
The event reporting agent 15 typically reports some events that are not of interest for entity risk analysis. They may be invalid events: events with a missing time stamp, with a missing or wrong version number, or not reported by a valid event reporting agent 15. Some event types may not be useful for entity behavior analysis: non-entity events such as cloud status reports, or entity events that report behavior not currently used for risk analysis, such as a financial billing event. The event ingestion service 25 is set to recognize these events and filter them out.
The event reporting agent 15 may also report useful events that may be in a format that is not usable by the risk assessment engine 27. The data in the event may be in text format, for example, or it may include information that has nothing to do with risk analysis. The event ingestion service 25 removes unusable data, converts data into values usable by the risk assessment engine 27, and passes the converted events on to the risk assessment engine 27.
To convert event data into values usable by the risk assessment engine 27, the event ingestion service 25 looks for applicable event attributes within the event. Some of those attributes have numerical values; others have categorical values such as device type. The event ingestion service 25 uses well-known statistical techniques such as one-hot encoding and binary encoding to convert categorical values into numerical values. The event ingestion service 25 then scales and normalizes numerical values using well-known statistical techniques so that the values fall within a consistent and centered range when plotted in an array.
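The conversions described above can be sketched as follows. The parameter names, the category set, and the scaling constants are illustrative assumptions; a real deployment would derive means and standard deviations from observed event data.

```python
# Minimal sketch of the described conversions: one-hot encoding for categorical
# parameters, standard scaling for numerical ones. Names and constants are
# illustrative assumptions.

DEVICE_TYPES = ["laptop", "phone", "server"]  # assumed known category set

def one_hot(value, categories):
    """Convert a categorical value into a one-hot numerical vector."""
    return [1.0 if value == c else 0.0 for c in categories]

def scale(value, mean, std):
    """Center and scale a numerical value so plotted values stay comparable."""
    return (value - mean) / std if std else 0.0

# An event with one categorical and one numerical parameter:
event = {"device_type": "phone", "seconds_past_midnight": 3600.0}
vector = one_hot(event["device_type"], DEVICE_TYPES)
# Assumed scaling constants: mean noon (43200 s), std of six hours (21600 s).
vector.append(scale(event["seconds_past_midnight"], mean=43200.0, std=21600.0))
print(vector)
```

The resulting vector is what the risk assessment engine 27 would plot as an event location in the entity profile array.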
The Risk Assessment Engine 27
The risk assessment engine 27 receives a stream of entity events from the event ingestion service 25. The engine 27 uses well-known unsupervised real-time machine learning techniques to build an entity profile for each entity with reported events and then, when requested, to determine unusual behavior on the part of that entity.
To build an entity profile, the risk assessment engine 27 plots each of an entity's events on a multi-dimensional array 83. The array 83 has an axis 85 for each type of event parameter. It could, for example, be a seven-dimensional array 83 with an axis 85 for each of an event's date, time, location latitude, location longitude, device type, IP address, and number of log-in attempts. The array 83 in practice may have many more dimensions to record other event parameter types.
An entity event's parameters are numerical values that represent the character of the parameter (the number of seconds past midnight for time of day, for example). The engine 27 plots the event location 91 in the entity profile array 83 using those parameter values. As the events accumulate in the array 83, clusters 93 of events with similar parameter values appear. Those clusters 93 represent typical behavior for the entity.
The risk assessment engine 27 detects those clusters 93. When another component (typically one of the two remediation engines 29 and 33) requests risk assessment for an event, the engine 27 checks the event location's 91 proximity to existing clusters 93. If an event is too far from a cluster, it is an anomaly 95 because its parameters show unusual behavior by the entity.
The risk assessment engine 27 assigns a risk score and confidence score for an assessed event. The risk score is based on the event location's 91 distance from existing clusters 93—the further the distance, the higher the risk score. The confidence score is based on the number of events recorded in an entity's profile and the length of time over which the events have been reported—more events over a greater number of days provides more confidence because there is more data to analyze and a greater chance to detect behavior patterns that vary over time. Fewer events over a shorter number of days provides less confidence in behavior analysis.
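The distance-based risk score and event-volume-based confidence score described above can be sketched as follows. The formulas, thresholds, and constants here are illustrative assumptions, not the patented method itself; they only show how distance to the nearest cluster and the volume and span of profiled events could yield the two scores.

```python
# Illustrative sketch: risk from distance to the nearest behavior cluster,
# confidence from the volume and time span of profiled events. All constants
# are assumptions for illustration.
import math

def risk_score(event_point, cluster_centers, distance_scale=1.0):
    """Higher score the farther the event lies from every known cluster."""
    nearest = min(math.dist(event_point, center) for center in cluster_centers)
    # Map a distance in [0, inf) to a score in [0, 1).
    return 1.0 - math.exp(-nearest / distance_scale)

def confidence_score(event_count, days_observed,
                     target_events=500, target_days=30):
    """More events over more days yields more confidence, capped at 1.0."""
    return min(1.0, event_count / target_events) * min(1.0, days_observed / target_days)

clusters = [(0.0, 0.0), (5.0, 5.0)]          # assumed cluster centers
print(risk_score((0.1, 0.1), clusters))      # near a cluster: low risk
print(risk_score((20.0, 20.0), clusters))    # far from all clusters: high risk
print(confidence_score(event_count=600, days_observed=45))  # full confidence
```

In this sketch an event far from every cluster corresponds to the anomaly 95 of the preceding paragraph, and a sparse or short-lived profile yields a low confidence score.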
The risk assessment engine 27 may use the risk and confidence scores to assign one of five fraud risk levels to the assessed event:
a) Unknown: there are not enough events in the entity profile over a long enough period of time to successfully determine fraud risk.
b) Normal: the event looks legitimate.
c) Low Risk: some aspects of the event are abnormal, but not many.
d) Medium Risk: some important aspects of the event are abnormal, some are not.
e) High Risk: many key aspects of the event are abnormal.
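A hypothetical mapping from the two scores to the five fraud risk levels listed above might look like the following sketch. The cutoff values are illustrative assumptions; scores are assumed to lie in [0, 1].

```python
# Hypothetical mapping from risk and confidence scores (assumed in [0, 1])
# to the five fraud risk levels. Cutoffs are illustrative assumptions.

def fraud_risk_level(risk, confidence):
    if confidence < 0.3:   # too little history to judge
        return "Unknown"
    if risk < 0.2:
        return "Normal"
    if risk < 0.5:
        return "Low Risk"
    if risk < 0.8:
        return "Medium Risk"
    return "High Risk"

print(fraud_risk_level(risk=0.9, confidence=0.1))  # Unknown: profile too thin
print(fraud_risk_level(risk=0.9, confidence=0.9))  # High Risk
```

Note that a high risk score alone is not enough: without sufficient confidence, the sketch reports "Unknown" rather than sounding an alarm.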
The risk assessment engine 27 can decay clusters 93 in the entity profile—that is, give older clusters 93 less weight in analysis and possibly remove them entirely if they get too old. This helps the accuracy of behavior analysis by accommodating changing entity behavior over time. For example, a user might move to a new country where his location, IP address, and other behavior parameters change. After long enough in the new country, new event clusters 93 appear in the user's profile while old clusters 93 fade and eventually disappear.
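Cluster decay of this kind is commonly implemented with an exponential weighting by age. The following sketch assumes a half-life and a removal floor; both constants are illustrative assumptions, as is the dictionary shape of a cluster record.

```python
# Sketch of time-based cluster decay: older clusters carry exponentially less
# weight and are dropped once their weight falls below a floor. The half-life
# and floor are illustrative assumptions.

HALF_LIFE_DAYS = 90.0   # assumed: a cluster's weight halves every 90 days
MIN_WEIGHT = 0.05       # assumed: clusters below this weight are removed

def decayed_weight(age_days):
    """Exponential decay of a cluster's analytic weight with age."""
    return 0.5 ** (age_days / HALF_LIFE_DAYS)

def prune_clusters(clusters):
    """Keep only clusters whose decayed weight is still significant."""
    return [c for c in clusters if decayed_weight(c["age_days"]) >= MIN_WEIGHT]

profile = [{"center": (0, 0), "age_days": 10},
           {"center": (5, 5), "age_days": 500}]
print(prune_clusters(profile))  # the 500-day-old cluster has faded away
```

Under these assumed constants, the recent cluster survives while the 500-day-old cluster (as in the move-to-a-new-country example above) is pruned from the profile.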
The risk assessment engine 27 can return an event's risk score, confidence score, and fraud risk level to the requester, which can take action if appropriate.
An administrator can control the risk assessment engine's 27 behavior through a console that is typically provided through a web browser 19 connected to the engine 27 or another part of an embodiment of the invention connected to the engine. The administrator can adjust behavior such as anomaly 95 detection, risk and confidence score assignment, and event decay time. The risk assessment engine 27 is also capable of adjusting itself as it learns more effective analysis techniques with repeated exposure to events.
The Streaming Threat Remediation Engine 29
The streaming threat remediation engine 29 accepts the stream of events that came from the event ingestion service 25 and passed through the risk assessment engine 27. The remediation engine 29 runs each event through a rule queue. Each rule is a piece of code that executes to test the event's attributes such as event type, time of execution, and others. A rule can request risk assessment of the event from the risk assessment engine 27 as an additional event attribute.
Depending on the results of the event's property tests (which can include testing the risk assessment attribute), a rule can take action or not. That action might be to execute an associated script. The script can work with other network components such as the access control service 11 or the directory service 13 to take remedial action for an event with assessed fraud risk. The script might log an entity out, for example, or change the entity's authorization level. The rule's action might also be to jump to another rule in the queue.
If a rule takes no action, the event passes to the next rule in the queue. Most events pass completely through the rule queue without triggering any action.
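The rule-queue behavior described above can be sketched as follows. The rule names, event fields, and action strings are illustrative assumptions; in a real embodiment an action would invoke a script that works with the access control service 11 or directory service 13.

```python
# Minimal sketch of a rule queue: each rule tests an event and may return an
# action; otherwise the event passes to the next rule. Names are illustrative
# assumptions.

def high_risk_app_rule(event):
    """Trigger on a risky application type with a high assessed risk score."""
    if event.get("app_type") == "admin_email" and event.get("risk", 0.0) > 0.8:
        return "terminate_session"
    return None  # no action; event passes to the next rule

def unusual_device_rule(event):
    """Trigger on an unrecognized device type."""
    if event.get("device_type") == "unknown":
        return "notify_admin"
    return None

RULE_QUEUE = [high_risk_app_rule, unusual_device_rule]

def run_queue(event):
    """Run an event through the queue; stop at the first rule that acts."""
    for rule in RULE_QUEUE:
        action = rule(event)
        if action is not None:
            return action
    return "no_action"  # most events pass through untouched

print(run_queue({"app_type": "admin_email", "risk": 0.95}))  # terminate_session
print(run_queue({"app_type": "calendar", "risk": 0.1}))      # no_action
```

As the text notes, most events fall through every rule and produce no action; only events matching a rule's conditions trigger remediation.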
An administrator can control the streaming threat remediation engine 29 through the engine's console on a web browser 19 or through other interfaces such as an API. The administrator may create rules, reorganize the rule queue, associate scripts to carry out remedial actions, and perform other administrative actions.
The Risk Assessment Service 31
Components outside the embodiment core 23—on another server, for example—cannot directly request risk assessment from the risk assessment engine 27. Outside access to the risk assessment engine 27 is important, though, for assessing attempted events such as log-in requests that are not yet granted and have not yet become internally processed events.
The risk assessment service 31 provides a front end for the risk assessment engine 27: it provides a contact point where external components can authenticate, establish a secure connection, and then request attempted event risk assessment from the risk assessment engine 27. The risk assessment service 31 converts data in the supplied attempted event into values that the risk assessment engine 27 can use, in the same way that the event ingestion service 25 converts streaming event data. After data conversion, the risk assessment service 31 passes attempted event risk evaluation requests on to the risk assessment engine 27 and returns the results to the requester.
The On-Demand Threat Remediation Engine 33
The on-demand threat remediation engine 33 is similar to the streaming threat remediation engine 29. It contains a rule queue that tests and carries out conditional actions that may include executing scripts. It has two principal differences, though: it resides outside the embodiment core 23 so that it is easily accessible to external components, and it handles attempted events (events awaiting permission to execute) rather than already-executed events.
Attempted events are typically authentication requests such as log-in requests that come in through the access control service 11. The request must wait for access control service 11 approval before it is granted and the log-in becomes an executed event. While the request is pending, the access control service 11 can contact the on-demand threat remediation engine 33 with the attempted event, which includes pertinent properties such as the time and place of the request, the originating device for the request, and so on.
The on-demand threat remediation engine 33 runs an attempted event through its rule queue just as the streaming threat remediation engine 29 runs an executed event through its rule queue. The rules in the queue test the attempted event's properties and may request risk assessment for some attempted events.
The on-demand threat remediation engine 33 contacts the risk assessment service 31 when it requests risk assessment. The risk assessment service 31 converts the attempted event into a form that the risk assessment engine 27 can treat as an executed event. When the risk assessment service 31 passes the attempted event on to the risk assessment engine 27, the assessment engine 27 compares the attempted event's event location 91 with clusters 93 in the requesting entity's profile just as it would for an executed event to determine risk and confidence scores and fraud risk. The assessment returns to the risk assessment service 31 and then back to the on-demand threat remediation engine 33. The rule that triggered the assessment may then take action, such as denying log-in through the access control service 11 if the attempted log-in's threat level is too high.
Handling an Executed Event
The user 35, who is logged into a web portal that incorporates the invention, starts 37 an administration application that can be used to examine other users' email. The user is a fraudulent user in a suspect location who is active during a time when the user account is not normally used.
The event reporting agent 15 reports 39 the application start event to the event ingestion service 25. The event contains among other things the application type, the user's location, and the date and time of the application start.
The event ingestion service 25 filters and converts 41 the event: the service 25 ensures that the event is not an extraneous event that shouldn't be analyzed, then converts the data in the event into a form that the risk assessment engine 27 can use.
The event ingestion service 25 sends 43 the converted event to the risk assessment engine 27.
The risk assessment engine 27 adds 45 the event to the user's entity profile, where the event's event location 91 is plotted in the profile's multiple-dimensional array 83.
The risk assessment engine 27 sends 47 the event to the streaming threat remediation engine 29.
The streaming threat remediation engine 29 runs 49 the event through the engine's 29 rule chain.
In the streaming threat remediation engine 29, the application type triggers 51 a request for risk and confidence scores for the event. It does so because one of the rules in the rule chain tests for application type, notices a high-risk application, and requests risk and confidence scores from the risk assessment engine 27 for the event.
The risk assessment engine 27 compares 53 the event location 91 to clusters 93 in the entity's profile and notices that the location and time are an aberration 95 because they are not usual for the user. The engine 27 calculates a high risk score because of that. Because (in this example) there are many events in the profile, the engine calculates a high confidence score.
The risk assessment engine 27 returns 55 the high risk and confidence scores to the streaming threat remediation engine 29.
In the streaming threat remediation engine 29, the high scores trigger 57 a script that requests user 35 disconnection from the access control service 11.
The access control service 11 disconnects 59 the user 35.
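The executed-event rule chain in steps 49 through 59 can be sketched as follows. This is a simplified illustration, not the patented implementation: it assumes rules are plain conditionals, that the rule chain receives callables standing in for the risk assessment engine 27 and access control service 11, and that the threshold values, the `HIGH_RISK_APPS` set, and all function names are hypothetical.

```python
# Illustrative sketch of the streaming threat remediation engine's rule
# chain (steps 49-59). Names, thresholds, and structure are assumptions.
HIGH_RISK_APPS = {"mail-admin"}  # hypothetical high-risk application types

def rule_chain(event, get_scores, disconnect):
    """Run an executed event through an ordered sequence of rules."""
    # Rule: a high-risk application type triggers a request for risk and
    # confidence scores (step 51), as from the risk assessment engine.
    if event["app_type"] in HIGH_RISK_APPS:
        risk, confidence = get_scores(event)
        # Rule: high scores trigger remediation (step 57), here a
        # disconnection request to the access control service (step 59).
        if risk > 0.8 and confidence > 0.8:
            disconnect(event["user"])
            return "disconnected"
    return "no action"
```

In use, `get_scores` would call into the risk assessment engine 27 and `disconnect` into the access control service 11; passing stubs makes the rule logic testable in isolation.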
Handling an Attempted Event
The user 35, a hacker attempting access from a suspicious location at an unusual time and not the real user, requests 61 log-in to a web portal that incorporates the invention. The log-in request goes to the portal's access control service 11.
The access control service 11 sends 63 the access attempt with the attempt parameters (including request location and date and time) to the on-demand threat remediation engine 33.
The on-demand threat remediation engine 33 runs 65 the access attempt event through the engine's 33 rule queue.
In the on-demand threat remediation engine 33, the event triggers a rule that recognizes that the event attempts access, which requires risk assessment, so the engine 33 requests 67 risk assessment of the access attempt from the risk assessment service 31.
The risk assessment service 31 converts 68 the data in the access attempt into a form the risk assessment engine 27 can use, then requests 69 risk assessment for the access attempt from the risk assessment engine 27.
The risk assessment engine 27 compares 71 the access attempt to access event clusters 93 in the entity's profile and notices that the location and time are not usual for the user 35. The engine 27 calculates 71 risk scores for the access attempt just as it would for an executed access event. In this case, it calculates high risk scores.
The risk assessment engine 27 returns 73 the high scores to the risk assessment service 31.
The risk assessment service 31 returns 75 the high scores to the on-demand threat remediation engine 33.
In the on-demand threat remediation engine 33, the returned high scores trigger an access denial that the engine 33 sends 77 to the access control service 11.
The access control service 11 denies 79 the user's 35 log-in request.
The event reporting agent 15 reports 81 the denied access event to the event ingestion service 25.
From this point on, the denied access event goes through an embodiment of the invention just as any other event would, as described previously. The event is recorded in the risk assessment engine's 27 multi-dimensional array 83 and passes through the streaming threat remediation engine 29 for possible action on the event.
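The attempted-event path in steps 61 through 79 differs from the executed-event path in that assessment happens before the event executes: the access control service holds the log-in until the on-demand engine answers. A minimal sketch, assuming the access control service calls a gate function before authenticating, with all names and thresholds hypothetical:

```python
# Illustrative sketch of the attempted-event path (steps 61-79): the
# log-in attempt is assessed before it is allowed to execute. The
# function name, threshold values, and callback shape are assumptions.
def check_login(attempt, assess_risk):
    """Return 'deny' or 'allow' for a log-in attempt before it executes."""
    # The on-demand engine obtains scores via the risk assessment
    # service, which treats the attempt like an executed event.
    risk, confidence = assess_risk(attempt)
    if risk > 0.8 and confidence > 0.8:
        return "deny"   # engine sends an access denial (step 77)
    return "allow"      # log-in proceeds to normal authentication
```

A variant of `check_login` could return a third outcome, such as requiring additional authentication factors, when the risk is elevated but confidence is low.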
Other Implementations of the Invention
The invention may be implemented in alternative ways. An embodiment of the invention may, for example, run within an organization's private network or across large interconnected networks. Embodiments of the invention may locate components in different places, either together within a core or scattered across various locations, and they may consolidate multiple components into a single component that performs the same functions as the consolidated components. Embodiments of the invention may use methods other than multi-dimensional arrays 83 to assess an event's possible threat.
An embodiment of the invention may be a machine-readable medium having stored thereon instructions which cause a processor to perform operations as described above. In other embodiments, the operations might be performed by specific hardware components that contain hardwired logic. Those operations might alternatively be performed by any combination of programmed computer components and custom hardware components.
A machine-readable medium may include any mechanism for storing or transmitting information in a form readable by any type of processing device.
Although specific examples of how the invention may be implemented are described, the invention is not limited by the specified examples, and is limited only by the scope of the following claims.
Claims
1. A system including an event reporting agent, an access control service, a directory service, and an administrative access portal for detecting and remediating fraudulent attempts to access a network, said system comprising:
- a) an event ingestion service which i) receives data corresponding to an event from said event reporting agent, ii) filters out malformed or irrelevant events corresponding to said received event data, and iii) prepares each event's data by deleting unnecessary event data and converting remaining event data, if necessary, into values;
- b) a risk assessment engine which i) receives said values from said event ingestion service and builds and periodically updates an entity profile for each entity using said values corresponding to said filtered and prepared event data for that entity, and ii) accepts requests from one of a streaming threat remediation engine and a risk assessment service to perform a risk assessment for an event by comparing said filtered and prepared event data to the entity's entity profile and returning a result of said risk assessment to one of said streaming threat remediation engine and risk assessment service;
- c) said streaming threat remediation engine which receives said filtered and prepared event data from said risk assessment engine and applies an ordered sequence of rules to said filtered and prepared event data, each rule testing each event for conditions that may require action, which action said streaming threat remediation engine then initiates if required;
- d) said risk assessment service which i) accepts authenticated connections from an on-demand threat remediation engine, ii) receives risk assessment requests for events or attempted events from said on-demand threat remediation engine, iii) requests said risk assessments from said risk assessment engine, and iv) returns said risk assessments for each request to said on-demand threat remediation engine;
- e) said on-demand threat remediation engine which i) receives data corresponding to requests for risk assessment of an external event or attempted external event and ii) applies said ordered sequence of rules to said data corresponding to said external event or attempted external event, each rule testing each external event or attempted external event for conditions that may require action, which action said on-demand threat remediation engine then initiates if required.
2. The system defined by claim 1 wherein said request for said risk assessment for said event from said risk assessment engine initiated by said streaming threat remediation engine, is followed by a further application of rules within said ordered sequence of rules for testing each event for conditions that may require action, which action said streaming threat remediation engine then initiates if required.
3. The system defined by claim 1 wherein said request for said risk assessment for said event from said risk assessment service initiated by said on-demand threat remediation engine for said event or attempted event, is followed by a further application of rules within said ordered sequence of rules for testing each event or attempted event for conditions that may require action, which action said on-demand threat remediation engine then initiates if required.
4. The system defined by claim 1 wherein said requests for risk assessment of said external event or attempted external event are initiated by said access control service, or said directory service.
5. A method for detecting and remediating risky entity activity in a network based on an executed entity event comprising:
- a) sending an entity event to an event ingestion service which determines whether said entity event is an event appropriate for analysis;
- b) if said event is appropriate for analysis, said entity event ingestion service converting said entity event's data into a form usable for analysis and sending said converted entity event to a risk assessment engine;
- c) said risk assessment engine receiving said converted entity event and using it to build and periodically update an entity profile for each entity by adding said converted entity event to previously converted entity events for the same entity;
- d) said risk assessment engine passing each of said received entity events along to a streaming threat remediation engine;
- e) said streaming threat remediation engine evaluating said entity event through an ordered sequence of rules;
- f) if said rule sequence detects a condition in said entity event that requires risk assessment, then said streaming threat remediation engine requesting a risk assessment for said entity event from said risk assessment engine;
- g) said risk assessment engine calculating a risk assessment score and confidence score by comparing said entity event to said entity profile and then providing said risk assessment score and confidence score to said streaming threat remediation engine;
- h) if said streaming threat remediation engine determines that said risk assessment score and confidence score constitute a threat to the network, then instructing an access control service to take appropriate action to mitigate said entity's activity.
6. The method defined by claim 5 wherein said requesting for said risk assessment is followed by a further application of rules within said ordered sequence of rules testing each event for conditions that may require action, which action said streaming threat remediation engine then initiates if required.
7. The method defined by claim 6 wherein said action initiated by said streaming threat remediation engine is instructing said access control service to take appropriate action to mitigate said entity's activity.
8. The method defined by claim 5 wherein said requesting for risk assessment of said executed entity event is initiated by said access control service, or a directory service.
9. A method for detecting and remediating a fraudulent attempt to access a network based on an attempted entity event comprising:
- a) sending an attempted entity event to an on-demand threat remediation engine which determines whether said entity event is an event which requires a threat assessment prior to being authorized;
- b) if said entity event requires said threat assessment, said on-demand threat remediation engine passing said entity event along with a risk assessment request to a risk assessment service;
- c) said risk assessment service converting values of said entity event into a form usable for risk assessment and passing said converted entity event to a risk assessment engine with a request to assess risk for said event;
- d) said risk assessment engine receiving said converted entity event and comparing said entity event to an entity profile created for each entity with reported entity events, forming a risk assessment score and confidence score and providing said risk assessment score and confidence score to said risk assessment service;
- e) said risk assessment service sending said risk assessment score and confidence score to said on-demand threat remediation engine.
10. The method defined by claim 9 wherein said request for said risk assessment is followed by a further application of rules within said ordered sequence of rules testing each event for conditions that may require action, which action said on-demand threat remediation engine then initiates if required.
11. The method defined by claim 9 wherein said action initiated by said on-demand threat remediation engine is instructing said access control service to take appropriate action to mitigate said attempted entity event.
12. The method defined by claim 9 wherein said request for risk assessment of said attempted entity event is initiated by said access control service or a directory service.
Type: Application
Filed: Sep 13, 2017
Publication Date: Mar 14, 2019
Inventors: Yanlin Wang (Cupertino, CA), Weizhi Li (Sunnyvale, CA)
Application Number: 15/703,943