
WO2018217259A2 - Peer-based abnormal host detection for enterprise security systems - Google Patents


Info

Publication number
WO2018217259A2
Authority
WO
WIPO (PCT)
Prior art keywords
host
events
behavior
hosts
event
Prior art date
Application number
PCT/US2018/019829
Other languages
French (fr)
Other versions
WO2018217259A3 (en)
Inventor
Zhengzhang CHEN
Luan Tang
Zhichun Li
Cheng Cao
Original Assignee
Nec Laboratories America, Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from US15/902,432 external-priority patent/US10476754B2/en
Priority claimed from US15/902,369 external-priority patent/US10476753B2/en
Priority claimed from US15/902,318 external-priority patent/US10367842B2/en
Application filed by Nec Laboratories America, Inc filed Critical Nec Laboratories America, Inc
Publication of WO2018217259A2 publication Critical patent/WO2018217259A2/en
Publication of WO2018217259A3 publication Critical patent/WO2018217259A3/en


Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L63/00Network architectures or network communication protocols for network security
    • H04L63/14Network architectures or network communication protocols for network security for detecting or protecting against malicious traffic
    • H04L63/1408Network architectures or network communication protocols for network security for detecting or protecting against malicious traffic by monitoring network traffic
    • H04L63/1425Traffic logging, e.g. anomaly detection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F21/00Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F21/50Monitoring users, programs or devices to maintain the integrity of platforms, e.g. of processors, firmware or operating systems
    • G06F21/55Detecting local intrusion or implementing counter-measures
    • G06F21/552Detecting local intrusion or implementing counter-measures involving long-term monitoring or reporting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F2218/00Aspects of pattern recognition specially adapted for signal processing
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/40Scenes; Scene-specific elements in video content
    • G06V20/44Event detection

Definitions

  • FIG. 1 is a block/flow diagram directed to an automatic security intelligence system architecture in accordance with the present principles
  • FIG. 2 is a block/flow diagram directed to an intrusion detection engine architecture in accordance with the present principles
  • FIG. 3 is a block/flow diagram directed to a host level analysis module architecture in accordance with the present principles
  • FIG. 4 is a block/flow diagram directed to a method for performing host level risk analysis in accordance with the present principles
  • FIG. 5 is a block diagram directed to an intrusion detection system architecture in accordance with the present principles.
  • FIG. 6 is a block diagram directed to a processing system in accordance with the present principles.
  • Embodiments of the present invention provide host-level anomaly detection for large systems.
  • surveillance agents may be deployed on each host to automatically monitor the system's activity such as, e.g., active processes, file accesses, and socket connections.
  • the present embodiments use this monitored information to identify the anomaly status of each monitored host system.
  • the present embodiments first determine a behavior profile for each host based on that host's previous recorded behavior. Other hosts having similar behavior profiles (referred to herein as “peer hosts”) are then identified and monitored together going forward. An anomaly score for each host can then be determined based on how closely it tracks the behavior of its peer hosts, with greater divergence from the behavior of the peer hosts corresponding to a larger anomaly score. This reflects the understanding that hosts which have similar behavior histories can be expected to behave similarly in the future. The present embodiments can thereby determine the anomaly status of a given host in an unsupervised manner, with a large volume of data, and in a noisy, dynamic environment that defies typical statistical assumptions.
  • an automatic security intelligence system (ASI) architecture is shown.
  • The ASI system includes three major components: agents 10, installed on each machine of an enterprise network to collect operational data; a backend server 20, which receives data from the agents 10, pre-processes the data, and sends the pre-processed data to an analysis server 30; and the analysis server 30, which runs the security application program to analyze the data.
  • Each agent 10 includes an agent manager 11, an agent updater 12, and agent data 13, which in turn may include information regarding active processes, file access, net sockets, number of instructions per cycle, and host information.
  • the backend server 20 includes an agent updater server 21 and surveillance data storage.
  • Analysis server 30 includes intrusion detection 31, security policy compliance assessment 32, incident backtrack and system recovery 33, and centralized threat search and query 34.
  • There are five modules in the intrusion detection engine 31: a data distributor 41 that receives the data from the backend server 20 and distributes the corresponding data to the network level module 42 and the host level module 43; a network analysis module 42 that processes the network communications (including TCP and UDP) and detects abnormal communication events; a host level analysis module 43 that processes host level events, including user-to-process events, process-to-file events, and user-to-registry events; an anomaly fusion module 44 that integrates network level anomalies and host level anomalies and refines the results for trustworthy intrusion events; an alert ranking and attack scenario reconstruction module 46 that uses both temporal and content correlations to rank alerts and reconstruct attack scenarios; and a visualization module 45 that outputs the detection results to end users.
  • the detectors that feed the intrusion detection system 31 may report alerts with very different semantics. For example, network detectors monitor the topology of network connections and report an alert if a suspicious client suddenly connects to a stable server. Meanwhile, process-file detectors may generate an alert if an unseen process accesses a sensitive file.
  • the intrusion detection system 31 integrates alerts regardless of their respective semantics to overcome the problem of heterogeneity.
  • Process-to-file anomaly detection 302 takes host level process-to-file events from the data distributor 41 as input and discovers the abnormal process-to-file events as an output.
  • User-to-process anomaly detection 304 takes all streaming process events as input, models each user's behavior at the process level, and identifies the suspicious processes run by each user as output.
  • Process-to-process anomaly detection 306 takes all streaming process events as input, models each process's execution behavior, and identifies the suspicious process execution event.
  • Process signature anomaly detection 308 takes process names and signatures as input and detects processes with suspicious signatures.
  • Malicious process path discovery 310 takes current active processes as path starting points and tracks all the possible process paths by combining the incoming and previous events in a user-defined time window.
  • the present embodiments focus specifically on a host level risk analysis that determines, on a per-host basis, an anomaly score for the host that characterizes how much the host has deviated from expected behavior.
  • Block 402 performs host level behavioral modeling. This modeling may be performed based solely on historical events recorded at the host.
  • Block 404 leverages the results of host level behavioral modeling to find a group of peer hosts that share similar behaviors with the host in question. It should be noted that a peer host is determined solely on the basis of host behavior rather than on host role or network connections.
  • Block 406 then identifies anomalies in the behavior of the host based on the ongoing behaviors of the peer hosts. This may be performed over a period of time (e.g., a week).
  • the historical events used by host level behavior modeling 402 may include, for example, network events and process-level events.
  • ASI agents are generally light-weight.
  • Table 1 shows an exemplary list of network events from 11:30 a.m. to 12:05 p.m. on February 29th, 2016. These network events can be classified into two categories based on the dst-ip: if the dst-ip is in the range of the enterprise network's IP addresses (2.15.xx.xx), the network event is an inside connection between two hosts of the enterprise network. If the dst-ip is not in the range, it is an outside connection between an internal host and an external host. In Table 1, e1, e3, e5 and e6 are inside connections and e2 and e4 are outside connections.
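The inside/outside classification described above can be sketched as follows; the `/16` prefix chosen for the enterprise range (2.15.xx.xx) and the function name are assumptions for illustration:

```python
import ipaddress

# Enterprise address range from the example above (2.15.xx.xx); the /16
# prefix length is an assumption for this sketch.
ENTERPRISE_NET = ipaddress.ip_network("2.15.0.0/16")

def classify_event(dst_ip: str) -> str:
    """Label a network event 'inside' if its destination IP belongs to the
    enterprise network, and 'outside' otherwise."""
    return "inside" if ipaddress.ip_address(dst_ip) in ENTERPRISE_NET else "outside"
```

An event whose dst-ip falls within the configured range is an inside connection between two enterprise hosts; any other destination marks an outside connection.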
  • the object can be a file, another process or a socket that contains the connection information.
  • process-level events can be classified into one of three categories: process-file events, process-socket events, and process-process events.
  • Table 2 shows an exemplary list of process-level events from the same time window. The IP address is used as an identifier for the hosts. In Table 2, e1 and e5 are process-file events, e3 and e4 are process-socket events, and e2 is a process-process event.
  • the network events can be seen as external events and the process-level events can be treated as internal events.
  • internal events capture a single host's local behaviors and external events capture the interaction behaviors between multiple hosts.
  • For a given host h, a set of n events E_h = {e_0, e_1, ..., e_{n-1}} is monitored from h, including both network events and process-level events.
  • The network event data can be expressed as a collection of triples {<h, e_0, h_0'>, <h, e_1, h_1'>, ..., <h, e_i, h_i'>}.
  • Process-level event information can be expressed as a set of pairs {<h, e_0'>, <h, e_1'>, ..., <h, e_j'>}.
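The pair/triple representation above can be encoded directly; a minimal sketch with made-up host and event labels (all names are illustrative, not from the patent):

```python
from collections import namedtuple

# Network events as <host, event, destination-host> triples and
# process-level events as <host, event> pairs.
NetworkEvent = namedtuple("NetworkEvent", ["host", "event", "dst_host"])
ProcessEvent = namedtuple("ProcessEvent", ["host", "event"])

def events_for_host(h, network_data, process_data):
    """Collect E_h: all events monitored from host h, of both kinds."""
    return ([e for e in network_data if e.host == h] +
            [e for e in process_data if e.host == h])
```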
  • a host is then modeled as the context of all its process-level events and as the context of all hosts reached by its network events. All pairs of hosts and events are embedded into a common latent space where their co-occurrences are preserved. This is performed using text embedding methods that capture syntactic and semantic word relationships, for example by unsupervised learning of word embeddings by exploiting word co-occurrences.
  • A process-level event e on the host h can be modeled as the conditional probability P(h|e) of the host h given the event e, i.e., the probability that the event e is observed on the host h, via the following softmax function:

    P(h|e) = exp(v_h · v_e) / Σ_{h'' ∈ H} exp(v_{h''} · v_e)

  • where v_h and v_e are the embedding vectors for the host h and the event e, respectively, and H is the set of all hosts.
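A minimal numeric sketch of this softmax probability, with plain Python lists standing in for the embedding vectors (function and variable names are assumed):

```python
import math

def dot(a, b):
    # Inner product of two embedding vectors.
    return sum(x * y for x, y in zip(a, b))

def prob_host_given_event(v_h, v_e, all_host_vectors):
    """Softmax probability that event e (vector v_e) is observed on
    host h (vector v_h), normalized over all host vectors in H."""
    denom = sum(math.exp(dot(v, v_e)) for v in all_host_vectors)
    return math.exp(dot(v_h, v_e)) / denom
```

Because the denominator runs over every host, the probabilities for a fixed event sum to one across all hosts.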
  • A network event can be modeled as P(h'|h, e), the conditional probability of a host h' given a host h and an event e, i.e., the probability that the host h issues a network event e that connects to the host h'.
  • The network event conditional probability can be computed by:

    P(h'|h, e) = exp(v_{h'} · v_h) / Σ_{h'' ∈ H} exp(v_{h''} · v_h)

  • where v_h and v_{h'} are the embedding vectors for the hosts h and h', respectively, and H is the set of all hosts.
  • The embedding vectors are learned with negative sampling, by minimizing an objective of the form:

    O = − Σ_{(h,e) ∈ D_P} log σ(v_h · v_e) − Σ_{(h̄,e) ∈ D_P'} log σ(−v_h̄ · v_e) − Σ_{(h,e,h') ∈ D_N} log σ(v_h · v_{h'}) − Σ_{(h̄,e,h̄') ∈ D_N'} log σ(−v_h̄ · v_h̄')

  • σ is the sigmoid function, D_P is the collection of pairs of process-level events, and D_N is the set of triples of network-level events.
  • (h̄, e, h̄') is a negative sample for network-level events, where h̄ and h̄' are the two hosts in the negative sample.
  • D_P' and D_N' are the two sets of negative samples constructed by a sampling scheme for process-level events and network-level events, respectively. Concretely, for each co-occurrence (h, e) ∈ D_P, k noise pairs (h_1, e), (h_2, e), ..., (h_k, e) are drawn.
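A sketch of the per-pair negative-sampling term, assuming a standard word2vec-style formulation: one positive co-occurrence (h, e) contributes −log σ(v_h · v_e), and each of its k sampled noise hosts contributes −log σ(−v_h̄ · v_e). All names here are illustrative:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def pair_loss(v_h, v_e, noise_host_vectors):
    """Negative-sampling loss for one observed (host, event) pair plus its
    sampled noise hosts. Lower loss means the pair is scored as more likely."""
    loss = -math.log(sigmoid(dot(v_h, v_e)))          # positive pair term
    for v_neg in noise_host_vectors:                  # k noise terms
        loss += -math.log(sigmoid(-dot(v_neg, v_e)))
    return loss
```

Summing this term over all pairs in D_P (and the analogous term over the triples in D_N) gives the full objective; gradient descent on the embedding vectors then decreases it.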
  • block 404 creates clusters of hosts, taking advantage of the fact that the learned embeddings for the hosts are their latent behavior representations in that space.
  • the following pseudo-code shows details of finding a group of peer hosts:
  • The community centroids C = {c_1, c_2, ..., c_K} can be determined.
  • The embedding vectors of hosts and events, V_H and V_E, and the community centroids C mutually enhance one another.
  • In the first step, the community centroids C and the community assignment l(V_H) are fixed, and the best embedding vectors V_H and V_E are learned.
  • In the second step, the embedding vectors V_H and V_E are fixed, and the best values for C and l(V_H) are learned.
  • The O_C term becomes a regularization term that makes the embedded vectors closer to their corresponding community centroid.
  • The optimization of the reduced objective function for the first step is as follows: given two sets of data samples, D_P and D_N, in each iteration, two mini-batches of process-level events and network-level events are sampled as D_bP and D_bN. Then sets of negative samples D_bP' and D_bN' are generated according to a noise distribution and two parameters that control the size of the negative samples, k_P and k_N.
  • Without a rule of thumb for choosing an optimal noise distribution, several may be tested empirically to select the best one when sampling.
  • One noise distribution that has been found to be effective samples with probability inversely proportional to the frequency of co-occurrence.
  • a suitable value for k P and k N is 5.
  • a gradient descent can then be used over V H and V E to get the best embedded vectors.
  • centroid c is then recalculated for all hosts in each group.
  • the values of the centroids are updated, taken as the geometric mean of the points that have the same label as the centroid.
  • The first and second steps are repeated iteratively until no host changes its group.
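The alternating assignment/update loop described above can be sketched as a k-means-style procedure over the host embedding vectors; the arithmetic-mean centroid update and all names are assumptions for illustration:

```python
def cluster_hosts(host_vectors, centroids, max_iters=100):
    """Assign each host embedding to its nearest community centroid, then
    recompute centroids as the mean of their members, repeating until no
    host changes groups. Returns the final labels and centroids."""
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))

    labels = None
    for _ in range(max_iters):
        # Assignment step: each host joins the nearest community centroid.
        new_labels = [min(range(len(centroids)), key=lambda k: dist2(v, centroids[k]))
                      for v in host_vectors]
        if new_labels == labels:  # no host changed groups: converged
            break
        labels = new_labels
        # Update step: recompute each centroid from its current members.
        for k in range(len(centroids)):
            members = [v for v, lbl in zip(host_vectors, labels) if lbl == k]
            if members:
                centroids[k] = [sum(dim) / len(members) for dim in zip(*members)]
    return labels, centroids
```

In the full method the assignment step would alternate with re-learning the embeddings themselves; this sketch shows only the clustering half with the embeddings held fixed.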
  • Block 406 then uses the detected host peer groups to assess the host's anomaly level.
  • the peer groups reflect behavioral similarity between all hosts, based on a potentially very large corpus of historical data. Reviewing the host status over a particular time period, that host should still behave similarly to its peer hosts. If not, block 406 determines that the host has a suspicious anomaly status.
  • PE(h) is the set of peers found in the past and PE'(h) is the set of peers identified later on. From an event perspective, using the embedding vectors across all events collected from all hosts, it can be determined how different the events are between peer hosts.
  • E_h is the set of events monitored from host h and cos(e, e') is the cosine similarity between the embedding vectors of event e and event e'.
  • the distance between the event and its closest event on each peer host of h is computed, in terms of cosine similarity.
  • the average over all peer hosts is determined and the average of all events from one host is returned as the score for this host.
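The event-level comparison just described can be sketched as follows: for each event of host h, take the best cosine similarity against each peer's events, average over the peers, then average over all of h's events. A low score indicates divergence from the peer group; all names are assumed:

```python
import math

def cosine(a, b):
    # Cosine similarity between two event embedding vectors.
    num = sum(x * y for x, y in zip(a, b))
    den = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return num / den if den else 0.0

def peer_similarity_score(host_events, peer_event_sets):
    """host_events: list of event vectors for host h.
    peer_event_sets: one list of event vectors per peer host.
    Returns the average, over h's events, of the per-peer best-match
    similarity; lower values suggest a more anomalous host."""
    per_event = []
    for e in host_events:
        best = [max(cosine(e, e2) for e2 in peer_events)
                for peer_events in peer_event_sets if peer_events]
        per_event.append(sum(best) / len(best))
    return sum(per_event) / len(per_event)
```

An anomaly score could then be derived from this similarity, e.g. as its complement, so that hosts whose events no longer resemble their peers' events score higher.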
  • Embodiments described herein may be entirely hardware, entirely software or including both hardware and software elements.
  • the present invention is implemented in software, which includes but is not limited to firmware, resident software, microcode, etc.
  • Embodiments may include a computer program product accessible from a computer-usable or computer-readable medium providing program code for use by or in connection with a computer or any instruction execution system.
  • a computer- usable or computer readable medium may include any apparatus that stores, communicates, propagates, or transports the program for use by or in connection with the instruction execution system, apparatus, or device.
  • the medium can be magnetic, optical, electronic, electromagnetic, infrared, or semiconductor system (or apparatus or device) or a propagation medium.
  • the medium may include a computer-readable storage medium such as a semiconductor or solid state memory, magnetic tape, a removable computer diskette, a random access memory (RAM), a read-only memory (ROM), a rigid magnetic disk and an optical disk, etc.
  • Each computer program may be tangibly stored in a machine-readable storage media or device (e.g., program memory or magnetic disk) readable by a general or special purpose programmable computer, for configuring and controlling operation of a computer when the storage media or device is read by the computer to perform the procedures described herein.
  • the inventive system may also be considered to be embodied in a computer-readable storage medium, configured with a computer program, where the storage medium so configured causes a computer to operate in a specific and predefined manner to perform the functions described herein.
  • a data processing system suitable for storing and/or executing program code may include at least one processor coupled directly or indirectly to memory elements through a system bus.
  • the memory elements can include local memory employed during actual execution of the program code, bulk storage, and cache memories which provide temporary storage of at least some program code to reduce the number of times code is retrieved from bulk storage during execution.
  • I/O devices including but not limited to keyboards, displays, pointing devices, etc. may be coupled to the system either directly or through intervening I/O controllers.
  • Network adapters may also be coupled to the system to enable the data processing system to become coupled to other data processing systems or remote printers or storage devices through intervening private or public networks.
  • Modems, cable modem and Ethernet cards are just a few of the currently available types of network adapters.
  • the system 500 includes a hardware processor 502 and memory 504.
  • a number of functional modules are included that may, in some embodiments, be implemented as software that is stored in memory 504 and that is executed by hardware processor 502.
  • the functional modules may be implemented as one or more discrete hardware components in the form of, e.g., application specific integrated chips or field programmable gate arrays.
  • Still other embodiments may use both hardware and software forms for different functional modules, or may split the functions of a single functional module across both hardware and software components.
  • Host logging module 506 collects information from agents at various hosts and stores the information in memory 504. This can include historical event information, including process-level events and network events, as well as real-time event information.
  • Host behavior module 508 determines a host behavior profile for each host based on historical event information.
  • Peer host module 510 determines each host's peers based on a behavioral similarity. In particular, peer host module 510 maps each host's behavior profile into an embedding space and clusters hosts according to proximity in that space. Anomaly score module 512 then monitors incoming events over a period of time to reevaluate the host's behavior relative to that of its peers.
  • If the host's behavior diverges from the behavior of its peer hosts, anomaly score module 512 assigns a high value to the host's anomaly score. If the host's behavior conforms to the behavior of its peer hosts, anomaly score module 512 assigns a low value to the host's anomaly score.
  • Based on the outcome of the anomaly score module 512, a security module 514 performs manual or automated security actions in response to the ranked alerts and alert patterns. In particular, the security module 514 may have rules and policies that trigger when an anomaly score for a host exceeds a threshold.
  • the security module 514 may automatically trigger security management actions such as, e.g., shutting down devices, stopping or restricting certain types of network communication, raising alerts to system administrators, changing a security policy level, and so forth.
  • the security module 514 may also accept instructions from a human operator to manually trigger certain security actions in view of analysis of the anomalous host.
  • the processing system 600 includes at least one processor (CPU) 604 operatively coupled to other components via a system bus 602.
  • a first storage device 622 and a second storage device 624 are operatively coupled to system bus 602 by the I/O adapter 620.
  • the storage devices 622 and 624 can be any of a disk storage device (e.g., a magnetic or optical disk storage device), a solid state magnetic device, and so forth.
  • the storage devices 622 and 624 can be the same type of storage device or different types of storage devices.
  • a speaker 632 is operatively coupled to system bus 602 by the sound adapter 630.
  • a transceiver 642 is operatively coupled to system bus 602 by network adapter 640.
  • a display device 662 is operatively coupled to system bus 602 by display adapter 660.
  • a first user input device 652, a second user input device 654, and a third user input device 656 are operatively coupled to system bus 602 by user interface adapter 650.
  • the user input devices 652, 654, and 656 can be any of a keyboard, a mouse, a keypad, an image capture device, a motion sensing device, a microphone, a device incorporating the functionality of at least two of the preceding devices, and so forth. Of course, other types of input devices can also be used, while maintaining the spirit of the present principles.
  • the user input devices 652, 654, and 656 can be the same type of user input device or different types of user input devices.
  • the user input devices 652, 654, and 656 are used to input and output information to and from system 600.
  • processing system 600 may also include other elements (not shown), as readily contemplated by one of skill in the art, as well as omit certain elements.
  • various other input devices and/or output devices can be included in processing system 600, depending upon the particular implementation of the same, as readily understood by one of ordinary skill in the art.
  • various types of wireless and/or wired input and/or output devices can be used.
  • processors in various configurations can also be utilized as readily appreciated by one of ordinary skill in the art.
  • processing system 600 is readily contemplated by one of ordinary skill in the art given the teachings of the present principles provided herein.


Abstract

Systems and methods for determining a risk level of a host in a network include modeling (402) a target host's behavior based on historical events recorded at the target host. One or more original peer hosts having behavior similar to the target host's behavior are determined (404). An anomaly score for the target host is determined (406) based on how the target host's behavior changes relative to behavior of the one or more original peer hosts over time. A security management action is performed based on the anomaly score.

Description

PEER-BASED ABNORMAL HOST DETECTION FOR ENTERPRISE SECURITY
SYSTEMS
RELATED APPLICATION INFORMATION
[0001] This application claims priority to U.S. Provisional Application Serial No. 62/463,976, filed on February 27, 2017, U.S. Patent Application Serial No. 15/902,318, filed February 22, 2018, U.S. Patent Application Serial No. 15/902,369, filed February 22, 2018, and U.S. Patent Application Serial No. 15/902,432, filed February 22, 2018, all incorporated herein by reference in their entirety.
BACKGROUND
Technical Field
[0002] The present invention relates to host-level system behavior analysis and, more particularly, to the analysis of system behavior in comparison to systems having similar behavioral profiles.
Description of the Related Art
[0003] Enterprise networks are key systems in corporations and they carry the vast majority of mission-critical information. As a result of their importance, these networks are often the targets of attack. The behavior of individual systems within enterprise networks is therefore frequently monitored and analyzed to detect anomalous behavior as a step toward detecting attacks.
SUMMARY
[0004] A method for determining a risk level of a host in a network includes modeling a target host's behavior based on historical events recorded at the target host. One or more original peer hosts having behavior similar to the target host's behavior are determined. An anomaly score for the target host is determined using a processor based on how the target host's behavior changes relative to behavior of the one or more original peer hosts over time. A security management action is performed based on the anomaly score.
[0005] A system for determining a risk level of a host in a network includes a host behavior module configured to model a target host's behavior based on historical events recorded at the target host. A peer host module is configured to determine one or more original peer hosts having behavior similar to the target host's behavior. An anomaly score module includes a processor configured to determine an anomaly score for the target host based on how the target host's behavior changes relative to behavior of the one or more original peer hosts over time. A security module is configured to perform a security management action based on the anomaly score.
[0006] A method for modeling host behavior in a network includes determining a first probability function for observing each of a set of process-level events at a first host based on embedding vectors for the first event and the first host. A second probability function is determined for the first host issuing each of a set of network- level events connecting to a second host based on embedding vectors for the first host and the second host. The first and second probability functions are maximized to determine a set of likely process-level and network-level events for the first host. A security action is performed based on the modeled host behavior.
[0007] A system for modeling host behavior in a network includes a host behavior module that includes a processor configured to determine a first probability function for observing each of a set of process-level events at a first host based on embedding vectors for the first event and the first host, to determine a second probability function for the first host issuing each of a set of network-level events connecting to a second host based on embedding vectors for the first host and the second host, and to maximize the first and second probability functions to determine a set of likely process-level and network-level events for the first host. A security module is configured to perform a security action based on the modeled host behavior.
[0008] A method for detecting a host community includes modeling a target host's behavior based on historical events recorded at the target host. One or more original peer hosts having behavior similar to the target host's behavior are found by determining a distance in a latent space that embeds the historical events between events of the target host and events of the one or more original peer hosts. A security management action is performed based on behavior of the target host and the determined one or more original peer hosts.
[0009] A system for detecting host community includes a host behavior module configured to model a target host's behavior based on historical events recorded at the target host. A peer host module includes a processor configured to determine one or more original peer hosts having behavior similar to the target host's behavior by determining a distance in a latent space that embeds the historical events between events of the target host and events of the one or more original peer hosts. A security module is configured to perform a security management action based on behavior of the target host and the determined one or more original peer hosts.
[0010] These and other features and advantages will become apparent from the following detailed description of illustrative embodiments thereof, which is to be read in connection with the accompanying drawings.
BRIEF DESCRIPTION OF DRAWINGS

[0011] The disclosure will provide details in the following description of preferred embodiments with reference to the following figures wherein:
[0012] FIG. 1 is a block/flow diagram directed to an automatic security intelligence system architecture in accordance with the present principles;
[0013] FIG. 2 is a block/flow diagram directed to an intrusion detection engine architecture in accordance with the present principles;
[0014] FIG. 3 is a block/flow diagram directed to a host level analysis module architecture in accordance with the present principles;
[0015] FIG. 4 is a block/flow diagram directed to a method for performing host level risk analysis in accordance with the present principles;
[0016] FIG. 5 is a block diagram directed to an intrusion detection system architecture in accordance with the present principles; and
[0017] FIG. 6 is a block diagram directed to a processing system in accordance with the present principles.
[0018] DETAILED DESCRIPTION OF PREFERRED EMBODIMENTS
[0019] Embodiments of the present invention provide host-level anomaly detection for large systems. In a large security system, surveillance agents may be deployed on each host to automatically monitor the system's activity, such as active processes, file accesses, and socket connections. The present embodiments use this monitored information to identify the anomaly status of each monitored host system.
[0020] Toward that end, the present embodiments first determine a behavior profile for each host based on that host's previous recorded behavior. Other hosts having similar behavior profiles (referred to herein as "peer hosts") are then identified and monitored together going forward. An anomaly score for each host can then be determined based on how closely it tracks the behavior of its peer hosts, with greater divergence from the behavior of the peer hosts corresponding to a larger anomaly score. This reflects the understanding that hosts which have similar behavior histories can be expected to behave similarly in the future. The present embodiments can thereby determine the anomaly status of a given host in an unsupervised manner, with a large volume of data, and in a noisy, dynamic environment that defies typical statistical assumptions.
[0021] Referring now in detail to the figures, in which like numerals represent the same or similar elements, and initially to FIG. 1, an automatic security intelligence (ASI) system architecture is shown. The ASI system includes three major components: an agent 10 that is installed in each machine of an enterprise network to collect operational data; backend servers 20 that receive data from the agents 10, pre-process the data, and send the pre-processed data to an analysis server 30; and an analysis server 30 that runs the security application program to analyze the data.
[0022] Each agent 10 includes an agent manager 11, an agent updater 12, and agent data 13, which in turn may include information regarding active processes, file access, net sockets, number of instructions per cycle, and host information. The backend server 20 includes an agent updater server 21 and surveillance data storage. Analysis server 30 includes intrusion detection 31, security policy compliance assessment 32, incident backtrack and system recovery 33, and centralized threat search and query 34.
[0023] Referring now to FIG. 2, additional detail on intrusion detection 31 is shown. There are six modules in the intrusion detection engine: a data distributor 41 that receives the data from backend server 20 and distributes it to the network level module 42 and the host level module 43; a network analysis module 42 that processes the network communications (including TCP and UDP) and detects abnormal communication events; a host level analysis module 43 that processes host level events, including user-to-process events, process-to-file events, and user-to-registry events; an anomaly fusion module 44 that integrates network level anomalies and host level anomalies and refines the results for trustworthy intrusion events; an alert ranking and attack scenario reconstruction module 46 that uses both temporal and content correlations to rank alerts and reconstruct attack scenarios; and a visualization module 45 that outputs the detection results to end users.
[0024] The detectors that feed the intrusion detection system 31 may report alerts with very different semantics. For example, network detectors monitor the topology of network connections and report an alert if a suspicious client suddenly connects to a stable server. Meanwhile, process-file detectors may generate an alert if an unseen process accesses a sensitive file. The intrusion detection system 31 integrates alerts regardless of their respective semantics to overcome the problem of heterogeneity.
[0025] Referring now to FIG. 3, a method for host level analysis is shown. The present embodiments provide particular focus on the operation of the host level analysis module 43. Process-to-file anomaly detection 302 takes host level process-to-file events from the data distributor 41 as input and discovers abnormal process-to-file events as output. User-to-process anomaly detection 304 takes all streaming process events as input, models each user's behavior at the process level, and identifies the suspicious processes run by each user as output. Process-to-process anomaly detection 306 takes all streaming process events as input, models each process's execution behavior, and identifies suspicious process execution events.
[0026] Process signature anomaly detection 308 takes process names and signatures as input and detects processes with suspicious signatures. Malicious process path discovery 310 takes current active processes as path starting points and tracks all the possible process paths by combining the incoming and previous events in a user-defined time window. The present embodiments focus specifically on host level risk analysis, which determines, on a per-host basis, an anomaly score for the host that characterizes how much the host has deviated from expected behavior.
[0027] Referring now to FIG. 4, a method of performing host level risk analysis 312 is shown. Block 402 performs host level behavioral modeling. This modeling may be performed based solely on historical events recorded at the host. Block 404 then leverages the results of host level behavioral modeling to find a group of peer hosts that share similar behaviors with the host in question. It should be noted that a peer host is determined solely on the basis of host behavior rather than on host role or network connections. Block 406 then identifies anomalies in the behavior of the host based on the ongoing behaviors of the peer hosts. This may be performed over a period of time (e.g., a week).
[0028] The historical events used by host level behavior modeling 402 may include, for example, network events and process-level events. A network event e is defined herein as a 7-tuple, e = <src-ip, src-port, dst-ip, dst-port, connecting-process, protocol-num, timestamp>, where src-ip and src-port are the IP address and port of the source host, dst-ip and dst-port are the IP address and port of the destination host, connecting-process is the process that initializes the connection, protocol-num indicates the protocol of the connection, and timestamp records the connection time. It should be noted that ASI agents are generally light-weight. To reduce resource consumption and to maintain privacy, the agent generally does not collect the content and traffic size of network connections, making that information unavailable for analysis.

[0029] Table 1 shows an exemplary list of network events from 11:30 AM to 12:05 PM on February 29th, 2016. These network events can be classified into two categories based on the dst-ip: if the dst-ip is in the range of the enterprise network's IP addresses (2.15.xx.xx), the network event is an inside connection between two hosts of the enterprise network. If the dst-ip is not in the range, it is an outside connection between an internal host and an external host. In Table 1, e1, e3, e5 and e6 are inside connections and e2 and e4 are outside connections.
[Table 1: exemplary network events e1-e6 (rendered as an image in the original)]
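As a rough illustration, the 7-tuple network event and the inside/outside classification described above can be sketched as follows. The field names mirror the tuple definition, the 2.15.xx.xx prefix follows the example of Table 1, and the simple string-prefix test is an illustrative stand-in for a proper subnet check:

```python
from typing import NamedTuple

class NetworkEvent(NamedTuple):
    """The 7-tuple network event described in paragraph [0028]."""
    src_ip: str
    src_port: int
    dst_ip: str
    dst_port: int
    connecting_process: str
    protocol_num: int
    timestamp: int

def is_inside_connection(event: NetworkEvent, enterprise_prefix: str = "2.15.") -> bool:
    """Inside connection: the destination IP falls in the enterprise address range."""
    return event.dst_ip.startswith(enterprise_prefix)

# A hypothetical outside connection: the destination is an external DNS server.
e2 = NetworkEvent("2.15.1.10", 50302, "8.8.8.8", 53, "dns.exe", 17, 1456745400)
```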
[0030] A process-level event e is a 5-tuple, e = <host-id, user-id, process, object, timestamp>, where host-id indicates the host where the agent is installed, user-id identifies the user who runs the process, timestamp records the event time, process is the subject of the event and object is the object of the event. The object can be a file, another process or a socket that contains the connection information. According to an object's type, process-level events can be classified into one of three categories: process-file events, process-socket events, and process-process events.
[0031] Table 2 shows an exemplary list of process-level events from 11:30 AM to 12:05 PM on February 29th, 2016. The IP address is used as an identifier for the hosts. Here, e1 and e5 are process-file events, e3 and e4 are process-socket events, and e2 is a process-process event.
[Table 2: exemplary process-level events e1-e5 (rendered as an image in the original)]
[0032] The network events can be seen as external events and the process-level events can be treated as internal events. In general, internal events capture a single host's local behaviors and external events capture the interaction behaviors between multiple hosts. In particular, given a host h ∈ H, a set of n events Eh = {e0, e1, ..., en-1} is monitored from h, including both network events and process-level events. The network event data can be expressed as a collection of triples {<h, e0, h0'>, <h, e1, h1'>, ..., <h, ei, hi'>}. Process-level event information can be expressed as a set of pairs {<h, e0'>, <h, e1'>, ..., <h, ej'>}.
[0033] A host is then modeled as the context of all its process-level events and as the context of all hosts reached by its network events. All pairs of hosts and events are embedded into a common latent space where their co-occurrences are preserved. This is performed using text embedding methods that capture syntactic and semantic word relationships, for example by unsupervised learning of word embeddings by exploiting word co-occurrences.

[0034] In particular, a process-level event e on the host h can be modeled as P(h|e), the conditional probability of a host h given an event e, i.e., the probability that the event e is observed from the host h, via the following softmax function:

P(h|e) = exp(vh · ve) / Σĥ∈H exp(vĥ · ve)

where vh and ve are the embedding vectors for the host h and the event e, respectively, and H is the set of all hosts. Similarly, a network event can be modeled as P(h'|h, e), the conditional probability of a host h' given a host h and an event e, i.e., the probability that the host h issues a network event e that connects to the host h'. The network event conditional probability can be computed by:

P(h'|h, e) = exp(vh' · vh) / Σĥ∈H exp(vĥ · vh)

where vh and vh' are the embedding vectors for the hosts h and h', respectively, and H is the set of all hosts.
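A minimal numerical sketch of the softmax probability above, assuming the host embedding vectors are stacked as the rows of a matrix, might look like:

```python
import numpy as np

def host_given_event_prob(v_hosts: np.ndarray, v_event: np.ndarray, host_idx: int) -> float:
    """P(h|e) = exp(v_h . v_e) / sum over all hosts of exp(v_h~ . v_e)."""
    scores = v_hosts @ v_event      # one dot product per candidate host
    scores -= scores.max()          # shift for numerical stability; cancels in the ratio
    exp_scores = np.exp(scores)
    return float(exp_scores[host_idx] / exp_scores.sum())
```

The same function with the host-embedding matrix and a second host embedding in place of the event embedding gives the network-event probability P(h'|h, e).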
[0035] Given a collection of n events E = {e0, e1, ..., en-1} monitored from all hosts in H, the embedding vectors for both hosts and events are set so that the above-mentioned two probability functions are maximized. This optimization model can be expensive to solve, because the denominators of both equations sum over all hosts of H. Thus, negative sampling is applied. In general, to avoid iterating over too many hosts, only a sample of them are updated. All observed co-occurring host-event pairs from the input data are kept, and a few "noisy pairs" are artificially sampled. The noisy pairs are not supposed to co-occur, so their conditional probabilities should be low. Hence, negative sampling offers an approximate update that is computationally efficient, since the calculation now only scales with the size of the noise. This provides the following objective functions to be minimized:

O1 = - Σ(h,e)∈DP log σ(vh · ve) - Σ(h',e')∈DP' log σ(-vh' · ve')

O2 = - Σ(h,e,h')∈DN log σ(vh · vh') - Σ(h,e,ĥ)∈DN' log σ(-vh · vĥ)

where σ is the sigmoid function, DP is the collection of pairs of process-level events, DN is the set of triples of network-level events, (h, e, ĥ) is a negative sample for network-level events, and h and ĥ are the two hosts in the negative network-level sample. DP' and DN' are the two sets of negative samples constructed by a sampling scheme for process-level events and network-level events, respectively. Concretely, for each co-occurrence (h, e) ∈ DP, k noises (h1, e), (h2, e), ..., (hk, e) are sampled, where {h1, h2, ..., hk} is drawn according to a noise distribution. Lacking guidance regarding the negative sampling distribution, several are tested empirically, and the best results are found when sampling with probability inversely proportional to the frequency of co-occurrence with e. Then, mini-batch gradient descent may be used to solve the objective function.
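Under the negative-sampling scheme just described, the contribution of one positive pair plus its k noise pairs to O1 can be sketched as below; this is a hypothetical helper for a single pair, not the full mini-batch training loop:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def o1_pair_loss(v_h, v_e, v_noise_hosts):
    """-log sigma(v_h . v_e) minus the sum over noise hosts of log sigma(-v_hi . v_e)."""
    positive_term = -np.log(sigmoid(v_h @ v_e))                     # observed pair
    negative_term = -np.sum(np.log(sigmoid(-(v_noise_hosts @ v_e))))  # sampled noise
    return float(positive_term + negative_term)
```

Gradient descent on this loss pushes the embedding of a host toward the events observed on it and away from the sampled noise hosts.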
[0036] To find a group of peer hosts that share similar behaviors with a target host, block 404 creates clusters of hosts, taking advantage of the fact that the learned embeddings for the hosts are their latent behavior representations in that space. The following pseudo-code shows details of finding a group of peer hosts:

[0037] Input: the set of learned embedding vectors for every host, V = {vh : ∀h ∈ H}; the predefined number of peer groups, k; the limit of iterations, MaxIters;

[0038] Output: the set of peer group labels of all hosts, L = {l(vh) : ∀vh ∈ V};

[0039] Initialize the set of peer group centroids C = {c1, ..., ck} by randomly selecting k elements from V;

[0040] For each vh in V {

[0041] l(vh) ← argmin j∈[1,k] || vh − cj ||²; }

[0042] changed ← true;

[0043] iter ← 0;

[0044] While changed = true and iter < MaxIters { changed ← false;

[0045] For each cj ∈ C {

[0046] cj ← mean of all vh with l(vh) = j; }

[0047] For each vh in V {

[0048] minDist ← argmin j∈[1,k] || vh − cj ||²;

[0049] If minDist ≠ l(vh) {

[0050] l(vh) ← minDist;

[0051] changed ← true;

[0052] }

[0053] }

[0054] iter ← iter + 1;

[0055] }

[0056] Return {l(vh) : ∀vh ∈ V};
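The pseudo-code above is a standard k-means pass over the host embeddings. A pure-Python sketch, with plain lists standing in for the embedding vectors, could read:

```python
import random

def peer_groups(vectors, k, max_iters=100, seed=0):
    """Assign each host embedding vector to one of k peer groups (k-means)."""
    rnd = random.Random(seed)
    centroids = [list(v) for v in rnd.sample(vectors, k)]
    labels = [0] * len(vectors)
    changed, iters = True, 0
    while changed and iters < max_iters:
        changed = False
        # assignment step: each vector joins its nearest centroid
        for i, v in enumerate(vectors):
            nearest = min(range(k),
                          key=lambda j: sum((a - b) ** 2 for a, b in zip(v, centroids[j])))
            if nearest != labels[i]:
                labels[i] = nearest
                changed = True
        # update step: each centroid becomes the mean of its members
        for j in range(k):
            members = [vectors[i] for i in range(len(vectors)) if labels[i] == j]
            if members:
                centroids[j] = [sum(col) / len(members) for col in zip(*members)]
        iters += 1
    return labels
```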
[0057] Since all hosts are modeled in the same space, classic unsupervised clustering can be followed to obtain the peer groups. First, given a predefined integer k as the number of peer groups to be found, k hosts are selected at random as peer group centers (centroids). Then every host is assigned to its closest group centroid according to a Euclidean distance function. The centroid of all hosts in each cluster is then recalculated, and the values of the centroids are updated, taken as the geometric mean of the points that have that centroid's label. This process repeats until the hosts can no longer change groups. The output is a set of peer group labels corresponding to each host.

[0058] Expressed another way, an expectation-maximization model can be used in block 404. An objective function can be formulated as:
Oc = Σh∈H min j∈[1,K] || vh − cj ||²

where cj is the community centroid for community j, vh is the embedding vector for the host h, and K is the number of communities. By minimizing this objective function, the best community structure of hosts can be determined.
[0059] A unified model can then be obtained by adding together the objective functions as Ou = O1 + O2 + Oc. By minimizing this equation, the community centroids C = {c1, c2, ..., cK} can be determined. There are two sets of parameters in the unified model: the embedding vectors of hosts and events (VH and VE) and the community centroids C. Thus, a two-step iterative learning method may be used, where the embedding vectors and the community centroids mutually enhance one another. In the first step, the community centroids C and the community assignment l(VH) are fixed and the best embedding vectors VH and VE are learned. In the second step, the embedding vectors VH and VE are fixed and the best values for C and l(VH) are learned.
[0060] Thus, for the first step, when the community centroids C are fixed, the Oc term becomes a regularization term that draws the embedding vectors closer to their corresponding community centroids. The optimization of the reduced objective function for the first step is as follows: given two sets of data samples, DP and DN, in each iteration two mini-batches of process-level events and network-level events are sampled as DbP and DbN. Then two sets of negative samples, DbP' and DbN', are generated according to a noise distribution and two parameters that control the size of the negative samples, kP and kN.

[0061] Without a rule of thumb for choosing an optimal noise distribution, several may be tested empirically to select the best one when sampling. One particular noise distribution that has been found to be effective is a distribution where the sampling probability is inversely proportional to the frequency of co-occurrence. Experiments have also shown that a suitable value for kP and kN is 5. Gradient descent can then be used over VH and VE to get the best embedding vectors.
[0062] During the second step, when VH and VE are fixed, the problem is reduced to expectation-maximization to provide the best community centroids C and community assignment l(VH). First the community assignment l(vh) is calculated for each host h as follows:
l(vh) = argmin j∈[1,K] || vh − cj ||²
[0063] The centroid cj is then recalculated for all hosts in each group. The values of the centroids are updated, taken as the geometric mean of the points that have the same label as the centroid. The first and second steps are repeated iteratively until the hosts can no longer change groups.
[0064] Block 406 then uses the detected host peer groups to assess the host's anomaly level. The peer groups reflect behavioral similarity between all hosts, based on a potentially very large corpus of historical data. Reviewing the host status over a particular time period, that host should still behave similarly to its peer hosts. If not, block 406 determines that the host has a suspicious anomaly status.
[0065] One question is how to identify the severity level of a host's anomaly status. This can be addressed from two perspectives. From a host perspective, it is a straightforward determination whether a host's peers have changed relative to its past peer hosts. Thus, the following function measures the anomaly level of a host:

f1(h) = 1 − |PE(h) ∩ PE'(h)| / |PE(h) ∪ PE'(h)|

where PE(h) is the set of peers found in the past and PE'(h) is the set of peers identified later on. From an event perspective, using the embedding vectors across all events collected from all hosts, it can be determined how different the events are between peer hosts:

f2(h) = (1 / |Eh|) Σe∈Eh (1 / |PE(h)|) Σh'∈PE(h) min e'∈Eh' (1 − cos(e, e'))

where Eh is the set of events monitored from host h and cos(e, e') is the cosine similarity between the embedding vectors of event e and event e'. In particular, for each event on a host h, the distance between the event and its closest event on each peer host of h is computed in terms of cosine similarity. The average over all peer hosts is determined, and the average over all events from one host is returned as the score for this host. The two functions may be combined as f(h) = α·f1(h) + β·f2(h) to return the anomaly score of the host h, where α and β are weighting factors that indicate the contribution of the host perspective and the event perspective, respectively.
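A simplified sketch of combining the two perspectives follows; the peer sets, toy embedding vectors, and helper names are illustrative assumptions, and f2 here uses (1 − cosine similarity) as the event distance:

```python
import math

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

def anomaly_score(old_peers, new_peers, host_events, peer_events, alpha=0.5, beta=0.5):
    """f(h) = alpha * f1(h) + beta * f2(h), as sketched in paragraph [0065]."""
    # f1: how much the peer set changed (Jaccard distance between old and new peers).
    union = old_peers | new_peers
    f1 = 1.0 - len(old_peers & new_peers) / len(union) if union else 0.0
    # f2: for each event, average distance to the closest event on each peer host.
    per_event = []
    for e in host_events:
        closest = [min(1.0 - cosine(e, pe) for pe in events)
                   for events in peer_events.values()]
        per_event.append(sum(closest) / len(closest))
    f2 = sum(per_event) / len(per_event) if per_event else 0.0
    return alpha * f1 + beta * f2
```

A host whose peers and events are unchanged scores near zero; a host whose peer set and event embeddings have both shifted completely scores near one.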
[0066] Embodiments described herein may be entirely hardware, entirely software or including both hardware and software elements. In a preferred embodiment, the present invention is implemented in software, which includes but is not limited to firmware, resident software, microcode, etc.
[0067] Embodiments may include a computer program product accessible from a computer-usable or computer-readable medium providing program code for use by or in connection with a computer or any instruction execution system. A computer-usable or computer-readable medium may include any apparatus that stores, communicates, propagates, or transports the program for use by or in connection with the instruction execution system, apparatus, or device. The medium can be a magnetic, optical, electronic, electromagnetic, infrared, or semiconductor system (or apparatus or device) or a propagation medium. The medium may include a computer-readable storage medium such as a semiconductor or solid state memory, magnetic tape, a removable computer diskette, a random access memory (RAM), a read-only memory (ROM), a rigid magnetic disk, an optical disk, etc.
[0068] Each computer program may be tangibly stored in a machine-readable storage media or device (e.g., program memory or magnetic disk) readable by a general or special purpose programmable computer, for configuring and controlling operation of a computer when the storage media or device is read by the computer to perform the procedures described herein. The inventive system may also be considered to be embodied in a computer-readable storage medium, configured with a computer program, where the storage medium so configured causes a computer to operate in a specific and predefined manner to perform the functions described herein.
[0069] A data processing system suitable for storing and/or executing program code may include at least one processor coupled directly or indirectly to memory elements through a system bus. The memory elements can include local memory employed during actual execution of the program code, bulk storage, and cache memories which provide temporary storage of at least some program code to reduce the number of times code is retrieved from bulk storage during execution. Input/output or I/O devices (including but not limited to keyboards, displays, pointing devices, etc.) may be coupled to the system either directly or through intervening I/O controllers.
[0070] Network adapters may also be coupled to the system to enable the data processing system to become coupled to other data processing systems or remote printers or storage devices through intervening private or public networks. Modems, cable modem and Ethernet cards are just a few of the currently available types of network adapters.
[0071] Referring now to FIG. 5, a host level risk analysis system 500 is shown. The system 500 includes a hardware processor 502 and memory 504. In addition, a number of functional modules are included that may, in some embodiments, be implemented as software that is stored in memory 504 and that is executed by hardware processor 502. In other embodiments, the functional modules may be implemented as one or more discrete hardware components in the form of, e.g., application specific integrated chips or field programmable gate arrays. Still other embodiments may use both hardware and software forms for different functional modules, or may split the functions of a single functional module across both hardware and software components.
Host logging module 506 collects information from agents at various hosts and stores the information in memory 504. This can include historical event information, including process-level events and network events, as well as real-time event information. Host behavior module 508 determines a host behavior profile for each host based on historical event information. Peer host module 510 then determines each host's peers based on a behavioral similarity. In particular, peer host module 510 maps each host's behavior profile into an embedding space and clusters hosts according to proximity in that space. Anomaly score module 512 then monitors incoming events over a period of time to reevaluate the host's behavior relative to that of its peers. If a host's behavior deviates significantly from the behavior of its peer hosts, anomaly score module 512 assigns a high value to the host's anomaly score. If the host's behavior conforms to the behavior of its peer hosts, anomaly score module 512 assigns a low value to the host's anomaly score.

[0073] Based on the outcome of the anomaly score module 512, a security module 514 performs manual or automated security actions in response to the ranked alerts and alert patterns. In particular, the security module 514 may have rules and policies that trigger when an anomaly score for a host exceeds a threshold. Upon such triggers, the security module 514 may automatically trigger security management actions such as, e.g., shutting down devices, stopping or restricting certain types of network communication, raising alerts to system administrators, changing a security policy level, and so forth. The security module 514 may also accept instructions from a human operator to manually trigger certain security actions in view of analysis of the anomalous host.
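A toy policy table of the kind the security module 514 might apply is sketched below; the threshold values and action names are illustrative assumptions, not values from this disclosure:

```python
def security_actions(host_scores, alert_threshold=0.8, isolate_threshold=0.95):
    """Map each host's anomaly score to zero or more security management actions."""
    actions = []
    for host, score in host_scores.items():
        if score > alert_threshold:
            actions.append((host, "raise_alert"))        # notify administrators
        if score > isolate_threshold:
            actions.append((host, "restrict_network"))   # severe: isolate the host
    return actions
```

In practice, a rule engine of this shape would sit behind the anomaly score module 512 and also accept manual overrides from an operator.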
[0074] Referring now to FIG. 6, an exemplary processing system 600 is shown which may represent the intrusion detection system 500. The processing system 600 includes at least one processor (CPU) 604 operatively coupled to other components via a system bus 602. A cache 606, a Read Only Memory (ROM) 608, a Random Access Memory (RAM) 610, an input/output (I/O) adapter 620, a sound adapter 630, a network adapter 640, a user interface adapter 650, and a display adapter 660, are operatively coupled to the system bus 602.
[0075] A first storage device 622 and a second storage device 624 are operatively coupled to system bus 602 by the I/O adapter 620. The storage devices 622 and 624 can be any of a disk storage device (e.g., a magnetic or optical disk storage device), a solid state magnetic device, and so forth. The storage devices 622 and 624 can be the same type of storage device or different types of storage devices.
[0076] A speaker 632 is operatively coupled to system bus 602 by the sound adapter 630. A transceiver 642 is operatively coupled to system bus 602 by network adapter 640. A display device 662 is operatively coupled to system bus 602 by display adapter 660.
[0077] A first user input device 652, a second user input device 654, and a third user input device 656 are operatively coupled to system bus 602 by user interface adapter 650. The user input devices 652, 654, and 656 can be any of a keyboard, a mouse, a keypad, an image capture device, a motion sensing device, a microphone, a device incorporating the functionality of at least two of the preceding devices, and so forth. Of course, other types of input devices can also be used, while maintaining the spirit of the present principles. The user input devices 652, 654, and 656 can be the same type of user input device or different types of user input devices. The user input devices 652, 654, and 656 are used to input and output information to and from system 600.
[0078] Of course, the processing system 600 may also include other elements (not shown), as readily contemplated by one of skill in the art, as well as omit certain elements. For example, various other input devices and/or output devices can be included in processing system 600, depending upon the particular implementation of the same, as readily understood by one of ordinary skill in the art. For example, various types of wireless and/or wired input and/or output devices can be used.
Moreover, additional processors, controllers, memories, and so forth, in various configurations can also be utilized as readily appreciated by one of ordinary skill in the art. These and other variations of the processing system 600 are readily contemplated by one of ordinary skill in the art given the teachings of the present principles provided herein.
[0079] The foregoing is to be understood as being in every respect illustrative and exemplary, but not restrictive, and the scope of the invention disclosed herein is not to be determined from the Detailed Description, but rather from the claims as interpreted according to the full breadth permitted by the patent laws. It is to be understood that the embodiments shown and described herein are only illustrative of the principles of the present invention and that those skilled in the art may implement various modifications without departing from the scope and spirit of the invention. Those skilled in the art could implement various other feature combinations without departing from the scope and spirit of the invention. Having thus described aspects of the invention, with the details and particularity required by the patent laws, what is claimed and desired protected by Letters Patent is set forth in the appended claims.
Claims
WHAT IS CLAIMED IS:
1. A method for determining a risk level of a host in a network, comprising:
modeling (402) a target host's behavior based on historical events recorded at the target host;
determining (404) one or more original peer hosts having behavior similar to the target host's behavior;
determining (406) an anomaly score for the target host using a processor based on how the target host's behavior changes relative to behavior of the one or more original peer hosts over time; and
performing (514) a security management action based on the anomaly score.
2. The method of claim 1, wherein modeling the target host's behavior comprises embedding the historical events in a latent space.
3. The method of claim 2, wherein determining one or more peer hosts comprises determining a distance in the latent space between the embedded target host's events and embedded events of other hosts.
4. The method of claim 3, wherein determining one or more peer hosts comprises clustering hosts based on distance in the latent space.
5. The method of claim 4, wherein clustering comprises identifying a set of initial cluster centroids and iteratively updating the centroids after assigning hosts to a closest cluster.
6. The method of claim 2, wherein embedding the historical events in a latent space comprises a negative sampling that approximates a maximized conditional probability that an event will occur at the target host.
7. The method of claim 1, wherein determining the anomaly score comprises determining one or more new peer hosts and comparing the one or more new peer hosts to the one or more original peer hosts.
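As a non-limiting illustration of the comparison in claim 7 between newly determined peer hosts and the original peer hosts, one simple measure is the Jaccard distance between the two peer sets. The function name and the choice of Jaccard similarity are assumptions for the sketch:

```python
def peer_shift_score(original_peers, new_peers):
    """1 minus the Jaccard overlap of the original and new peer sets:
    0.0 means the peer group is unchanged, 1.0 means the host now
    clusters with entirely different hosts."""
    a, b = set(original_peers), set(new_peers)
    if not a and not b:
        return 0.0
    return 1.0 - len(a & b) / len(a | b)
```

A host whose peer group has drifted substantially would receive a high score, which could feed into the anomaly score of claim 1.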
8. The method of claim 1, wherein determining the anomaly score comprises determining a similarity between new events recorded at the target host and new events recorded at the one or more original peer hosts.
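For claim 8, one illustrative (non-claim) way to compare new events at the target host against new events at its original peers is the cosine similarity between mean event vectors. The function name and the cosine measure are assumptions for the sketch:

```python
import numpy as np

def event_similarity_score(target_events, peer_events):
    """Cosine similarity between the mean of the target host's new
    event vectors and the mean of the peers' new event vectors;
    low similarity suggests the host is drifting from its peers."""
    t = np.mean(np.asarray(target_events, dtype=float), axis=0)
    p = np.mean(np.asarray(peer_events, dtype=float), axis=0)
    denom = np.linalg.norm(t) * np.linalg.norm(p)
    sim = float(t @ p / denom) if denom else 0.0
    return 1.0 - sim  # higher score = more anomalous
```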
9. The method of claim 1, wherein performing the security management action further comprises automatically performing at least one security action selected from the group consisting of shutting down devices, stopping or restricting certain types of network communication, raising alerts to system administrators, and changing a security policy level.
10. A system for determining a risk level of a host in a network, comprising:
a host behavior module (508) configured to model a target host's behavior based on historical events recorded at the target host;
a peer host module (510) configured to determine one or more original peer hosts having behavior similar to the target host's behavior;
an anomaly score module (512) comprising a processor configured to determine an anomaly score for the target host based on how the target host's behavior changes relative to behavior of the one or more original peer hosts over time; and
a security module configured to perform a security management action based on the anomaly score.
11. The system of claim 10, wherein modeling the target host's behavior comprises embedding the historical events in a latent space.
12. The system of claim 11, wherein determining the one or more original peer hosts comprises determining a distance in the latent space between the embedded target host's events and embedded events of other hosts.
13. The system of claim 12, wherein determining the one or more original peer hosts comprises clustering hosts based on distance in the latent space.
14. The system of claim 13, wherein clustering comprises identifying a set of initial cluster centroids and iteratively updating the centroids after assigning hosts to a closest cluster.
15. The system of claim 11, wherein embedding the historical events in a latent space comprises a negative sampling that approximates a maximized conditional probability that an event will occur at the target host.
16. The system of claim 10, wherein determining the anomaly score comprises determining one or more new peer hosts and comparing the one or more new peer hosts to the one or more original peer hosts.
17. The system of claim 10, wherein determining the anomaly score comprises determining a similarity between new events recorded at the target host and new events recorded at the one or more original peer hosts.
18. The system of claim 10, wherein performing the security management action further comprises automatically performing at least one security action selected from the group consisting of shutting down devices, stopping or restricting certain types of network communication, raising alerts to system administrators, and changing a security policy level.
19. A method for modeling host behavior in a network, comprising:
determining (402) a first probability function for observing each of a set of process-level events at a first host based on embedding vectors for each event and the first host;
determining (402) a second probability function for the first host issuing each of a set of network-level events connecting to a second host based on embedding vectors for the first host and the second host;
maximizing (402) the first and second probability functions to determine a set of likely process-level and network-level events for the first host; and
performing (514) a security action based on the modeled host behavior.
20. The method of claim 19, wherein the set of process-level events and the set of network-level events are historical events detected at the first host.
21. The method of claim 19, wherein maximizing the first and second probability functions comprises performing a negative sampling of the host-event pairs.
22. The method of claim 21, wherein maximizing the first and second probability functions further comprises mini-batch gradient descent using the negative sampling of the host-event pairs.
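For illustration only (not part of the claimed subject matter), the combination of negative sampling and mini-batch updates recited in claims 21-22 can be sketched as a short training loop over observed (host, event) index pairs. The function name, learning rate, batch handling, and the per-pair update inside each batch are assumptions; a strict mini-batch step would accumulate gradients over the whole batch before applying them:

```python
import numpy as np

def train_embeddings(pairs, host_emb, event_emb, n_noise=3,
                     batch_size=32, lr=0.05, epochs=5, seed=0):
    """Shuffle the observed (host, event) pairs each epoch and sweep
    over them in mini-batches, applying negative-sampling updates so
    observed pairs score higher than sampled noise events."""
    rng = np.random.default_rng(seed)
    sig = lambda x: 1.0 / (1.0 + np.exp(-x))
    pairs = np.array(pairs)
    n_events = len(event_emb)
    for _ in range(epochs):
        rng.shuffle(pairs)
        for start in range(0, len(pairs), batch_size):
            for h, e in pairs[start:start + batch_size]:
                # Positive pair: raise the modeled probability.
                g = 1.0 - sig(host_emb[h] @ event_emb[e])
                grad_h = g * event_emb[e]
                event_emb[e] += lr * g * host_emb[h]
                # Sampled noise events: lower their scores for this host.
                for n in rng.integers(0, n_events, size=n_noise):
                    if n == e:
                        continue  # skip the observed event itself
                    gn = -sig(host_emb[h] @ event_emb[n])
                    grad_h += gn * event_emb[n]
                    event_emb[n] += lr * gn * host_emb[h]
                host_emb[h] += lr * grad_h
    return host_emb, event_emb
```

Negative sampling keeps each update cheap because only a handful of noise events, rather than every event in the vocabulary, contribute to the gradient.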
23. The method of claim 21, wherein the negative sampling approximates a maximized conditional probability that an event will occur at the target host.
24. The method of claim 19, wherein performing the security action further comprises automatically performing at least one security action selected from the group consisting of shutting down devices, stopping or restricting certain types of network communication, raising alerts to system administrators, and changing a security policy level.
25. The method of claim 19, wherein all pairs of hosts and events are embedded into a common latent space.
26. The method of claim 19, wherein each host is modeled as a context of process-level events and as a context of all hosts reached by network events of said host.
27. The method of claim 19, wherein each process-level event on a host is modeled as a conditional probability of said host given the event.
28. The method of claim 19, wherein each network event is modeled as a conditional probability that one host issues a network event that connects to another host.
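As a non-claim illustration of claim 28, the conditional probability that one host issues a network event connecting to another host can be realized as a softmax over the source host's dot products with every candidate host embedding. The function name and the softmax form (including the source host among candidates) are assumptions for the sketch:

```python
import numpy as np

def connect_prob(src, dst, host_emb):
    """P(dst | src): softmax over the source host embedding's dot
    products with all candidate destination host embeddings."""
    scores = host_emb @ host_emb[src]
    scores -= scores.max()  # subtract max for numerical stability
    probs = np.exp(scores)
    probs /= probs.sum()
    return float(probs[dst])
```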
29. A system for modeling host behavior in a network, comprising:
a host behavior module (508) comprising a processor configured to determine a first probability function for observing each of a set of process-level events at a first host based on embedding vectors for each event and the first host, to determine a second probability function for the first host issuing each of a set of network-level events connecting to a second host based on embedding vectors for the first host and the second host, and to maximize the first and second probability functions to determine a set of likely process-level and network-level events for the first host; and
a security module (514) configured to perform a security action based on the modeled host behavior.
30. The system of claim 29, wherein the set of process-level events and the set of network-level events are historical events detected at the first host.
31. The system of claim 29, wherein the host behavior module is further configured to perform a negative sampling of the host-event pairs in maximizing the first and second probability functions.
32. The system of claim 31, wherein the host behavior module is further configured to perform a mini-batch gradient descent using the negative sampling of the host-event pairs.
33. The system of claim 31, wherein the negative sampling approximates a maximized conditional probability that an event will occur at the target host.
34. The system of claim 29, wherein the security module is further configured to perform at least one security action selected from the group consisting of shutting down devices, stopping or restricting certain types of network communication, raising alerts to system administrators, and changing a security policy level.
35. The system of claim 29, wherein all pairs of hosts and events are embedded into a common latent space.
36. The system of claim 29, wherein each host is modeled as a context of process-level events and as a context of all hosts reached by network events of said host.
37. The system of claim 29, wherein each process-level event on a host is modeled as a conditional probability of said host given the event.
38. The system of claim 29, wherein each network event is modeled as a conditional probability that one host issues a network event that connects to another host.
39. A method for detecting host community, comprising:
modeling (402) a target host's behavior based on historical events recorded at the target host;
determining (404) one or more original peer hosts having behavior similar to the target host's behavior by determining a distance, in a latent space that embeds the historical events, between events of the target host and events of the one or more original peer hosts; and
performing (514) a security management action based on behavior of the target host and the determined one or more original peer hosts.
40. The method of claim 39, wherein determining the one or more original peer hosts comprises clustering hosts based on distance in the latent space.
41. The method of claim 40, wherein clustering comprises identifying a set of initial cluster centroids and iteratively updating the centroids after assigning hosts to a closest cluster.
42. The method of claim 39, further comprising embedding the historical events in a latent space using a negative sampling that approximates a maximized conditional probability that an event will occur at the target host.
43. The method of claim 39, wherein performing the security management action comprises determining an anomaly score based on a comparison between behavior of one or more new peer hosts and behavior of the one or more original peer hosts.
44. The method of claim 39, wherein performing the security management action comprises determining an anomaly score based on a similarity between new events recorded at the target host and new events recorded at the one or more original peer hosts.
45. The method of claim 39, wherein performing the security management action further comprises automatically performing at least one security action selected from the group consisting of shutting down devices, stopping or restricting certain types of network communication, raising alerts to system administrators, and changing a security policy level.
46. The method of claim 39, wherein each host is modeled as a context of process-level events and as a context of all hosts reached by network events of said host.
47. The method of claim 39, wherein each process-level event on a host is modeled as a conditional probability of said host given the event.
48. The method of claim 39, wherein each network event is modeled as a conditional probability that one host issues a network event that connects to another host.
49. A system for detecting host community, comprising:
a host behavior module (508) configured to model a target host's behavior based on historical events recorded at the target host;
a peer host module (510) comprising a processor configured to determine one or more original peer hosts having behavior similar to the target host's behavior by determining a distance, in a latent space that embeds the historical events, between events of the target host and events of the one or more original peer hosts; and
a security module (514) configured to perform a security management action based on behavior of the target host and the determined one or more original peer hosts.
50. The system of claim 49, wherein the peer host module is further configured to cluster hosts based on distance in the latent space.
51. The system of claim 50, wherein the peer host module is further configured to identify a set of initial cluster centroids and iteratively update the centroids after assigning hosts to a closest cluster.
52. The system of claim 49, wherein the host behavior module is further configured to embed the historical events in a latent space using a negative sampling that approximates a maximized conditional probability that an event will occur at the target host.
53. The system of claim 49, further comprising an anomaly score module configured to determine an anomaly score for use by the security module based on a comparison between behavior of one or more new peer hosts and behavior of the one or more original peer hosts.
54. The system of claim 49, further comprising an anomaly score module configured to determine an anomaly score for use by the security module based on a similarity between new events recorded at the target host and new events recorded at the one or more original peer hosts.
55. The system of claim 49, wherein the security module is further configured to automatically perform at least one security action selected from the group consisting of shutting down devices, stopping or restricting certain types of network communication, raising alerts to system administrators, and changing a security policy level.
56. The system of claim 49, wherein the host behavior module is further configured to model each host as a context of process-level events and as a context of all hosts reached by network events of said host.
57. The system of claim 49, wherein the host behavior module is further configured to model each process-level event on a host as a conditional probability of said host given the event.
58. The system of claim 49, wherein the host behavior module is further configured to model each network event as a conditional probability that one host issues a network event that connects to another host.

Applications Claiming Priority (8)

Application Number Priority Date Filing Date Title
US201762463976P 2017-02-27 2017-02-27
US62/463,976 2017-02-27
US15/902,432 US10476754B2 (en) 2015-04-16 2018-02-22 Behavior-based community detection in enterprise information networks
US15/902,369 2018-02-22
US15/902,432 2018-02-22
US15/902,369 US10476753B2 (en) 2015-04-16 2018-02-22 Behavior-based host modeling
US15/902,318 US10367842B2 (en) 2015-04-16 2018-02-22 Peer-based abnormal host detection for enterprise security systems
US15/902,318 2018-02-22

Publications (2)

Publication Number Publication Date
WO2018217259A2 true WO2018217259A2 (en) 2018-11-29
WO2018217259A3 WO2018217259A3 (en) 2019-02-28

Family

ID=64396834

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2018/019829 WO2018217259A2 (en) 2017-02-27 2018-02-27 Peer-based abnormal host detection for enterprise security systems

Country Status (1)

Country Link
WO (1) WO2018217259A2 (en)


Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8544058B2 (en) * 2005-12-29 2013-09-24 Nextlabs, Inc. Techniques of transforming policies to enforce control in an information management system
US8424072B2 (en) * 2010-03-09 2013-04-16 Microsoft Corporation Behavior-based security system
US8973133B1 (en) * 2012-12-19 2015-03-03 Symantec Corporation Systems and methods for detecting abnormal behavior of networked devices
US9355007B1 (en) * 2013-07-15 2016-05-31 Amazon Technologies, Inc. Identifying abnormal hosts using cluster processing
US9516039B1 (en) * 2013-11-12 2016-12-06 EMC IP Holding Company LLC Behavioral detection of suspicious host activities in an enterprise

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2021202117A1 (en) * 2020-03-31 2021-10-07 Forescout Technologies, Inc. Clustering enhanced analysis
US11601445B2 (en) 2020-03-31 2023-03-07 Forescout Technologies, Inc. Clustering enhanced analysis
US11902304B2 (en) 2020-03-31 2024-02-13 Forescout Technologies, Inc. Clustering enhanced analysis

Also Published As

Publication number Publication date
WO2018217259A3 (en) 2019-02-28


Legal Events

121 Ep: the epo has been informed by wipo that ep was designated in this application (Ref document number: 18805488; Country of ref document: EP; Kind code of ref document: A2)

NENP Non-entry into the national phase (Ref country code: DE)

122 Ep: pct application non-entry in european phase (Ref document number: 18805488; Country of ref document: EP; Kind code of ref document: A2)