Patents by Inventor Andreea Anghel
Andreea Anghel has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 11803779
Abstract: In an approach for constructing an ensemble model from a set of base learners, a processor performs a plurality of boosting iterations, where: at each boosting iteration of the plurality of boosting iterations, a base learner is selected at random from a set of base learners, according to a sampling probability distribution of the set of base learners, and trained according to a training dataset; and the sampling probability distribution is altered: (i) after selecting a first base learner at a first boosting iteration of the plurality of boosting iterations and (ii) prior to selecting a second base learner at a final boosting iteration of the plurality of boosting iterations. A processor constructs an ensemble model based on base learners selected and trained during the plurality of boosting iterations.
Type: Grant
Filed: February 25, 2020
Date of Patent: October 31, 2023
Assignee: International Business Machines Corporation
Inventors: Thomas Parnell, Andreea Anghel, Nikolas Ioannou, Nikolaos Papandreou, Celestine Mendler-Duenner, Dimitrios Sarigiannis, Charalampos Pozidis
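The boosting loop in this abstract can be sketched in a few lines. This is a minimal illustration, not the patented implementation: `MeanLearner` is an invented toy base learner, and the squared-error residual update and learning rate are assumptions.

```python
import random

class MeanLearner:
    """Toy base learner: predicts the mean of the targets it was fit on."""
    def fit(self, X, y):
        self.mu = sum(y) / len(y)
    def predict(self, X):
        return [self.mu] * len(X)

def boost(X, y, learner_types, probs, n_rounds=50, lr=0.1, alter=None):
    """At each boosting iteration a base-learner type is drawn at random
    according to the sampling distribution `probs`, fit to the current
    residuals, and appended to the ensemble; `alter` may change `probs`
    between the first and the final iteration."""
    ensemble, residuals = [], list(y)
    for t in range(n_rounds):
        h = random.choices(learner_types, weights=probs, k=1)[0]()
        h.fit(X, residuals)
        ensemble.append(h)
        residuals = [r - lr * p for r, p in zip(residuals, h.predict(X))]
        if alter is not None:
            probs = alter(probs, t)   # distribution altered between rounds
    return ensemble

def predict(ensemble, X, lr=0.1):
    """Ensemble prediction: the learning-rate-weighted sum of base learners."""
    out = [0.0] * len(X)
    for h in ensemble:
        out = [o + lr * p for o, p in zip(out, h.predict(X))]
    return out
```

With a single learner type and a fixed distribution the loop reduces to ordinary gradient boosting; the point of the patent is the per-round random choice among heterogeneous base learners.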
-
Publication number: 20230251907
Abstract: The invention is notably directed to a computer-implemented method, which aims at jointly identifying an optimal source of computerized resources and optimizing a configuration of the computerized resources. The method comprises configuring a Best-Arm Identification algorithm, in order to (i) associate arms of the algorithm with respective sources of computerized resources and (ii) connect the arms to one or more optimizers. Each of the optimizers is designed to optimize a configuration of such computerized resources. Next, the method iteratively executes the Best-Arm Identification algorithm to progressively eliminate the sources, with a view to eventually identifying one of the sources as an optimal source with an optimized configuration. Several iterations are accordingly performed. During each iteration, each of the arms is pulled and the rewards earned by pulling the arms are computed. Pulling an arm causes a configuration of the computerized resources of the respectively associated source to be optimized.
Type: Application
Filed: February 7, 2022
Publication date: August 10, 2023
Inventors: Malgorzata Lazuka, Thomas Parnell, Andreea Anghel, Charalampos Pozidis
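The iterate-pull-eliminate loop described above can be sketched as follows. This is a hedged toy version: `RandomOptimizer` is an invented stand-in for the per-arm configuration optimizers, and elimination of the worst mean-reward source per round is one simple Best-Arm Identification policy, not necessarily the one claimed.

```python
import random

class RandomOptimizer:
    """Toy configuration optimizer: proposes random configs from a
    finite space and remembers the best (config, reward) observed."""
    def __init__(self, space):
        self.space, self.best = space, None
    def suggest(self):
        return random.choice(self.space)
    def observe(self, config, reward):
        if self.best is None or reward > self.best[1]:
            self.best = (config, reward)

def best_source(sources, optimizers, pull, rounds=3, pulls_per_round=4):
    """Each iteration pulls every surviving arm (source), computes the
    earned rewards, and eliminates the source with the worst mean reward,
    until one source remains as the optimal source."""
    arms = {s: optimizers[s] for s in sources}
    rewards = {s: [] for s in sources}
    for _ in range(rounds):
        if len(arms) == 1:
            break
        for s, opt in arms.items():
            for _ in range(pulls_per_round):
                cfg = opt.suggest()   # pulling the arm optimizes a config
                r = pull(s, cfg)      # reward earned by the pull
                opt.observe(cfg, r)
                rewards[s].append(r)
        worst = min(arms, key=lambda s: sum(rewards[s]) / len(rewards[s]))
        del arms[worst]
    return max(arms, key=lambda s: sum(rewards[s]) / len(rewards[s]))
```

Because each pull both scores the source and advances its optimizer, the surviving source ends the search with an already-optimized configuration, which is the joint identification the abstract describes.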
-
Patent number: 11663067
Abstract: Embodiments of the invention include a computer-implemented method for detecting anomalies in non-stationary data in a network of computing entities. The method collects non-stationary data in the network and classifies the non-stationary data according to a non-Markovian, stateful classification, based on an inference model. Anomalies can then be detected, based on the classified data. The non-Markovian, stateful process allows anomaly detection even when no a priori knowledge of anomaly signatures or malicious entities exists. Anomalies can be detected in real time (e.g., at speeds of 10-100 Gbps) and the network data variability can be addressed by implementing a detection pipeline to adapt to changes in traffic behavior through online learning and retain memory of past behaviors. A two-stage scheme can be relied upon, which involves a supervised model coupled with an unsupervised model.
Type: Grant
Filed: December 15, 2017
Date of Patent: May 30, 2023
Assignee: International Business Machines Corporation
Inventors: Andreea Anghel, Mitch Gusat, Georgios Kathareios
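The "stateful, non-Markovian" idea — scoring a sample against remembered statistics of past traffic rather than only the previous state — can be illustrated with a minimal sketch. The exponential-moving-average detector below is an assumption for illustration; the patented pipeline uses a trained inference model and a supervised/unsupervised two-stage scheme, neither of which is reproduced here.

```python
class StatefulDetector:
    """Sketch of a stateful (non-Markovian) anomaly detector: keeps
    exponential moving estimates of the mean and variance of a traffic
    feature, so scoring a new sample depends on the remembered history
    of past behavior. Decay and threshold values are illustrative."""
    def __init__(self, decay=0.99, threshold=4.0):
        self.mu, self.var = 0.0, 1.0
        self.decay, self.threshold = decay, threshold
        self.seen = 0
    def update(self, x):
        """Online learning: adapt the remembered statistics to drift."""
        self.seen += 1
        d = self.decay if self.seen > 1 else 0.0
        self.mu = d * self.mu + (1 - d) * x
        self.var = d * self.var + (1 - d) * (x - self.mu) ** 2
    def score(self, x):
        """Deviation of x from remembered behavior, in std deviations."""
        return abs(x - self.mu) / (self.var ** 0.5 + 1e-9)
    def is_anomaly(self, x):
        flagged = self.score(x) > self.threshold
        self.update(x)   # retain memory of this behavior too
        return flagged
```

No anomaly signatures are needed: anything far from the learned baseline is flagged, which mirrors the abstract's claim of detection without a priori knowledge.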
-
Patent number: 11621078
Abstract: The invention is notably directed to a computer-implemented method for normalizing medical images, e.g., whole slide images. This method includes steps performed for each image of a first subset of images of a dataset. Actual quantities are estimated for each image, including actual stain vectors and, possibly, robust maximum stain concentrations (typically hematoxylin and eosin stain vectors and concentrations). The actual quantities estimated are assessed by comparing them to reference data based on reference quantities estimated for one or more images of a second subset of images of the dataset, where the second subset of images differs from the first subset of images. The reference quantities include reference stain vectors. For each image, either the actual quantities or the reference quantities for the dataset are selected as effective quantities, based on an outcome of the previous assessment of the actual quantities. Each image is then normalized.
Type: Grant
Filed: March 18, 2020
Date of Patent: April 4, 2023
Assignee: International Business Machines Corporation
Inventors: Nikolaos Papandreou, Sonali Andani, Andreea Anghel, Milos Stanisavljevic
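The per-image selection step — keep the image's own estimated quantities if they pass the assessment, otherwise fall back to the dataset reference — can be sketched as below. The Euclidean-distance test, the `tol` value, and the example H&E-like vectors are all assumptions for illustration; the actual stain-vector estimation and assessment criteria are not specified here.

```python
import math

def select_effective(actual, reference, tol=0.15):
    """Per-image selection: keep the image's own estimated stain vectors
    as the effective quantities if each lies within `tol` (Euclidean
    distance, illustrative) of the dataset reference vectors; otherwise
    fall back to the reference quantities before normalizing."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    close = all(dist(a, r) <= tol
                for a, r in zip(actual["stain_vectors"],
                                reference["stain_vectors"]))
    return actual if close else reference
```

The effect is robustness: an image whose stain estimation failed (e.g., a slide with almost no tissue) is normalized with dataset-level reference quantities instead of its own unreliable estimates.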
-
Publication number: 20220198281
Abstract: An approach to accelerating inferences based on decision trees, based on accessing one or more decision trees, wherein each decision tree accessed comprises decision tree nodes, including nodes grouped into one or more supersets of nodes designed for joint execution. For each decision tree accessed, the nodes are executed to obtain an outcome for the one or more decision trees, respectively. For each superset of the one or more supersets of each decision tree, the nodes of the superset are jointly executed by loading attributes of the nodes of the superset in a respective cache line of the cache memory and processing said attributes from the respective cache line, until an inference result is returned based on the one or more outcomes.
Type: Application
Filed: December 18, 2020
Publication date: June 23, 2022
Inventors: Jan Van Lunteren, Nikolas Ioannou, Nikolaos Papandreou, Thomas Parnell, Andreea Anghel, Charalampos Pozidis
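The layout idea can be illustrated with a flattened tree: node attributes are packed into consecutive slots so that a group of nodes executed together occupies adjacent memory (one cache line), instead of chasing pointers across the heap. The tuple layout and leaf encoding below are assumptions for illustration, not the patented format.

```python
def predict_flat(nodes, x):
    """Inference over a flattened decision tree. `nodes` is a flat list
    of (feature, threshold, left, right) tuples, laid out so that nodes
    grouped for joint execution sit in adjacent slots: fetching one
    group's attributes touches a single contiguous region. A leaf is
    encoded with feature == -1 and its prediction in the threshold slot."""
    i = 0
    while True:
        feature, threshold, left, right = nodes[i]
        if feature == -1:
            return threshold
        i = left if x[feature] <= threshold else right
```

A depth-2 example: the root and its children occupy slots 0-2, so deciding the first two levels reads one contiguous block rather than three scattered node objects.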
-
Publication number: 20220180211
Abstract: According to one embodiment, a method, computer system, and computer program product for training a cognitive model that involves one or more decision trees as base learners is provided. The present invention may include constructing, by a tree building algorithm, the one or more decision trees, wherein the constructing further comprises associating one or more training examples with one or more leaf nodes of the one or more decision trees and iteratively running a breadth-first search tree builder on one or more of the decision trees to perform one or more tree building operations; and training the cognitive model based on the one or more decision trees.
Type: Application
Filed: December 4, 2020
Publication date: June 9, 2022
Inventors: Nikolas Ioannou, Thomas Parnell, Andreea Anghel, Nikolaos Papandreou, Charalampos Pozidis
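A breadth-first tree builder expands the tree level by level, carrying along the training examples associated with each node of the current level. The sketch below is a toy version under assumed details (exhaustive squared-error split search, dict-based nodes); the patented builder is not reproduced.

```python
def sse(y, idx):
    """Sum of squared errors of the targets at indices `idx` around their mean."""
    mu = sum(y[i] for i in idx) / len(idx)
    return sum((y[i] - mu) ** 2 for i in idx)

def best_split(X, y, idx):
    """Exhaustive split search minimizing squared error (toy version)."""
    best = None
    for f in range(len(X[0])):
        for t in sorted({X[i][f] for i in idx}):
            left = [i for i in idx if X[i][f] <= t]
            right = [i for i in idx if X[i][f] > t]
            if not left or not right:
                continue
            err = sse(y, left) + sse(y, right)
            if best is None or err < best[0]:
                best = (err, f, t, left, right)
    return None if best is None else best[1:]

def build_bfs(X, y, max_depth=2):
    """Breadth-first construction: the frontier holds all nodes of the
    current depth, each with the indices of its associated training
    examples; one pass expands a whole level at a time."""
    root = {"idx": list(range(len(X)))}
    frontier = [root]
    for _ in range(max_depth):
        nxt = []
        for node in frontier:
            split = best_split(X, y, node["idx"])
            if split is None:
                continue
            f, t, li, ri = split
            node["feature"], node["threshold"] = f, t
            node["left"], node["right"] = {"idx": li}, {"idx": ri}
            nxt += [node["left"], node["right"]]
        frontier = nxt
    return root   # nodes without a "feature" key are leaves
```

Processing a whole level in one pass is what lets a breadth-first builder batch its data accesses, in contrast to the node-at-a-time recursion of a depth-first builder.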
-
Patent number: 11347970
Abstract: Optimizing a network comprising a core computing system (CCS) and a set of edge computing devices (ECDs), wherein each of the ECDs locally performs computations based on a trained machine learning (ML) model. A plurality of ML models are continually trained at the CCS, concurrently, based on data collected from the ECDs. One or more states of the network and/or components thereof are monitored. The monitored states are relied upon to decide (when) to change a trained ML model as currently used by any of the ECDs to perform said computations. It may be decided to change the model used by a given one of the ECDs to perform ML-based computations. One of the models as trained at the CCS is selected (based on the monitored states) and corresponding parameters are sent to this ECD. The latter can resume computations according to a trained model.
Type: Grant
Filed: April 30, 2018
Date of Patent: May 31, 2022
Assignee: International Business Machines Corporation
Inventors: Andreea Anghel, Georgios Kathareios, Mitch Gusat
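The CCS-side decision step can be sketched as below. Everything here is an illustrative assumption: `score` stands in for whatever policy maps monitored states to model fitness, and assigning into a dict stands in for sending model parameters over the network to the ECD.

```python
def maybe_swap(ecd, models, states, score):
    """Sketch of the core-side decision: given the monitored states,
    pick the continually trained model that scores best for this ECD,
    and push its parameters only if it differs from the model in use."""
    best = max(models, key=lambda m: score(m, states))
    if best != ecd.get("model"):
        ecd["model"] = best   # stands in for sending parameters to the ECD
        return True           # the ECD resumes with the newly selected model
    return False
```

Keeping the comparison against the currently deployed model avoids redundant parameter transfers when the monitored states have not shifted the ranking.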
-
Patent number: 11238295
Abstract: Processing a digital image in a distributed computing environment comprising a communications network interconnecting two or more computing nodes. A segmentation of the digital image into two or more image segments is determined. For each of the image segments, a number of non-background pixels comprised by the image segment is determined. An assignment of each of the image segments to one of the computing nodes is determined. The determination of the assignment may include balancing, based on the number of non-background pixels determined for each of the image segments, the workload of the assigned computing nodes responsive to processing the image segments. Each of the assigned computing nodes may be caused to process the image segments assigned to the computing node.
Type: Grant
Filed: March 19, 2019
Date of Patent: February 1, 2022
Assignee: International Business Machines Corporation
Inventors: Nikolaos Papandreou, Andreea Anghel, Milos Stanisavljevic, Charalampos Pozidis
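Balancing by non-background pixel count is a load-balancing problem; one common greedy policy (largest segment first, to the least-loaded node) is sketched below as an illustration — the patent does not necessarily claim this particular policy.

```python
def assign_segments(pixel_counts, n_nodes):
    """Greedy balancing sketch: assign each image segment, largest
    first, to the computing node with the least accumulated
    non-background-pixel workload. Returns the segment-to-node
    assignment and the per-node loads."""
    loads = [0] * n_nodes
    assignment = {}
    for seg in sorted(pixel_counts, key=pixel_counts.get, reverse=True):
        node = loads.index(min(loads))   # currently least-loaded node
        assignment[seg] = node
        loads[node] += pixel_counts[seg]
    return assignment, loads
```

Counting only non-background pixels matters for whole-slide images, where large empty regions cost almost nothing to process: balancing raw segment areas would leave some nodes idle while others churn through tissue-dense segments.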
-
Publication number: 20210334709
Abstract: The present invention is notably directed to a computer-implemented method of training a cognitive model. The cognitive model includes decision trees as base learners. The method is performed using processing means to which a given cache memory is connected, so as to train the cognitive model based on training examples of a training dataset. The cognitive model is trained by running a hybrid tree building algorithm, so as to construct the decision trees and thereby associate the training examples to leaf nodes of the constructed decision trees, respectively. The hybrid tree building algorithm involves a first routine and a second routine. Each routine is designed to access the cache memory upon execution. The first routine involves a breadth-first search tree builder, while the second routine involves a depth-first search tree builder.
Type: Application
Filed: April 27, 2020
Publication date: October 28, 2021
Inventors: Nikolas Ioannou, Andreea Anghel, Thomas Parnell, Nikolaos Papandreou, Charalampos Pozidis
-
Publication number: 20210295994
Abstract: The invention is notably directed to a computer-implemented method for normalizing medical images, e.g., whole slide images. This method includes steps performed for each image of a first subset of images of a dataset. Actual quantities are estimated for each image, including actual stain vectors and, possibly, robust maximum stain concentrations (typically hematoxylin and eosin stain vectors and concentrations). The actual quantities estimated are assessed by comparing them to reference data based on reference quantities estimated for one or more images of a second subset of images of the dataset, where the second subset of images differs from the first subset of images. The reference quantities include reference stain vectors. For each image, either the actual quantities or the reference quantities for the dataset are selected as effective quantities, based on an outcome of the previous assessment of the actual quantities. Each image is then normalized.
Type: Application
Filed: March 18, 2020
Publication date: September 23, 2021
Inventors: Nikolaos Papandreou, Sonali Andani, Andreea Anghel, Milos Stanisavljevic
-
Publication number: 20210264320
Abstract: In an approach for constructing an ensemble model from a set of base learners, a processor performs a plurality of boosting iterations, where: at each boosting iteration of the plurality of boosting iterations, a base learner is selected at random from a set of base learners, according to a sampling probability distribution of the set of base learners, and trained according to a training dataset; and the sampling probability distribution is altered: (i) after selecting a first base learner at a first boosting iteration of the plurality of boosting iterations and (ii) prior to selecting a second base learner at a final boosting iteration of the plurality of boosting iterations. A processor constructs an ensemble model based on base learners selected and trained during the plurality of boosting iterations.
Type: Application
Filed: February 25, 2020
Publication date: August 26, 2021
Inventors: Thomas Parnell, Andreea Anghel, Nikolas Ioannou, Nikolaos Papandreou, Celestine Mendler-Duenner, Dimitrios Sarigiannis, Charalampos Pozidis
-
Patent number: 10924504
Abstract: Distinct sets of non-stationary data seen on a switch in data communication with one or more computerized units in a network are mirrored via two switch ports, which include a first port and a second port. A dual analysis is performed while mirroring said distinct sets of data. First data obtained from data mirrored at the first port are analyzed (e.g., using a trained machine learning model) and, based on the first data analyzed, the switch is reconfigured for the second port to mirror second data, which are selected from non-stationary data as seen on the switch (e.g., data received and/or transmitted by the switch). The second data mirrored at the second port is analyzed (e.g., using a different analysis scheme, suited for the selected data).
Type: Grant
Filed: July 6, 2018
Date of Patent: February 16, 2021
Assignee: International Business Machines Corporation
Inventors: Mircea R. Gusat, Andreea Anghel, Georgios Kathareios, Akos Mate
-
Patent number: 10902348
Abstract: Embodiments of the invention include a computer-implemented method of processor branch prediction. This method aims at training a machine-learning model of processor branch behavior while a processing unit executes computer instructions. Such instructions include branch instructions, load instructions and store instructions. The load instructions and the store instructions cause a control unit of the processing unit to load data from a memory into processor registers and store data from the processor registers to the memory, respectively. Basically, the training of the model involves, for each of N branch instructions (N>2) encountered whilst the processing unit executes said branch instructions: identifying a next branch instruction; and feeding the machine-learning model with carefully chosen inputs.
Type: Grant
Filed: May 19, 2017
Date of Patent: January 26, 2021
Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
Inventors: Peter Altevogt, Andreea Anghel, Gero Dittmann, Cedric Lichtenau, Thomas Pflueger
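A classic machine-learning branch predictor is the perceptron predictor, sketched below as an illustration of "feeding the model with chosen inputs" — here the inputs are the last h global branch outcomes, which is an assumption; the patent's inputs are richer and not reproduced.

```python
class PerceptronPredictor:
    """Sketch of an ML branch predictor: one perceptron per branch PC
    over the last `h` global outcomes (+1 taken / -1 not taken).
    Training happens online, while instructions execute."""
    def __init__(self, h=8):
        self.h = h
        self.weights = {}          # branch PC -> weight vector (bias first)
        self.history = [1] * h     # global outcome history
    def predict(self, pc):
        w = self.weights.setdefault(pc, [0] * (self.h + 1))
        s = w[0] + sum(wi * hi for wi, hi in zip(w[1:], self.history))
        return s >= 0              # True => predict taken
    def train(self, pc, taken):
        """Update on the actual outcome, then shift it into the history."""
        w = self.weights[pc]
        t = 1 if taken else -1
        pred = w[0] + sum(wi * hi for wi, hi in zip(w[1:], self.history))
        if (pred >= 0) != taken or abs(pred) < self.h:
            w[0] += t
            for j, hi in enumerate(self.history):
                w[j + 1] += t * hi
        self.history = self.history[1:] + [t]
```

Patterns that defeat a saturating counter, such as a branch that strictly alternates, are linearly separable in the history bits and are learned by the perceptron within a few updates.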
-
Patent number: 10896386
Abstract: Embodiments of the invention include a computer-implemented method of processor branch prediction. This method aims at training a machine-learning model of processor branch behavior while a processing unit executes computer instructions. Such instructions include branch instructions, load instructions and store instructions. The load instructions and the store instructions cause a control unit of the processing unit to load data from a memory into processor registers and store data from the processor registers to the memory, respectively. Basically, the training of the model involves, for each of N branch instructions (N>2) encountered whilst the processing unit executes said branch instructions: identifying a next branch instruction; and feeding the machine-learning model with carefully chosen inputs.
Type: Grant
Filed: November 3, 2017
Date of Patent: January 19, 2021
Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
Inventors: Peter Altevogt, Andreea Anghel, Gero Dittmann, Cedric Lichtenau, Thomas Pflueger
-
Publication number: 20200302203
Abstract: Processing a digital image in a distributed computing environment comprising a communications network interconnecting two or more computing nodes. A segmentation of the digital image into two or more image segments is determined. For each of the image segments, a number of non-background pixels comprised by the image segment is determined. An assignment of each of the image segments to one of the computing nodes is determined. The determination of the assignment may include balancing, based on the number of non-background pixels determined for each of the image segments, the workload of the assigned computing nodes responsive to processing the image segments. Each of the assigned computing nodes may be caused to process the image segments assigned to the computing node.
Type: Application
Filed: March 19, 2019
Publication date: September 24, 2020
Inventors: Nikolaos Papandreou, Andreea Anghel, Milos Stanisavljevic, Charalampos Pozidis
-
Patent number: 10754773
Abstract: A method for dynamically selecting a size of a memory access may be provided. The method comprises accessing blocks having a variable number of consecutive cache lines, maintaining a vector with entries of past utilizations for each block size, and adapting said block size before a next access to the blocks.
Type: Grant
Filed: October 11, 2017
Date of Patent: August 25, 2020
Assignee: International Business Machines Corporation
Inventors: Andreea Anghel, Cedric Lichtenau, Gero Dittmann, Peter Altevogt, Thomas Pflueger
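The utilization-vector idea can be sketched as follows. The decayed-average scoring and the candidate sizes are illustrative assumptions; the patent only requires that past utilizations per block size are maintained and consulted before the next access.

```python
class BlockSizeSelector:
    """Sketch: maintain a vector of past-utilization scores, one entry
    per candidate block size (number of consecutive cache lines fetched
    per access), and adapt the block size before the next access by
    picking the size with the best past utilization."""
    def __init__(self, sizes=(1, 2, 4, 8), decay=0.9):
        self.util = {s: 0.0 for s in sizes}   # the maintained vector
        self.decay = decay
        self.current = sizes[0]
    def record(self, size, lines_used):
        """Utilization = fraction of the fetched lines actually used."""
        u = lines_used / size
        self.util[size] = self.decay * self.util[size] + (1 - self.decay) * u
    def next_size(self):
        """Adapt the block size before the next access."""
        self.current = max(self.util, key=self.util.get)
        return self.current
```

Large blocks amortize fetch overhead only when most of their lines are used; tracking utilization per size lets the selector shrink the access when fetched lines go to waste.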
-
Publication number: 20200014712
Abstract: Distinct sets of non-stationary data seen on a switch in data communication with one or more computerized units in a network are mirrored via two switch ports, which include a first port and a second port. A dual analysis is performed while mirroring said distinct sets of data. First data obtained from data mirrored at the first port are analyzed (e.g., using a trained machine learning model) and, based on the first data analyzed, the switch is reconfigured for the second port to mirror second data, which are selected from non-stationary data as seen on the switch (e.g., data received and/or transmitted by the switch). The second data mirrored at the second port is analyzed (e.g., using a different analysis scheme, suited for the selected data).
Type: Application
Filed: July 6, 2018
Publication date: January 9, 2020
Inventors: Mitch Gusat, Andreea Anghel, Georgios Kathareios, Akos Mate
-
Publication number: 20190332895
Abstract: Optimizing a network comprising a core computing system (CCS) and a set of edge computing devices (ECDs), wherein each of the ECDs locally performs computations based on a trained machine learning (ML) model. A plurality of ML models are continually trained at the CCS, concurrently, based on data collected from the ECDs. One or more states of the network and/or components thereof are monitored. The monitored states are relied upon to decide (when) to change a trained ML model as currently used by any of the ECDs to perform said computations. It may be decided to change the model used by a given one of the ECDs to perform ML-based computations. One of the models as trained at the CCS is selected (based on the monitored states) and corresponding parameters are sent to this ECD. The latter can resume computations according to a trained model.
Type: Application
Filed: April 30, 2018
Publication date: October 31, 2019
Inventors: Andreea Anghel, Georgios Kathareios, Mitch Gusat
-
Publication number: 20190332525
Abstract: A method of prefetching data is provided including monitoring sequences of memory addresses of data being accessed by a system, whereby sequences of m+1 memory addresses each are continually identified; and for each identified sequence: converting, upon identifying said each sequence, memory addresses of said each sequence into m relative addresses, whereby each of the m relative addresses is relative to a previous memory address in said each sequence, so as to obtain an auxiliary sequence of m relative addresses; upon converting said memory addresses, feeding said auxiliary sequence of m relative addresses as input to a trained machine learning model for it to predict p relative addresses of next memory accesses by the system, where p ≥ 1; and prefetching data at memory locations associated with one or more memory addresses that respectively correspond to one or more of the p relative addresses predicted.
Type: Application
Filed: April 27, 2018
Publication date: October 31, 2019
Inventors: Andreea Anghel, Peter Altevogt, Gero Dittmann, Cedric Lichtenau
-
Patent number: 10437718
Abstract: A method of prefetching data is provided including monitoring sequences of memory addresses of data being accessed by a system, whereby sequences of m+1 memory addresses each are continually identified; and for each identified sequence: converting, upon identifying said each sequence, memory addresses of said each sequence into m relative addresses, whereby each of the m relative addresses is relative to a previous memory address in said each sequence, so as to obtain an auxiliary sequence of m relative addresses; upon converting said memory addresses, feeding said auxiliary sequence of m relative addresses as input to a trained machine learning model for it to predict p relative addresses of next memory accesses by the system, where p ≥ 1; and prefetching data at memory locations associated with one or more memory addresses that respectively correspond to one or more of the p relative addresses predicted.
Type: Grant
Filed: April 27, 2018
Date of Patent: October 8, 2019
Assignee: International Business Machines Corporation
Inventors: Andreea Anghel, Peter Altevogt, Gero Dittmann, Cedric Lichtenau
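The address-to-delta conversion and the prediction loop can be sketched as below. A frequency table over delta contexts stands in for the trained machine learning model, purely for illustration; the conversion of m+1 addresses into m relative addresses and the p-step prefetch follow the abstract.

```python
def to_deltas(addresses):
    """Convert m+1 absolute memory addresses into m relative addresses,
    each relative to the previous address in the sequence."""
    return [b - a for a, b in zip(addresses, addresses[1:])]

class DeltaPredictor:
    """Stand-in for the trained model: counts which delta follows each
    context of m deltas and predicts the most frequent continuation."""
    def __init__(self, m=2):
        self.m, self.table = m, {}
    def observe(self, addresses):
        """Train on a monitored address stream."""
        d = to_deltas(addresses)
        for i in range(len(d) - self.m):
            ctx, nxt = tuple(d[i:i + self.m]), d[i + self.m]
            self.table.setdefault(ctx, {})
            self.table[ctx][nxt] = self.table[ctx].get(nxt, 0) + 1
    def prefetch(self, addresses, p=1):
        """Predict p relative addresses for an m+1-address sequence and
        return the absolute addresses to prefetch."""
        d = to_deltas(addresses)
        addr, out = addresses[-1], []
        for _ in range(p):
            ctx = tuple(d[-self.m:])
            if ctx not in self.table:
                break
            nxt = max(self.table[ctx], key=self.table[ctx].get)
            addr += nxt
            out.append(addr)
            d.append(nxt)   # predicted delta feeds the next context
        return out
```

Working in relative addresses is what makes the scheme generalize: a 64-byte stride learned at one region of memory predicts the same stride anywhere else, which absolute addresses could not.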