Patents Examined by Imad Kassim
-
Patent number: 11972344
Abstract: A method, system, and computer program product, including generating, using a linear probe, confidence scores from flattened intermediate representations, and theoretically justified weighting of samples during training of a simple model using the confidence scores of the intermediate representations.
Type: Grant
Filed: November 28, 2018
Date of Patent: April 30, 2024
Assignee: International Business Machines Corporation
Inventors: Amit Dhurandhar, Karthikeyan Shanmugam, Ronny Luss, Peder Andreas Olsen
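As an illustrative sketch only (the probe weights, learning rate, and linear "simple model" below are assumptions, not the patented method), confidence scores from a linear probe on intermediate representations can be used to weight samples during training:

```python
import math


def softmax(z):
    m = max(z)
    e = [math.exp(v - m) for v in z]
    s = sum(e)
    return [v / s for v in e]


def probe_confidence(probe_weights, flat_rep):
    """Confidence = max softmax probability of a linear probe applied
    to a flattened intermediate representation."""
    logits = [sum(w * x for w, x in zip(row, flat_rep)) for row in probe_weights]
    return max(softmax(logits))


def train_weighted(simple_w, data, confidences, lr=0.1):
    """One epoch of confidence-weighted SGD on a linear 'simple model':
    each sample's gradient is scaled by its probe confidence."""
    for (x, y), c in zip(data, confidences):
        pred = sum(w * v for w, v in zip(simple_w, x))
        err = pred - y
        simple_w = [w - lr * c * err * v for w, v in zip(simple_w, x)]
    return simple_w
```

High-confidence samples pull the simple model harder; low-confidence ones are down-weighted rather than discarded.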
-
Patent number: 11954568
Abstract: The disclosed technology relates to identifying causes of an observed outcome. A system is configured to receive an indication of a user experience problem, wherein the user experience problem is associated with observed operations data including an observed outcome. The system generates, based on the observed operations data, a predicted outcome according to a model, determines that the observed outcome is within range of the predicted outcome, and identifies a set of candidate causes of the user experience problem when the observed outcome is within range of the predicted outcome.
Type: Grant
Filed: September 21, 2021
Date of Patent: April 9, 2024
Assignee: Cisco Technology, Inc.
Inventors: Harish Doddala, Tian Bu, Tej Redkar
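A hedged sketch of the gating logic described above; the tolerance value, field names (`value`, `baseline`), and the deviation-ranking heuristic are illustrative assumptions, not the patented implementation:

```python
def within_range(observed, predicted, tolerance=0.1):
    """True if the observed outcome falls within +/- tolerance of the prediction."""
    return abs(observed - predicted) <= tolerance * abs(predicted)


def candidate_causes(operations_data, observed, predicted, tolerance=0.1):
    """Return candidate causes only when the model 'explains' the observation,
    ranked by how far each operational metric deviates from its baseline."""
    if not within_range(observed, predicted, tolerance):
        return []  # model does not explain the outcome; causes lie elsewhere
    return sorted(operations_data,
                  key=lambda m: -abs(m["value"] - m["baseline"]))
```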
-
Patent number: 11948083
Abstract: An exemplary embodiment provides an autoencoder which is explainable. An exemplary autoencoder may explain the degree to which each feature of the input contributed to the output of the system, which may be a compressed data representation. An exemplary embodiment may be used for classification, such as anomaly detection, as well as other scenarios where an autoencoder's output is input to another machine learning system or where an autoencoder is a component in an end-to-end deep learning architecture. An exemplary embodiment provides an explainable generative adversarial network that adds explainable generation, simulation, and discrimination capabilities. The underlying architecture of an exemplary embodiment may be based on an explainable or interpretable neural network, allowing the underlying architecture to be a fully explainable white-box machine learning system.
Type: Grant
Filed: November 16, 2021
Date of Patent: April 2, 2024
Assignee: UMNAI Limited
Inventors: Angelo Dalli, Mauro Pirrone, Matthew Grech
-
Patent number: 11928584
Abstract: Methods, systems, and devices for distributed hyperparameter tuning and load balancing are described. A device (e.g., an application server) may generate a first set of combinations of hyperparameter values associated with training a mathematical model. The mathematical model may include a machine learning model, an optimization model, or any combination. The device may identify a subset of combinations from the first set of combinations that are associated with a computational runtime that exceeds a first threshold and may distribute the subset of combinations across a set of machines. The device may then test each of the first set of combinations in a parallel processing operation to generate a first set of validation error values and may test a second set of combinations of hyperparameter values using an objective function that is based on the first set of validation error values.
Type: Grant
Filed: January 31, 2020
Date of Patent: March 12, 2024
Assignee: Salesforce, Inc.
Inventors: Bradford William Powley, Noah Burbank, Rowan Cassius
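The combine/partition/distribute steps might be sketched as follows; the function names, the runtime estimator, and the round-robin placement policy are assumptions for illustration:

```python
import itertools


def combinations(grid):
    """Expand a hyperparameter grid into all value combinations."""
    keys = sorted(grid)
    return [dict(zip(keys, vals))
            for vals in itertools.product(*(grid[k] for k in keys))]


def split_by_runtime(combos, estimate, threshold):
    """Separate combinations whose estimated runtime exceeds the threshold."""
    long_ = [c for c in combos if estimate(c) > threshold]
    short = [c for c in combos if estimate(c) <= threshold]
    return long_, short


def round_robin(combos, machines):
    """Distribute expensive combinations across machines for parallel testing."""
    assignment = {m: [] for m in machines}
    for i, c in enumerate(combos):
        assignment[machines[i % len(machines)]].append(c)
    return assignment
```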
-
Patent number: 11915159
Abstract: Systems, methods, and computer program products for estimating a Bayesian hierarchical regression model using parallelized and distributed Gibbs sampling are described. The techniques can be implemented to solve use cases where there is a response variable, e.g., number of store visits or web page visits, which is a variable of interest, and multiple explanatory variables, e.g., locations, temperatures, or prices, that may predict the response variable. The disclosed techniques build a model that explains and quantifies effects of the explanatory variables on the response variable on a distributed system. For instance, the disclosed techniques can build a model which has the capability to estimate that an X-degree increase in temperature at a certain time of year predicts a Y-percent increase in store visits. This estimation process is performed in parallel on multiple nodes of the distributed system.
Type: Grant
Filed: May 1, 2017
Date of Patent: February 27, 2024
Assignee: Pivotal Software, Inc.
Inventor: Woo Jae Jung
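The patent targets a hierarchical regression; as a much smaller stand-in, a toy Gibbs sampler for a standard bivariate normal (not the patented model) shows the alternating conditional draws that any Gibbs-based estimator builds on. For correlation rho, each conditional is N(rho * other, 1 - rho^2):

```python
import math
import random


def gibbs_bivariate_normal(rho, n_samples, burn_in=500, seed=0):
    """Toy Gibbs sampler for a standard bivariate normal with correlation rho.
    Alternates draws from the exact conditionals x|y and y|x."""
    rng = random.Random(seed)
    x, y = 0.0, 0.0
    sd = math.sqrt(1.0 - rho * rho)
    samples = []
    for i in range(n_samples + burn_in):
        x = rng.gauss(rho * y, sd)   # x | y ~ N(rho*y, 1 - rho^2)
        y = rng.gauss(rho * x, sd)   # y | x ~ N(rho*x, 1 - rho^2)
        if i >= burn_in:             # discard warm-up iterations
            samples.append((x, y))
    return samples
```

In the distributed setting the patent describes, conditional draws like these are what gets parallelized across nodes.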
-
Patent number: 11886987
Abstract: A multiply-accumulate method and architecture are disclosed. The architecture includes a plurality of networks of non-volatile memory elements arranged in tiled columns. Logic digitally modulates the equivalent conductance of individual networks among the plurality of networks to map the equivalent conductance of each individual network to a single weight within the neural network. A first partial selection of weights within the neural network is mapped into the equivalent conductances of the networks in the columns to enable the computation of multiply-and-accumulate operations by mixed-signal computation. The logic updates the mappings to select a second partial selection of weights to compute additional multiply-and-accumulate operations and repeats the mapping and computation operations until all computations for the neural network are completed.
Type: Grant
Filed: June 25, 2019
Date of Patent: January 30, 2024
Assignee: Arm Limited
Inventors: Shidhartha Das, Matthew Mattina, Glen Arnold Rosendale, Fernando Garcia Redondo
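A digital-modulation sketch under simplifying assumptions (binary unit-conductance devices wired in parallel, so conductances add; real non-volatile memory networks are analog and noisy, and this is not the patented circuit):

```python
def equivalent_conductance(bits, g_unit=1.0):
    """Parallel network of binary memristive devices: conductances add."""
    return sum(bits) * g_unit


def map_weight(weight, n_devices, g_unit=1.0):
    """Digitally choose how many devices to switch on so the network's
    equivalent conductance approximates the target weight."""
    n_on = max(0, min(n_devices, round(weight / g_unit)))
    return [1] * n_on + [0] * (n_devices - n_on)


def column_mac(inputs, weights, n_devices=8):
    """One tiled column: accumulate input * mapped-conductance products,
    mimicking the mixed-signal multiply-and-accumulate."""
    total = 0.0
    for v, w in zip(inputs, weights):
        g = equivalent_conductance(map_weight(w, n_devices))
        total += v * g
    return total
```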
-
Patent number: 11880767
Abstract: Embodiments of the present invention provide the use of a conditional Generative Adversarial Network (cGAN) to simultaneously correct and downscale (super-resolve) global ensemble weather or climate forecasts. Specifically, a generator deep neural network (G-DNN) in the cGAN comprises a corrector DNN (C-DNN) followed by a super-resolver DNN (SR-DNN). The C-DNN bias-corrects coarse, global meteorological forecasts, taking into account other relevant contextual meteorological fields. The SR-DNN downscales bias-corrected C-DNN output into G-DNN output at a higher target spatial resolution. The cGAN is trained in three stages: C-DNN training, SR-DNN training, and overall GAN training, each using separate loss functions. Embodiments of the present invention significantly outperform an interpolation baseline, and approach the performance of operational regional high-resolution forecast models across an array of established probabilistic metrics.
Type: Grant
Filed: February 21, 2022
Date of Patent: January 23, 2024
Assignee: ClimateAI, Inc.
Inventors: Ilan Shaun Posel Price, Stephan Rasp
-
Patent number: 11810340
Abstract: A system includes a determination component that determines output for successively larger neural networks of a set, and a consensus component that determines consensus between a first neural network and a second neural network of the set. A linear chain of increasingly complex neural networks trained on progressively larger inputs is utilized (increased complexity is generally representative of increased accuracy). Outputs of progressively larger networks are computed until a consensus point is reached, where two or more successive large networks yield the same inference output. At such a point of consensus, the larger neural network of the set reaching consensus can be deemed appropriately sized (or of sufficient complexity) for the classification task at hand.
Type: Grant
Filed: November 29, 2017
Date of Patent: November 7, 2023
Assignee: International Business Machines Corporation
Inventors: Pradip Bose, Alper Buyuktosunoglu, Schuyler Eldridge, Karthik V. Swaminathan, Swagath Venkataramani
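The consensus loop lends itself to a direct sketch; treating "same inference output" as exact equality of labels, and falling back to the largest model when no consensus occurs, are assumptions of this illustration:

```python
def consensus_inference(models, x):
    """Run increasingly complex models (ordered small to large) until two
    successive ones agree. Returns (output, index of the model deemed
    sufficiently complex for the task)."""
    prev = None
    for i, model in enumerate(models):
        out = model(x)
        if prev is not None and out == prev:
            return out, i  # consensus: this model is 'appropriately sized'
        prev = out
    return prev, len(models) - 1  # no consensus: use the largest model
```

In practice this lets cheap models answer easy inputs, invoking the expensive ones only when smaller networks disagree.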
-
Patent number: 11783173
Abstract: A processing unit can train a model as a joint multi-domain recurrent neural network (JRNN), such as a bi-directional recurrent neural network (bRNN) and/or a recurrent neural network with long-short term memory (RNN-LSTM), for spoken language understanding (SLU). The processing unit can use the trained model to, e.g., jointly model slot filling, intent determination, and domain classification. The joint multi-domain model described herein can estimate a complete semantic frame per query, and the joint multi-domain model enables multi-task deep learning leveraging the data from multiple domains. The joint multi-domain recurrent neural network (JRNN) can leverage semantic intents (such as finding or identifying, e.g., a domain-specific goal) and slots (such as dates, times, locations, subjects, etc.) across multiple domains.
Type: Grant
Filed: August 4, 2016
Date of Patent: October 10, 2023
Assignee: Microsoft Technology Licensing, LLC
Inventors: Dilek Z Hakkani-Tur, Asli Celikyilmaz, Yun-Nung Chen, Li Deng, Jianfeng Gao, Gokhan Tur, Ye-Yi Wang
-
Patent number: 11783046
Abstract: Anomaly detection in computing environments is disclosed herein. An example method includes receiving an unstructured input stream of data instances from the computing environment, the unstructured input stream being time stamped; categorizing the data instances of the unstructured input stream of data instances, the data instances comprising at least one principal value and a set of categorical attributes determined through machine learning; generating anomaly scores for each of the data instances collected over a period of time; and detecting a change in a categorical attribute that is indicative of an anomaly.
Type: Grant
Filed: December 27, 2017
Date of Patent: October 10, 2023
Assignee: Elasticsearch B.V.
Inventors: Stephen Dodson, Thomas Veasey, David Mark Roberts
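One simple way to realize per-instance anomaly scores over a period of time is a rolling z-score; the window size and the scoring rule here are assumptions for illustration, not the patented scoring method:

```python
import statistics


def anomaly_scores(values, window=20):
    """Score each value in a time-ordered stream by its z-score against
    the preceding window of values."""
    scores = []
    for i, v in enumerate(values):
        hist = values[max(0, i - window):i]
        if len(hist) < 2:
            scores.append(0.0)  # not enough history yet
            continue
        mu = statistics.fmean(hist)
        sd = statistics.pstdev(hist) or 1e-9  # guard against zero variance
        scores.append(abs(v - mu) / sd)
    return scores
```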
-
Patent number: 11763150
Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for balanced-weight sparse convolution processing. An exemplary method comprises: obtaining an input tensor and a plurality of filters at a layer within a neural network; segmenting the input tensor into a plurality of sub-tensors; dividing a channel dimension of each of the plurality of filters into a plurality of channel groups; pruning each of the plurality of filters so that each of the plurality of channel groups of each filter comprises a same number of non-zero weights; segmenting each of the plurality of filters into a plurality of sub-filters according to the plurality of channel groups; and assigning the plurality of sub-tensors and the plurality of sub-filters to a plurality of processors for parallel convolution processing.
Type: Grant
Filed: August 2, 2021
Date of Patent: September 19, 2023
Assignee: Moffett International Co., Limited
Inventors: Zhibin Xiao, Enxu Yan, Wei Wang, Yong Lu
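The balanced pruning step might look like this on a single flattened filter; the equal group sizes and the largest-magnitude keep criterion are assumptions of this sketch:

```python
def prune_balanced(filter_weights, n_groups, keep_per_group):
    """Split a filter's channel dimension into equal groups and keep the same
    number of largest-magnitude weights in every group (rest set to zero),
    so each group ends up with identical non-zero counts."""
    size = len(filter_weights) // n_groups
    pruned = list(filter_weights)
    for g in range(n_groups):
        idx = range(g * size, (g + 1) * size)
        keep = sorted(idx, key=lambda i: abs(pruned[i]), reverse=True)[:keep_per_group]
        for i in idx:
            if i not in keep:
                pruned[i] = 0.0
    return pruned
```

Equal non-zero counts per group are what make the subsequent parallel assignment load-balanced: every processor receives the same amount of work.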
-
Patent number: 11741354
Abstract: A system includes a processor for performing one or more autonomous driving or assisted driving tasks based on a neural network. The neural network includes a base portion for performing feature extraction simultaneously for a plurality of tasks on a single set of input data. The neural network includes a plurality of subtask portions for performing the plurality of tasks based on feature extraction output from the base portion. Each of the plurality of subtask portions comprises nodes or layers of a neural network trained on different sets of training data, and the base portion comprises nodes or layers of a neural network trained using each of the different sets of training data, constrained by elastic weight consolidation to limit the base portion from forgetting a previously learned task.
Type: Grant
Filed: August 25, 2017
Date of Patent: August 29, 2023
Assignee: Ford Global Technologies, LLC
Inventors: Guy Hotson, Vidya Nariyambut Murali, Gintaras Vincent Puskorius
-
Patent number: 11741352
Abstract: A resistive processing unit (RPU) that includes a pair of transistors connected in series providing an update function for a weight of a training methodology to the RPU, and a read transistor for reading the weight of the training methodology. In some embodiments, the resistive processing unit (RPU) further includes a capacitor connecting a gate of the read transistor to the pair of transistors providing the update function for the resistive processing unit (RPU). The capacitor stores said weight of the training methodology for the RPU.
Type: Grant
Filed: August 22, 2016
Date of Patent: August 29, 2023
Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
Inventors: Tayfun Gokmen, Seyoung Kim, Dennis M. Newns, Yurii A. Vlasov
-
Patent number: 11734591
Abstract: Certain aspects involve optimizing neural networks or other models for assessing risks and generating explanatory data regarding predictor variables used in the model. In one example, a system identifies predictor variables. The system generates a neural network for determining a relationship between each predictor variable and a risk indicator. The system performs a factor analysis on the predictor variables to determine common factors. The system iteratively adjusts the neural network so that (i) a monotonic relationship exists between each common factor and the risk indicator and (ii) a respective variance inflation factor for each common factor is sufficiently low. Each variance inflation factor indicates multicollinearity among the common factors. The adjusted neural network can be used to generate explanatory data indicating relationships between (i) changes in the risk indicator and (ii) changes in at least some common factors.
Type: Grant
Filed: December 2, 2019
Date of Patent: August 22, 2023
Assignee: EQUIFAX INC.
Inventors: Matthew Turner, Michael McBurnett, Yafei Zhang
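For intuition on the variance inflation factor check: with only two common factors, VIF reduces to 1 / (1 - r^2), where r is their Pearson correlation. A sketch of that special case (the general case regresses each factor on all the others):

```python
def pearson_r(xs, ys):
    """Pearson correlation of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5


def vif_two_factor(f1, f2):
    """Variance inflation factor for the two-factor case: 1 / (1 - r^2).
    A VIF near 1 means little multicollinearity; large values flag trouble."""
    r = pearson_r(f1, f2)
    return 1.0 / (1.0 - r * r)
```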
-
Patent number: 11714999
Abstract: Neuromorphic methods, systems and devices are provided. The embodiment may include a neuromorphic device which may comprise a crossbar array structure and an analog circuit. The crossbar array structure may include N input lines and M output lines interconnected at junctions via N×M electronic devices, each of which, in preferred embodiments, includes a memristive device. The input lines may comprise N1 first input lines and N2 second input lines. The first input lines may be connected to the M output lines via N1×M first devices of said electronic devices. Similarly, the second input lines may be connected to the M output lines via N2×M second devices of said electronic devices. The analog circuit may be configured to program the electronic devices so that the first devices store synaptic weights and the second devices store neuronal states.
Type: Grant
Filed: November 15, 2019
Date of Patent: August 1, 2023
Assignee: International Business Machines Corporation
Inventors: Thomas Bohnstingl, Angeliki Pantazi, Evangelos Stavros Eleftheriou
-
Patent number: 11698930
Abstract: Various embodiments are generally directed to techniques for determining artificial neural network topologies, such as by utilizing probabilistic graphical models, for instance. Some embodiments are particularly related to determining neural network topologies by bootstrapping a graph, such as a probabilistic graphical model, into a multi-graphical model, or graphical model tree. Various embodiments may include logic to determine a collection of sample sets from a dataset. In various such embodiments, each sample set may be drawn randomly from the dataset with replacement between drawings. In some embodiments, logic may partition a graph into multiple subgraph sets based on each of the sample sets. In several embodiments, the multiple subgraph sets may be scored, such as with Bayesian statistics, and selected amongst as part of determining a topology for a neural network.
Type: Grant
Filed: June 21, 2018
Date of Patent: July 11, 2023
Assignee: INTEL CORPORATION
Inventors: Yaniv Gurwicz, Raanan Yonatan Yehezkel Rohekar, Shami Nisimov, Guy Koren, Gal Novik
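Drawing sample sets randomly from the dataset with replacement between drawings is classic bootstrapping; a minimal sketch (sample-set size equal to the dataset size is an assumption, not stated in the abstract):

```python
import random


def bootstrap_sample_sets(dataset, n_sets, seed=0):
    """Draw n_sets sample sets, each sampled randomly with replacement
    from the dataset; each set has the same size as the dataset."""
    rng = random.Random(seed)
    return [[rng.choice(dataset) for _ in dataset] for _ in range(n_sets)]
```

Each sample set would then drive one graph partitioning, with the resulting subgraph sets scored and compared.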
-
Patent number: 11669741
Abstract: Disclosed is a method and platform for meta-knowledge fine-tuning based on domain-invariant features. According to the method, highly transferable common knowledge, i.e., domain-invariant features, is learnt from different data sets of the same kind of tasks; the common domain features learnt across the domains corresponding to those data sets are then fine-tuned so that the model quickly adapts to any new domain. The method improves the parameter initialization ability and generalization ability of the universal language model for the same kind of tasks, and finally a common compression framework of the universal language model for the same kind of downstream tasks is obtained through fine-tuning. In the meta-knowledge fine-tuning network, a loss function over the domain-invariant features is designed, and domain-independent universal knowledge is learnt.
Type: Grant
Filed: February 18, 2022
Date of Patent: June 6, 2023
Assignee: ZHEJIANG LAB
Inventors: Hongsheng Wang, Haijun Shan, Shengjian Hu
-
Patent number: 11636348
Abstract: At a centralized model trainer, one or more neural network based models are trained using an input data set. At least a first set of parameters of a model is transmitted to a model deployment destination. Using a second input data set, one or more adaptive parameters for the model are determined at the model deployment destination. Using the adaptive parameters, one or more inferences are generated at the model deployment destination.
Type: Grant
Filed: November 24, 2021
Date of Patent: April 25, 2023
Assignee: Apple Inc.
Inventors: Yichuan Tang, Nitish Srivastava, Ruslan Salakhutdinov
-
Patent number: 11621969
Abstract: Clustering and outlier detection in anomaly and causation detection for computing environments is disclosed. An example method includes receiving an input stream having data instances, each of the data instances having multi-dimensional attribute sets, identifying any of outliers and singularities in the data instances, extracting the outliers and singularities, grouping two or more of the data instances into one or more groups based on correspondence between the multi-dimensional attribute sets and a clustering type, and displaying the grouped data instances that are not extracted in a plurality of clustering maps on an interactive graphical user interface, wherein each of the plurality of clustering maps is based on a unique clustering type.
Type: Grant
Filed: December 28, 2017
Date of Patent: April 4, 2023
Assignee: ELASTICSEARCH B.V.
Inventors: Stephen Dodson, Thomas Veasey
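A sketch of the extract-then-group pipeline described above; the z-score outlier rule and grouping on a single attribute are simplifying assumptions, not the patented clustering types:

```python
import statistics


def split_outliers(instances, key, z_cut=3.0):
    """Extract instances whose `key` attribute is a z-score outlier,
    returning (inliers, outliers)."""
    vals = [inst[key] for inst in instances]
    mu = statistics.fmean(vals)
    sd = statistics.pstdev(vals) or 1e-9
    inliers, outliers = [], []
    for inst in instances:
        (outliers if abs(inst[key] - mu) / sd > z_cut else inliers).append(inst)
    return inliers, outliers


def group_by_attribute(instances, attr):
    """Group the remaining instances by one attribute of the
    multi-dimensional attribute set (one possible 'clustering type')."""
    groups = {}
    for inst in instances:
        groups.setdefault(inst[attr], []).append(inst)
    return groups
```

Only the non-extracted (grouped) instances would be rendered in the clustering maps.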
-
Patent number: 11593636
Abstract: A machine learning system and method. The machine learning system includes at least one computation circuit that performs a weighted summation of incoming signals and provides a resulting signal. The weighted summation is carried out at least in part by a magnetic element in which weights are adjusted based on changes in effective magnetic susceptibility of the magnetic element.
Type: Grant
Filed: January 3, 2019
Date of Patent: February 28, 2023
Assignee: SEAGATE TECHNOLOGY LLC
Inventors: Kirill A. Rivkin, Javier Guzman, Mourad Benakli