Patents Examined by Li B. Zhen
-
Patent number: 12039413
Abstract: The present design is directed to a series of interconnected compute servers including a supervisory hardware node and a plurality of knowledge hardware nodes, wherein the series of interconnected compute servers are configured to categorize and scale performance of multiple disjoint algorithms across a seemingly infinite actor population, wherein the series of interconnected compute servers are configured to normalize data using a common taxonomy, distribute normalized data relatively evenly across the plurality of knowledge hardware nodes, supervise algorithm execution across knowledge hardware nodes, and collate and present results of analysis of the seemingly infinite actor population.
Type: Grant
Filed: September 18, 2017
Date of Patent: July 16, 2024
Assignee: Blue Voyant
Inventors: Michael E. Cormier, Earl D. Cox, William E. Thackrey, Joseph McGlynn, Harry Gardner
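The "distribute normalized data relatively evenly" step above can be sketched with hash-based partitioning, a common way to spread records across nodes without central bookkeeping. This is a minimal illustration, not the patent's actual mechanism; the record keys and node count are assumptions.

```python
import hashlib

def assign_node(record_key: str, num_nodes: int) -> int:
    """Map a normalized record to a knowledge node by hashing its key.

    A stable cryptographic hash spreads keys roughly uniformly, so
    records land relatively evenly across the nodes.
    """
    digest = hashlib.sha256(record_key.encode()).hexdigest()
    return int(digest, 16) % num_nodes

# Distribute 10,000 hypothetical actor records across 8 knowledge nodes.
counts = [0] * 8
for i in range(10_000):
    counts[assign_node(f"actor-{i}", 8)] += 1

print(counts)  # each bucket holds roughly 10000 / 8 = 1250 records
```

Because the hash is deterministic, the same record always routes to the same node, which also helps the supervisory node locate data when collating results.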
-
Patent number: 12039458
Abstract: A system, method, and computer program product for incorporating knowledge from more complex models in simpler models. A method may include obtaining first training data associated with a first set of features and second training data associated with a second set of features different than the first set of features; training a first model based on the first training data and the second training data; and training a second model, using a loss function that depends on an output of an intermediate layer of the first model and an output of the second model, based on the second training data.
Type: Grant
Filed: January 10, 2019
Date of Patent: July 16, 2024
Assignee: Visa International Service Association
Inventors: Liang Wang, Xiaobo Dong, Robert Christensen, Liang Gou, Wei Zhang, Hao Yang
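The training scheme described here resembles knowledge distillation: the simpler second model is trained with a loss that ties its output to an intermediate layer of the larger first model. A minimal numeric sketch follows; the toy one-parameter "models" and the squared-error loss are assumptions for illustration, not the patent's formulation.

```python
def intermediate_output(x):
    # Stand-in for an intermediate layer of the first (larger) model.
    return [2.0 * v for v in x]

def second_model(x, w):
    # Stand-in for the simpler second model: a single learned scale.
    return [w * v for v in x]

def distill_loss(x, w):
    """Mean squared error between the second model's output and the
    first model's intermediate-layer output."""
    teacher = intermediate_output(x)
    student = second_model(x, w)
    return sum((s - t) ** 2 for s, t in zip(student, teacher)) / len(x)

# Gradient descent on w drives the second model to mimic the first.
x, w = [1.0, 2.0, 3.0], 0.0
for _ in range(200):
    grad = sum(2 * (w * v - 2.0 * v) * v for v in x) / len(x)
    w -= 0.05 * grad
print(round(w, 3))  # converges toward the teacher's scale, 2.0
```

In a real setting the teacher's intermediate activations would come from a trained network, and the distillation term would be combined with the second model's ordinary task loss.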
-
Patent number: 12033069
Abstract: A machine accesses a stored dataset comprising, for each of multiple optical fiber preforms, a plurality of images of each optical fiber preform coupled with an indication of a number of fiber kilometers lost due to diameter upset of a cable built using optical fiber drawn from the optical fiber preform. Each image represents a portion of the optical fiber preform. The machine preprocesses the stored dataset to generate a training dataset. The machine trains, using the training dataset, a convolutional neural network (CNN) to predict diameter upset performance of an optical fiber preform based on visual information representing the optical fiber preform. The CNN comprises an input layer, a plurality of hidden layers, and an output layer. Each of the input layer and the plurality of hidden layers comprises a plurality of artificial neurons. The machine provides an output representing the trained CNN.
Type: Grant
Filed: May 11, 2020
Date of Patent: July 9, 2024
Assignee: Corning Incorporated
Inventors: Siam B Aumi, Abhishek Jain, Jeffrey Byron Rosbrugh
-
Patent number: 12033089
Abstract: Systems, methods, and computer-readable media are disclosed for generating and training a deep convolutional generative model for multivariate time series modeling and utilizing the model to assess time series data indicative of a machine or machine component's operational state over a period of time to detect and localize potential operational anomalies.
Type: Grant
Filed: September 14, 2017
Date of Patent: July 9, 2024
Assignee: Siemens Aktiengesellschaft
Inventors: Yuan Chao, Amit Chakraborty
-
Patent number: 12008465
Abstract: Some embodiments of the invention provide a novel method for training a multi-layer node network. Some embodiments train the multi-layer network using a set of inputs generated with random misalignments incorporated into the training data set. In some embodiments, the training data set is a synthetically generated training set based on a three-dimensional ground truth model as it would be sensed by a sensor array from different positions and with different deviations from ideal alignment and placement. Some embodiments dynamically generate training data sets when a determination is made that more training is required. Training data sets, in some embodiments, are generated based on training data sets for which the multi-layer node network has produced bad results.
Type: Grant
Filed: January 12, 2018
Date of Patent: June 11, 2024
Assignee: PERCEIVE CORPORATION
Inventors: Andrew Mihal, Steven Teig
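Synthesizing training samples with random misalignments amounts to perturbing an ideal ground-truth reading before adding it to the training set. A minimal sketch, assuming a 1-D offset perturbation (the patent describes richer 3-D position and alignment deviations):

```python
import random

def synthesize_sample(ground_truth, max_shift=0.1, rng=None):
    """Perturb an ideal ground-truth reading with a random misalignment,
    mimicking a sensor that deviates from its nominal placement."""
    rng = rng or random.Random()
    offset = rng.uniform(-max_shift, max_shift)
    return [v + offset for v in ground_truth]

rng = random.Random(42)  # fixed seed so the sketch is reproducible
ideal = [0.0, 1.0, 2.0]
training_set = [synthesize_sample(ideal, rng=rng) for _ in range(5)]
for sample in training_set:
    print([round(v, 3) for v in sample])
```

Training on many such perturbed copies pushes the network to be robust to the alignment errors a real sensor array would exhibit.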
-
Patent number: 12001973
Abstract: A computing system may include a model training engine configured to train a supervised learning model with a training set comprising training probability distributions computed for training dies through a local phase of a volume diagnosis procedure. The computing system may also include a volume diagnosis adjustment engine configured to access a diagnosis report for a given circuit die that has failed scan testing and compute, through the local phase of the volume diagnosis procedure, a probability distribution for the given circuit die from the diagnosis report. The volume diagnosis adjustment engine may also adjust the probability distribution into an adjusted probability distribution using the supervised learning model and provide the adjusted probability distribution for the given circuit die as an input to a global phase of the volume diagnosis procedure to determine a global root cause distribution for multiple circuit dies that have failed the scan testing.
Type: Grant
Filed: March 22, 2019
Date of Patent: June 4, 2024
Assignee: Siemens Industry Software Inc.
Inventors: Gaurav Veda, Wu-Tung Cheng, Manish Sharma, Huaxing Tang, Yue Tian
-
Patent number: 12001926
Abstract: Techniques for machine-learning of long-term seasonal patterns are disclosed. In some embodiments, a network service receives a set of time-series data that tracks metric values of at least one computing resource over time. Responsive to receiving the time-series data, the network service detects a subset of metric values that are outliers and associated with a plurality of timestamps. The network service maps the plurality of timestamps to one or more encodings of at least one encoding space that defines a plurality of encodings for different seasonal patterns. Based on the mapped encodings, the network service generates a representation of a seasonal pattern. Based on the representation of the seasonal pattern, the network service may perform one or more operations in association with the at least one computing resource.
Type: Grant
Filed: October 23, 2018
Date of Patent: June 4, 2024
Assignee: Oracle International Corporation
Inventors: Dustin Garvey, Sampanna Shahaji Salunke, Uri Shaft, Sumathi Gopalakrishnan
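The core idea of mapping outlier timestamps into a seasonal encoding space can be illustrated with a simple (day-of-week, hour) bucketing: outliers that repeatedly land in the same bucket suggest a recurring pattern. The bucket scheme here is an assumption for illustration; the patent's encoding spaces are more general.

```python
from collections import Counter
from datetime import datetime

def seasonal_encoding(ts: datetime) -> tuple:
    """Map a timestamp into a (day-of-week, hour) encoding bucket."""
    return (ts.weekday(), ts.hour)

# Hypothetical timestamps of detected metric outliers.
outlier_times = [
    datetime(2024, 1, 1, 9),   # Monday 09:00
    datetime(2024, 1, 8, 9),   # Monday 09:00, one week later
    datetime(2024, 1, 15, 9),  # Monday 09:00, two weeks later
]

pattern = Counter(seasonal_encoding(t) for t in outlier_times)
print(pattern.most_common(1))  # [((0, 9), 3)] — a weekly Monday-9am pattern
```

A real system would then treat the dominant bucket as a learned seasonal pattern, e.g. suppressing alerts or pre-scaling the resource at that recurring time.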
-
Patent number: 11995574
Abstract: Systems, methods, and computer products are described herein for explainable machine learning predictions. An application receives data including a specification that defines a trained machine learning (ML) model. The application parses a model description of the trained ML model. An engine factory creates an instance of an engine based on the model description. The application generates a user interface (UI) for requesting a prediction and an associated explanation using the engine. The UI receives user input data including a requested prediction having one or more influencers. The engine determines and provides the prediction and the associated explanation based on the user input data.
Type: Grant
Filed: November 19, 2019
Date of Patent: May 28, 2024
Assignee: SAP SE
Inventor: David Guillemet
-
Patent number: 11972345
Abstract: A multi-label ranking method includes receiving, at a processor and from a first set of artificial neural networks (ANNs), multiple signals representing a first set of ANN output pairs for a first label. A signal representing a second set of ANN output pairs for a second label different from the first label is received at the processor from a second set of ANNs different from the first set of ANNs, substantially concurrently with the first set of ANN output pairs. A first activation function is solved based on the first set of ANN output pairs, and a second activation function is solved based on the second set of ANN output pairs. Loss values are calculated based on the solved activations, and a mask is generated based on at least one ground truth label. A signal, including a representation of the mask, is sent from the processor to each of the sets of ANNs.
Type: Grant
Filed: April 11, 2019
Date of Patent: April 30, 2024
Inventors: Vincent Poon, Nigel Paul Duffy, Ravi Kiran Reddy Palla
-
Patent number: 11948079
Abstract: The present disclosure discloses a multi-agent coordination method. The method includes: performing multiple data collections on N agents to collect E sets of data, where N and E are integers greater than 1; and optimizing neural networks of the N agents using reinforcement learning based on the E sets of data. Each data collection includes: randomly selecting a first coordination pattern from multiple predetermined coordination patterns; obtaining N observations after the N agents act on an environment in the first coordination pattern; determining a first probability and a second probability that a current coordination pattern is the first coordination pattern based on the N observations; and determining a pseudo reward based on the first probability and the second probability. The E sets of data include: a first coordination pattern label indicating the first coordination pattern, the N observations, and the pseudo reward.
Type: Grant
Filed: October 19, 2020
Date of Patent: April 2, 2024
Inventors: Xiangyang Ji, Shuncheng He
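The abstract does not state how the two probabilities combine into a pseudo reward, so the formula below is purely illustrative: averaging log-probabilities is one plausible choice that rewards the agents when both estimates confidently identify the selected coordination pattern.

```python
import math

def pseudo_reward(p_first: float, p_second: float) -> float:
    """Combine two estimates that the agents are acting in the selected
    coordination pattern. Averaging log-probabilities (an assumption,
    not the patent's formula) grows as both estimates become confident."""
    return 0.5 * (math.log(p_first) + math.log(p_second))

# Confident identification earns a higher (less negative) pseudo reward
# than an uncertain one, encouraging recognizably coordinated behavior.
print(round(pseudo_reward(0.9, 0.8), 3))
print(round(pseudo_reward(0.5, 0.5), 3))
```

During training, this pseudo reward would supplement or replace the environment reward so the agents learn behavior that makes the active coordination pattern identifiable from observations.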
-
Patent number: 11941523
Abstract: Aspects described herein may allow for the application of stochastic gradient boosting techniques to the training of deep neural networks by disallowing gradient back propagation from examples that are correctly classified by the neural network model while still keeping correctly classified examples in the gradient averaging. Removing the gradient contribution from correctly classified examples may regularize the deep neural network and prevent the model from overfitting. Further aspects described herein may provide for scheduled boosting during the training of the deep neural network model conditioned on a mini-batch accuracy and/or a number of training iterations. The model training process may start un-boosted, using maximum likelihood objectives or another first loss function.
Type: Grant
Filed: April 16, 2021
Date of Patent: March 26, 2024
Assignee: Capital One Services, LLC
Inventors: Oluwatobi Olabiyi, Erik T. Mueller, Christopher Larson
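The key detail above is that correctly classified examples are removed from the gradient sum but kept in the averaging denominator, so they shrink the step size rather than redirect it. A minimal scalar sketch (per-example gradients would be tensors in practice):

```python
def boosted_gradient(per_example_grads, correct_mask):
    """Average per-example gradients over the whole mini-batch, but zero
    out contributions from correctly classified examples. Correct
    examples stay in the denominator, shrinking the step rather than
    redirecting it."""
    n = len(per_example_grads)
    total = sum(g for g, correct in zip(per_example_grads, correct_mask)
                if not correct)
    return total / n

grads = [0.5, -0.25, 0.75, 0.25]
correct = [True, False, False, True]   # examples 0 and 3 already correct
print(boosted_gradient(grads, correct))  # (-0.25 + 0.75) / 4 = 0.125
```

Dividing by the full batch size n (rather than by the number of misclassified examples) is what distinguishes this from simply dropping the correct examples from the batch.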
-
Patent number: 11941517
Abstract: Systems and methods are disclosed to implement a neural network training system to train a multitask neural network (MNN) to generate a low-dimensional entity representation based on a sequence of events associated with the entity. In embodiments, an encoder is combined with a group of decoders to form a MNN to perform different machine learning tasks on entities. During training, the encoder takes a sequence of events in and generates a low-dimensional representation of the entity. The decoders then take the representation and perform different tasks to predict various attributes of the entity. As the MNN is trained to perform the different tasks, the encoder is also trained to generate entity representations that capture different attribute signals of the entities. The trained encoder may then be used to generate semantically meaningful entity representations for use with other machine learning systems.
Type: Grant
Filed: November 22, 2017
Date of Patent: March 26, 2024
Assignee: Amazon Technologies, Inc.
Inventors: Arijit Biswas, Subhajit Sanyal
-
Patent number: 11941513
Abstract: Provided is a device for ensembling data received from prediction devices and a method of operating the same. The device includes a data manager, a learner, and a predictor. The data manager receives first and second device prediction results from first and second prediction devices, respectively. The learner may adjust a weight group of a prediction model for generating first and second item weights, first and second device weights, based on the first and second device prediction results. The first and second item weights depend on first and second item values, respectively, of the first and second device prediction results. The first device weight corresponds to the first prediction device, and the second device weight corresponds to the second prediction device. The predictor generates an ensemble result of the first and second device prediction results, based on the first and second item weights and the first and second device weights.
Type: Grant
Filed: November 28, 2019
Date of Patent: March 26, 2024
Assignee: ELECTRONICS AND TELECOMMUNICATIONS RESEARCH INSTITUTE
Inventors: Myung-Eun Lim, Jae Hun Choi, Youngwoong Han
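The two weight levels described above (per-item and per-device) can be combined multiplicatively and renormalized per item. This sketch assumes that combination rule and a weighted-average ensemble, neither of which the abstract specifies:

```python
def ensemble(pred_a, pred_b, item_w_a, item_w_b, dev_w_a, dev_w_b):
    """Combine two devices' prediction vectors using per-item weights
    and per-device weights, normalizing the combined weight per item."""
    out = []
    for va, vb, ia, ib in zip(pred_a, pred_b, item_w_a, item_w_b):
        wa, wb = ia * dev_w_a, ib * dev_w_b
        out.append((wa * va + wb * vb) / (wa + wb))
    return out

pred_a = [0.9, 0.1]   # first device's prediction result
pred_b = [0.5, 0.5]   # second device's prediction result
result = ensemble(pred_a, pred_b, [1.0, 1.0], [1.0, 1.0], 0.75, 0.25)
print([round(v, 3) for v in result])  # device A dominates: [0.8, 0.2]
```

In the patented device, the learner would adjust both weight groups from observed prediction results rather than fixing them by hand as done here.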
-
Patent number: 11928576
Abstract: The present disclosure describes an artificial neural network circuit including: at least one crossbar circuit to transmit a signal between layered neurons of an artificial neural network, the crossbar circuit including multiple input bars, multiple output bars arranged intersecting the input bars, and multiple memristors that are disposed at respective intersections of the input bars and the output bars to give a weight to the signal to be transmitted; a processing circuit to calculate a sum of signals flowing into each of the output bars while a weight to a corresponding signal is given by each of the memristors; a temperature sensor to detect environmental temperature; and an update portion that updates a trained value used in the crossbar circuit and/or the processing circuit.
Type: Grant
Filed: October 16, 2019
Date of Patent: March 12, 2024
Assignee: DENSO CORPORATION
Inventors: Irina Kataeva, Shigeki Otsuka
-
Patent number: 11928600
Abstract: A method for sequence-to-sequence prediction using a neural network model includes generating an encoded representation based on an input sequence using an encoder of the neural network model and predicting an output sequence based on the encoded representation using a decoder of the neural network model. The neural network model includes a plurality of model parameters learned according to a machine learning process. At least one of the encoder or the decoder includes a branched attention layer. Each branch of the branched attention layer includes an interdependent scaling node configured to scale an intermediate representation of the branch by a learned scaling parameter. The learned scaling parameter depends on one or more other learned scaling parameters of one or more other interdependent scaling nodes of one or more other branches of the branched attention layer.
Type: Grant
Filed: January 30, 2018
Date of Patent: March 12, 2024
Assignee: Salesforce, Inc.
Inventors: Nitish Shirish Keskar, Karim Ahmed, Richard Socher
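One way scaling parameters can be made interdependent, as the abstract describes, is to normalize them jointly so each branch's effective scale depends on every other branch's parameter. The softmax normalization below is an assumed choice for illustration, not necessarily the patent's:

```python
import math

def interdependent_scales(raw_params):
    """Jointly normalize per-branch scaling parameters (softmax), so each
    branch's effective scale depends on all the others' parameters."""
    exps = [math.exp(p) for p in raw_params]
    total = sum(exps)
    return [e / total for e in exps]

def branched_attention(branch_outputs, raw_params):
    # Scale each branch's intermediate representation, then sum the
    # scaled representations into the layer output.
    scales = interdependent_scales(raw_params)
    dim = len(branch_outputs[0])
    return [sum(s * out[i] for s, out in zip(scales, branch_outputs))
            for i in range(dim)]

branches = [[1.0, 0.0], [0.0, 1.0]]  # two toy branch representations
print(branched_attention(branches, [0.0, 0.0]))  # equal scales: [0.5, 0.5]
```

With joint normalization, increasing one branch's raw parameter necessarily decreases the others' effective scales, which is the interdependence the claim language points at.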
-
Patent number: 11880390
Abstract: Methods, computer program products, and systems are presented. The methods include, for instance: collecting location data of users and identifying candidates for an impromptu interaction amongst the users based on converging locations of the candidates. A topic of the impromptu interaction is determined by common work interests amongst the candidates. Notification of the impromptu interaction is sent to the candidates to inform the topic and the other candidate, also with resources relevant to the topic.
Type: Grant
Filed: May 16, 2017
Date of Patent: January 23, 2024
Assignee: International Business Machines Corporation
Inventors: James E. Bostick, John M. Ganci, Jr., Martin G. Keen, Sarbajit K. Rakshit
-
Patent number: 11880746
Abstract: Media and method for a user interface for training an artificial intelligence system. Many artificial intelligence systems require large volumes of labeled training data before they can accurately classify previously unseen data items. However, for some problem domains, no pre-labeled training data set may be available. Manually labeling training data sets by a subject-matter expert is a laborious process. An interface to enable such a subject-matter expert to accurately, consistently, and quickly label training data sets is disclosed herein. By allowing the subject-matter expert to easily navigate between training data items and select the applicable labels, operation of the computer is improved.
Type: Grant
Filed: April 26, 2017
Date of Patent: January 23, 2024
Assignee: HRB Innovations, Inc.
Inventors: Daniel Cahoon, Mansoor Syed, Robert T. Wescott
-
Patent number: 11875260
Abstract: The architectural complexity of a neural network is reduced by selectively pruning channels. A cost metric for a convolution layer is determined. The cost metric indicates a resource cost per channel for the channels of the layer. Training the neural network includes, for channels of the layer, updating a channel-scaling coefficient based on the cost metric. The channel-scaling coefficient linearly scales the output of the channel. A constant channel is identified based on the channel-scaling coefficients. The neural network is updated by pruning the constant channel. Model weights are updated via a stochastic gradient descent of a training loss function evaluated on training data. The channel-scaling coefficients are updated via an iterative-thresholding algorithm that penalizes a batch normalization loss function based on the cost metric for the layer and a norm of the channel-scaling coefficients.
Type: Grant
Filed: February 13, 2018
Date of Patent: January 16, 2024
Assignee: Adobe Inc.
Inventors: Xin Lu, Zhe Lin, Jianbo Ye
-
Patent number: 11875273
Abstract: Briefly, example methods, apparatuses, and/or articles of manufacture are disclosed that may be implemented, in whole or in part, using one or more computing devices to facilitate and/or support one or more operations and/or techniques for machine learning (ML) classification of digital content for mobile communication devices, such as implemented in connection with one or more computing and/or communication networks and/or protocols.
Type: Grant
Filed: March 29, 2017
Date of Patent: January 16, 2024
Assignee: Yahoo Ad Tech LLC
Inventors: Marc Bron, Mounia Lalmas, Huw Evans, Mahlon Chute, Miriam Redi, Fabrizio Silvestri
-
Patent number: 11836578
Abstract: A device receives historical data associated with multiple cloud computing environments, trains one or more machine learning models, with the historical data, to generate trained machine learning models that generate outputs, and trains a model with the outputs to generate a trained model. The device receives particular data, associated with a cloud computing environment, that includes data identifying usage of resources associated with the cloud computing environment, and processes the particular data, with the trained machine learning models, to generate anomaly scores indicating anomalous usage of the resources associated with the cloud computing environment. The device processes the one or more anomaly scores, with the trained model, to generate a final anomaly score indicating anomalous usage of at least one of the resources associated with the cloud computing environment, and performs one or more actions based on the final anomaly score.
Type: Grant
Filed: August 26, 2019
Date of Patent: December 5, 2023
Assignee: Accenture Global Solutions Limited
Inventors: Kun Qiu, Vijay Desai, Laser Seymour Kaplan, Durga Kalyan Ganjapu, Daniel Marcus Lombardo