Patents Examined by Li B. Zhen
  • Patent number: 11948079
    Abstract: The present disclosure describes a multi-agent coordination method. The method includes: performing multiple data collections on N agents to collect E sets of data, where N and E are integers greater than 1; and optimizing neural networks of the N agents using reinforcement learning based on the E sets of data. Each data collection includes: randomly selecting a first coordination pattern from multiple predetermined coordination patterns; obtaining N observations after the N agents act on an environment in the first coordination pattern; determining a first probability and a second probability that a current coordination pattern is the first coordination pattern based on the N observations; and determining a pseudo reward based on the first probability and the second probability. The E sets of data include: a first coordination pattern label indicating the first coordination pattern, the N observations, and the pseudo reward.
    Type: Grant
    Filed: October 19, 2020
    Date of Patent: April 2, 2024
    Inventors: Xiangyang Ji, Shuncheng He
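The data-collection loop above can be sketched as follows. This is a minimal illustration, not the patented method: the discriminator, agents, environment, and the exact form of the pseudo reward (here, a log-ratio of the two probabilities) are all assumptions.

```python
import math
import random

NUM_PATTERNS = 4  # hypothetical number of predetermined coordination patterns

def pseudo_reward(first_prob, second_prob, eps=1e-8):
    """One plausible shaping: reward the agents when the sampled pattern is
    easy to identify from the joint observations (first_prob high) relative
    to a second, baseline estimate (second_prob)."""
    return math.log(first_prob + eps) - math.log(second_prob + eps)

def collect_one_sample(discriminator, agents, env):
    # Randomly select a coordination pattern from the predetermined set.
    pattern = random.randrange(NUM_PATTERNS)
    # The N agents act on the environment in that pattern; env returns N observations.
    observations = env.step([agent.act(pattern) for agent in agents])
    # A discriminator estimates two probabilities that the current pattern
    # is the selected one, based on the observations.
    p1, p2 = discriminator(observations, pattern)
    # Each stored sample carries the pattern label, observations, and pseudo reward.
    return {"pattern": pattern, "observations": observations,
            "pseudo_reward": pseudo_reward(p1, p2)}
```

The pseudo reward is positive when the joint behaviour makes the selected pattern more identifiable than the baseline suggests, and negative otherwise.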
  • Patent number: 11941517
    Abstract: Systems and methods are disclosed to implement a neural network training system to train a multitask neural network (MNN) to generate a low-dimensional entity representation based on a sequence of events associated with the entity. In embodiments, an encoder is combined with a group of decoders to form a MNN to perform different machine learning tasks on entities. During training, the encoder takes a sequence of events in and generates a low-dimensional representation of the entity. The decoders then take the representation and perform different tasks to predict various attributes of the entity. As the MNN is trained to perform the different tasks, the encoder is also trained to generate entity representations that capture different attribute signals of the entities. The trained encoder may then be used to generate semantically meaningful entity representations for use with other machine learning systems.
    Type: Grant
    Filed: November 22, 2017
    Date of Patent: March 26, 2024
    Assignee: Amazon Technologies, Inc.
    Inventors: Arijit Biswas, Subhajit Sanyal
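The shared-encoder/multi-decoder arrangement can be sketched in a few lines. This is a toy stand-in, assuming mean-pooled event vectors and linear heads; the patent's encoder and decoders are trained neural networks.

```python
import numpy as np

rng = np.random.default_rng(0)

class Encoder:
    """Maps an entity's event sequence to a low-dimensional embedding.
    Here: mean-pool the event vectors, then a tanh-squashed linear projection."""
    def __init__(self, event_dim, embed_dim):
        self.W = rng.normal(scale=0.1, size=(event_dim, embed_dim))
    def __call__(self, events):                 # events: (seq_len, event_dim)
        return np.tanh(events.mean(axis=0) @ self.W)   # (embed_dim,)

class Decoder:
    """One task head predicting an attribute from the shared embedding."""
    def __init__(self, embed_dim, out_dim):
        self.W = rng.normal(scale=0.1, size=(embed_dim, out_dim))
    def __call__(self, z):
        return z @ self.W

encoder = Encoder(event_dim=8, embed_dim=4)
decoders = [Decoder(4, 2), Decoder(4, 3)]       # two hypothetical attribute tasks

events = rng.normal(size=(10, 8))               # one entity's event sequence
z = encoder(events)                             # shared low-dimensional representation
outputs = [d(z) for d in decoders]              # per-task attribute predictions
```

Training all heads jointly is what forces the single embedding `z` to carry signal for every task; afterwards only the encoder is kept.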
  • Patent number: 11941523
    Abstract: Aspects described herein may allow for the application of stochastic gradient boosting techniques to the training of deep neural networks by disallowing gradient back propagation from examples that are correctly classified by the neural network model while still keeping correctly classified examples in the gradient averaging. Removing the gradient contribution from correctly classified examples may regularize the deep neural network and prevent the model from overfitting. Further aspects described herein may provide for scheduled boosting during the training of the deep neural network model conditioned on a mini-batch accuracy and/or a number of training iterations. The model training process may start un-boosted, using maximum likelihood objectives or another first loss function.
    Type: Grant
    Filed: April 16, 2021
    Date of Patent: March 26, 2024
    Assignee: Capital One Services, LLC
    Inventors: Oluwatobi Olabiyi, Erik T. Mueller, Christopher Larson
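The key trick, zeroing gradients from correctly classified examples while keeping them in the averaging denominator, can be shown directly. A minimal sketch with hypothetical threshold values for the boosting schedule:

```python
import numpy as np

def boosted_mean_gradient(per_example_grads, correct_mask):
    """Zero the gradient contribution of correctly classified examples,
    but keep them in the denominator of the mini-batch average, so the
    effective step shrinks as accuracy rises (a regularizing effect)."""
    g = np.asarray(per_example_grads, dtype=float).copy()
    g[np.asarray(correct_mask)] = 0.0      # disallow back-prop from correct examples
    return g.sum(axis=0) / len(g)          # average over ALL examples, not just kept ones

def use_boosting(batch_accuracy, iteration, acc_threshold=0.7, warmup_iters=1000):
    """Scheduled boosting: start un-boosted (plain maximum likelihood), switch
    on once mini-batch accuracy and iteration count clear the thresholds
    (threshold values here are illustrative)."""
    return batch_accuracy >= acc_threshold and iteration >= warmup_iters
```

With four examples, two of them correct, the surviving gradients are still divided by four, not two, which is what distinguishes this from simply dropping the correct examples.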
  • Patent number: 11941513
    Abstract: Provided is a device for ensembling data received from prediction devices and a method of operating the same. The device includes a data manager, a learner, and a predictor. The data manager receives first and second device prediction results from first and second prediction devices, respectively. The learner may adjust a weight group of a prediction model for generating first and second item weights, first and second device weights, based on the first and second device prediction results. The first and second item weights depend on first and second item values, respectively, of the first and second device prediction results. The first device weight corresponds to the first prediction device, and the second device weight corresponds to the second prediction device. The predictor generates an ensemble result of the first and second device prediction results, based on the first and second item weights and the first and second device weights.
    Type: Grant
    Filed: November 28, 2019
    Date of Patent: March 26, 2024
    Assignee: ELECTRONICS AND TELECOMMUNICATIONS RESEARCH INSTITUTE
    Inventors: Myung-Eun Lim, Jae Hun Choi, Youngwoong Han
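The two-level weighting (per-item and per-device) reduces to a normalized weighted sum. A minimal sketch, assuming fixed weights; in the patent both weight groups are learned by the prediction model:

```python
import numpy as np

def ensemble(predictions, item_weights, device_weights):
    """predictions[d][i]: item value i from prediction device d.
    item_weights has the same shape; device_weights has one entry per device.
    Each item value is re-weighted by both its item weight and its device's
    weight, then normalised per item."""
    P = np.asarray(predictions, dtype=float)                    # (devices, items)
    W = (np.asarray(item_weights, dtype=float)
         * np.asarray(device_weights, dtype=float)[:, None])    # combined weights
    return (W * P).sum(axis=0) / W.sum(axis=0)                  # per-item ensemble

# Two devices, three predicted item values each (numbers illustrative):
result = ensemble(predictions=[[1.0, 2.0, 3.0], [3.0, 2.0, 1.0]],
                  item_weights=[[1.0, 1.0, 1.0], [1.0, 1.0, 1.0]],
                  device_weights=[3.0, 1.0])
```

Here the first device's weight of 3.0 pulls every item of the ensemble toward its predictions.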
  • Patent number: 11928600
    Abstract: A method for sequence-to-sequence prediction using a neural network model includes generating an encoded representation based on an input sequence using an encoder of the neural network model and predicting an output sequence based on the encoded representation using a decoder of the neural network model. The neural network model includes a plurality of model parameters learned according to a machine learning process. At least one of the encoder or the decoder includes a branched attention layer. Each branch of the branched attention layer includes an interdependent scaling node configured to scale an intermediate representation of the branch by a learned scaling parameter. The learned scaling parameter depends on one or more other learned scaling parameters of one or more other interdependent scaling nodes of one or more other branches of the branched attention layer.
    Type: Grant
    Filed: January 30, 2018
    Date of Patent: March 12, 2024
    Assignee: Salesforce, Inc.
    Inventors: Nitish Shirish Keskar, Karim Ahmed, Richard Socher
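The interdependent scaling can be illustrated by coupling the branches' learned scalars through a softmax, so each branch's effective scale depends on every other branch's parameter. This is one plausible coupling, sketched with untrained random projections, not the patented architecture itself:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def branched_attention(x, branch_params, kappa):
    """x: (seq, d). Each branch runs its own scaled dot-product attention;
    the learned scalars `kappa` are normalised jointly, making each branch's
    scaling parameter depend on those of the other branches."""
    scales = softmax(np.asarray(kappa, dtype=float))     # interdependent scaling
    out = np.zeros_like(x)
    for (Wq, Wk, Wv), s in zip(branch_params, scales):
        q, k, v = x @ Wq, x @ Wk, x @ Wv
        attn = softmax(q @ k.T / np.sqrt(x.shape[1]), axis=-1)
        out += s * (attn @ v)                            # scale this branch's output
    return out

rng = np.random.default_rng(0)
d = 4
branches = [tuple(rng.normal(scale=0.5, size=(d, d)) for _ in range(3))
            for _ in range(2)]                           # two hypothetical branches
y = branched_attention(rng.normal(size=(5, d)), branches, kappa=[0.3, -0.1])
```

Because the scales sum to one, raising one branch's parameter necessarily lowers the others' effective contribution.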
  • Patent number: 11928576
    Abstract: The present disclosure describes an artificial neural network circuit including: at least one crossbar circuit to transmit a signal between layered neurons of an artificial neural network, the crossbar circuit including multiple input bars, multiple output bars arranged intersecting the input bars, and multiple memristors that are disposed at respective intersections of the input bars and the output bars to give a weight to the signal to be transmitted; a processing circuit to calculate a sum of signals flowing into each of the output bars while a weight to a corresponding signal is given by each of the memristors; a temperature sensor to detect environmental temperature; and an update portion that updates a trained value used in the crossbar circuit and/or the processing circuit.
    Type: Grant
    Filed: October 16, 2019
    Date of Patent: March 12, 2024
    Assignee: DENSO CORPORATION
    Inventors: Irina Kataeva, Shigeki Otsuka
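The crossbar's signal path is, mathematically, an analog matrix-vector multiply: each memristor at intersection (i, j) scales input bar i's voltage by its conductance, and the currents summed on output bar j form a dot product. A minimal numerical sketch:

```python
import numpy as np

def crossbar_output(input_voltages, conductances):
    """input_voltages: one value per input bar.
    conductances: (num_input_bars, num_output_bars), the trained weights
    stored in the memristors. The summed current on each output bar is the
    dot product of the input voltages with that bar's conductance column."""
    V = np.asarray(input_voltages, dtype=float)
    G = np.asarray(conductances, dtype=float)
    return V @ G                      # summed current per output bar

currents = crossbar_output([1.0, 0.5], [[0.2, 0.4],
                                        [0.6, 0.0]])
```

The patent's temperature sensor and update portion address the fact that real memristor conductances drift with environmental temperature, so the trained values used here would need periodic correction.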
  • Patent number: 11880746
    Abstract: Media and method for a user interface for training an artificial intelligence system. Many artificial intelligence systems require large volumes of labeled training data before they can accurately classify previously unseen data items. However, for some problem domains, no pre-labeled training data set may be available. Manually labeling training data sets by a subject-matter expert is a laborious process. An interface to enable such a subject-matter expert to accurately, consistently, and quickly label training data sets is disclosed herein. By allowing the subject-matter expert to easily navigate between training data items and select the applicable labels, operation of the computer is improved.
    Type: Grant
    Filed: April 26, 2017
    Date of Patent: January 23, 2024
    Assignee: HRB Innovations, Inc.
    Inventors: Daniel Cahoon, Mansoor Syed, Robert T. Wescott
  • Patent number: 11880390
    Abstract: Methods, computer program products, and systems are presented. The methods include, for instance: collecting location data of users and identifying candidates for an impromptu interaction amongst the users based on converging locations of the candidates. A topic of the impromptu interaction is determined by common work interests amongst the candidates. A notification of the impromptu interaction is sent to each candidate, informing them of the topic and the other candidates, along with resources relevant to the topic.
    Type: Grant
    Filed: May 16, 2017
    Date of Patent: January 23, 2024
    Assignee: International Business Machines Corporation
    Inventors: James E. Bostick, John M. Ganci, Jr., Martin G. Keen, Sarbajit K. Rakshit
  • Patent number: 11875260
    Abstract: The architectural complexity of a neural network is reduced by selectively pruning channels. A cost metric for a convolution layer is determined. The cost metric indicates a resource cost per channel for the channels of the layer. Training the neural network includes, for channels of the layer, updating a channel-scaling coefficient based on the cost metric. The channel-scaling coefficient linearly scales the output of the channel. A constant channel is identified based on the channel-scaling coefficients. The neural network is updated by pruning the constant channel. Model weights are updated via a stochastic gradient descent of a training loss function evaluated on training data. The channel-scaling coefficients are updated via an iterative-thresholding algorithm that penalizes a batch normalization loss function based on the cost metric for the layer and a norm of the channel-scaling coefficients.
    Type: Grant
    Filed: February 13, 2018
    Date of Patent: January 16, 2024
    Assignee: Adobe Inc.
    Inventors: Xin Lu, Zhe Lin, Jianbo Ye
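The iterative-thresholding update on the channel-scaling coefficients can be sketched as a soft-threshold step, with the shrinkage amount standing in for the cost-metric penalty. The penalty value below is illustrative:

```python
import numpy as np

def soft_threshold(gamma, penalty):
    """One iterative-thresholding step on channel-scaling coefficients:
    shrink each coefficient toward zero by `penalty` (proportional to the
    layer's per-channel resource cost) and clamp small ones exactly to zero."""
    g = np.asarray(gamma, dtype=float)
    return np.sign(g) * np.maximum(np.abs(g) - penalty, 0.0)

def prunable_channels(gamma):
    """A channel whose scaling coefficient reached zero scales its output
    to a constant, so the channel can be pruned from the network."""
    return [i for i, g in enumerate(gamma) if g == 0.0]

gamma = soft_threshold([0.9, 0.05, -0.4, 0.02], penalty=0.1)
pruned = prunable_channels(gamma)
```

Coefficients below the penalty in magnitude land exactly at zero, which is what lets the method identify constant channels rather than merely small ones.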
  • Patent number: 11875273
    Abstract: Briefly, example methods, apparatuses, and/or articles of manufacture are disclosed that may be implemented, in whole or in part, using one or more computing devices to facilitate and/or support one or more operations and/or techniques for machine learning (ML) classification of digital content for mobile communication devices, such as implemented in connection with one or more computing and/or communication networks and/or protocols.
    Type: Grant
    Filed: March 29, 2017
    Date of Patent: January 16, 2024
    Assignee: Yahoo Ad Tech LLC
    Inventors: Marc Bron, Mounia Lalmas, Huw Evans, Mahlon Chute, Miriam Redi, Fabrizio Silvestri
  • Patent number: 11836578
    Abstract: A device receives historical data associated with multiple cloud computing environments, trains one or more machine learning models, with the historical data, to generate trained machine learning models that generate outputs, and trains a model with the outputs to generate a trained model. The device receives particular data, associated with a cloud computing environment, that includes data identifying usage of resources associated with the cloud computing environment, and processes the particular data, with the trained machine learning models, to generate anomaly scores indicating anomalous usage of the resources associated with the cloud computing environment. The device processes the one or more anomaly scores, with the trained model, to generate a final anomaly score indicating anomalous usage of at least one of the resources associated with the cloud computing environment, and performs one or more actions based on the final anomaly score.
    Type: Grant
    Filed: August 26, 2019
    Date of Patent: December 5, 2023
    Assignee: Accenture Global Solutions Limited
    Inventors: Kun Qiu, Vijay Desai, Laser Seymour Kaplan, Durga Kalyan Ganjapu, Daniel Marcus Lombardo
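The two-level structure, base models producing anomaly scores and a second model combining them, is a stacking pattern. A minimal sketch with illustrative stand-ins; the patent's base and meta models are trained on historical cloud data:

```python
def base_anomaly_scores(usage, models):
    """Each trained base model maps resource-usage data to an anomaly score."""
    return [model(usage) for model in models]

def final_anomaly_score(scores, meta_model):
    """The second-level model combines the base scores into one final score."""
    return meta_model(scores)

# Stand-ins: base "models" are simple normalised scorers and the meta model
# is a plain average (both would be learned in the patented system).
models = [lambda u: min(u["cpu"] / 100.0, 1.0),
          lambda u: min(u["ram"] / 64.0, 1.0)]
meta = lambda scores: sum(scores) / len(scores)

usage = {"cpu": 90.0, "ram": 16.0}
score = final_anomaly_score(base_anomaly_scores(usage, models), meta)
```

Actions (alerts, scaling, throttling) would then be triggered when `score` crosses an operator-chosen threshold.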
  • Patent number: 11829855
    Abstract: Training query intents are allocated for multiple training entities into training time intervals in a time series based on a corresponding query intent time for each training query intent. Training performance results for the multiple training entities are allocated into the training time intervals in the time series based on a corresponding performance time of each training performance result. A machine learning model for a training milestone of the time series is trained based on the training query intents allocated to a training time interval prior to the training milestone and the training performance results allocated to a training time interval after the training milestone. Target performance for the target entity for an interval after a target milestone in the time series is predicted by inputting to the trained machine learning model target query intents allocated to the target entity in a target time interval before the target milestone.
    Type: Grant
    Filed: May 25, 2022
    Date of Patent: November 28, 2023
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Mayank Shrivastava, Hui Zhou, Pushpraj Shukla, Emre Hamit Kok, Sonal Prakash Mane, Dimitrios Brisimitzis
  • Patent number: 11822616
    Abstract: Disclosed are a method and an apparatus for performing an operation of a convolutional layer in a convolutional neural network.
    Type: Grant
    Filed: November 28, 2018
    Date of Patent: November 21, 2023
    Assignee: Nanjing Horizon Robotics Technology Co., Ltd.
    Inventors: Delin Li, Kun Ling, Liang Chen, Jianjun Li
  • Patent number: 11803745
    Abstract: A method for estimating firefighting data includes: obtaining firefighting condition data of a site, wherein the firefighting condition data comprises information on firefighting equipment and information on flammable articles; and estimating firefighting input data and firefighting damage data based on the firefighting condition data using a simulation analysis model, wherein the simulation analysis model is created based on firefighting condition data, firefighting input data and firefighting damage data of different sites.
    Type: Grant
    Filed: April 9, 2020
    Date of Patent: October 31, 2023
    Assignee: Fulian Precision Electronics (Tianjin) Co., LTD.
    Inventor: Shih-Cheng Wang
  • Patent number: 11797868
    Abstract: At least some embodiments are directed to an insights inference system that produces multiple insights associated with an entity. The insights inference system generates a decision tree machine learning model, assigning a first insight to a parent node of a decision tree machine learning model and assigning at least one second insight to child nodes of the decision tree machine learning model. Each child node is associated with a sequence number and a rank number. The sequence number and the rank number are indicative of a significance associated with the at least one second insight. The insight inference system responds to queries by traversing the decision tree machine learning model to compute at least one response insight based on the sequence number and the rank number associated with each child node and outputs the at least one response insight to a client terminal.
    Type: Grant
    Filed: December 5, 2019
    Date of Patent: October 24, 2023
    Assignee: American Express Travel Related Services Company, Inc.
    Inventors: Varun Agarwal, Krishnaprasad Narayanan, Rahul Ghosh, Swetha P. Srinivasan, Anshul Jain, Bobby Chetal, Ashni Jauhary
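The traversal driven by sequence and rank numbers can be sketched as an ordered tree walk. The node fields and ordering rule (ascending sequence, then descending rank) are assumptions for illustration:

```python
class InsightNode:
    def __init__(self, insight, sequence=0, rank=0, children=None):
        self.insight = insight
        self.sequence = sequence   # position of the insight in the narrative
        self.rank = rank           # significance of the insight
        self.children = children or []

def response_insights(root, limit=3):
    """Traverse the tree, ordering each node's children by (sequence, rank)
    so the most significant follow-on insights come back first."""
    out, stack = [], [root]
    while stack and len(out) < limit:
        node = stack.pop(0)
        out.append(node.insight)
        stack = sorted(node.children, key=lambda c: (c.sequence, -c.rank)) + stack
    return out

tree = InsightNode("spend rose 20%", children=[
    InsightNode("dining drove the rise", sequence=1, rank=5),
    InsightNode("travel was flat", sequence=1, rank=2),
    InsightNode("fees unchanged", sequence=2, rank=9),
])
top = response_insights(tree)
```

A query against the tree thus yields the parent insight first, followed by child insights in significance order, up to the response limit.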
  • Patent number: 11790242
    Abstract: Techniques are described for generating and applying mini-machine learning variants of machine learning algorithms to save computational resources in tuning and selection of machine learning algorithms. In an embodiment, at least one of the hyper-parameter values for a reference variant is modified to a new hyper-parameter value, thereby generating a new variant of machine learning algorithm from the reference variant of machine learning algorithm. A performance score is determined for the new variant of machine learning algorithm using a training dataset, the performance score representing the accuracy of the new machine learning model for the training dataset. By training the new variant of machine learning algorithm with the training dataset, a cost metric of the new variant is measured based on the computing resources used during the training.
    Type: Grant
    Filed: October 19, 2018
    Date of Patent: October 17, 2023
    Assignee: Oracle International Corporation
    Inventors: Sandeep Agrawal, Venkatanathan Varadarajan, Sam Idicula, Nipun Agarwal
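Deriving a mini variant from a reference variant amounts to overriding hyper-parameter values and comparing cost against score. A minimal sketch; the parameter names and the acceptance rule are hypothetical:

```python
import copy

def make_variant(reference, **overrides):
    """Derive a new algorithm variant by overriding hyper-parameter values
    of a reference variant, leaving the reference untouched."""
    variant = copy.deepcopy(reference)
    variant["hyper_params"].update(overrides)
    return variant

def cheaper_and_close(ref_stats, new_stats, tolerance=0.02):
    """Keep the mini variant if training it is cheaper and its performance
    score stays within `tolerance` of the reference's score."""
    return (new_stats["cost"] < ref_stats["cost"]
            and ref_stats["score"] - new_stats["score"] <= tolerance)

reference = {"algorithm": "random_forest",
             "hyper_params": {"n_trees": 500, "max_depth": 16}}
mini = make_variant(reference, n_trees=50)   # a cheaper-to-train variant
```

The saving comes from doing most of the tuning search with such mini variants and reserving full-cost training for the finalists.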
  • Patent number: 11783164
    Abstract: The technology disclosed provides a so-called “joint many-task neural network model” to solve a variety of increasingly complex natural language processing (NLP) tasks using growing depth of layers in a single end-to-end model. The model is successively trained by considering linguistic hierarchies, directly connecting word representations to all model layers, explicitly using predictions in lower tasks, and applying a so-called “successive regularization” technique to prevent catastrophic forgetting. Three examples of lower level model layers are part-of-speech (POS) tagging layer, chunking layer, and dependency parsing layer. Two examples of higher level model layers are semantic relatedness layer and textual entailment layer. The model achieves the state-of-the-art results on chunking, dependency parsing, semantic relatedness and textual entailment.
    Type: Grant
    Filed: October 26, 2020
    Date of Patent: October 10, 2023
    Assignee: Salesforce.com, Inc.
    Inventors: Kazuma Hashimoto, Caiming Xiong, Richard Socher
  • Patent number: 11775876
    Abstract: A method comprising, by a processing unit and a memory: obtaining a training set of data; dividing sets of data into a plurality of groups, wherein all sets of data for which feature values meet at least one similarity criterion are in the same group; and storing, in a reduced training set of data, for each group, at least one aggregated set of data, wherein, for a plurality of the groups, a number of aggregated sets of data is less than a number of the sets of data of the group, and wherein the reduced training set of data is suitable to be used in a classification algorithm for determining a relationship between the at least one label and the features of the electronic items, thereby reducing computation complexity when processing the reduced training set of data, compared to processing the training set of data.
    Type: Grant
    Filed: August 20, 2019
    Date of Patent: October 3, 2023
    Assignee: Optimal Plus Ltd.
    Inventor: Katsuhiro Shimazu
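The grouping-and-aggregation step can be sketched concretely. The similarity criterion here (bucketing features by rounding) and the aggregation (feature-wise mean plus majority label) are illustrative choices, not the patented criteria:

```python
from collections import defaultdict

def reduce_training_set(rows, bucket):
    """Group rows whose feature values meet a similarity criterion (modelled
    here by a bucketing function) and keep one aggregated row per group:
    the feature-wise mean and the majority label."""
    groups = defaultdict(list)
    for features, label in rows:
        groups[bucket(features)].append((features, label))
    reduced = []
    for members in groups.values():
        feats = [f for f, _ in members]
        mean = tuple(sum(col) / len(col) for col in zip(*feats))
        labels = [l for _, l in members]
        majority = max(set(labels), key=labels.count)
        reduced.append((mean, majority))
    return reduced

rows = [((1.0, 2.0), "ok"), ((1.1, 2.1), "ok"), ((9.0, 9.0), "fail")]
reduced = reduce_training_set(rows, bucket=lambda f: tuple(round(x) for x in f))
```

Three rows collapse to two aggregated rows, and any classifier trained on `reduced` processes fewer examples than one trained on `rows`.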
  • Patent number: 11755879
    Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for processing and storing inputs for use in a neural network. One of the methods includes receiving input data for storage in a memory system comprising a first set of memory blocks, the memory blocks having an associated order; passing the input data to a highest ordered memory block; for each memory block for which there is a lower ordered memory block: applying a filter function to data currently stored by the memory block to generate filtered data and passing the filtered data to a lower ordered memory block; and for each memory block: combining the data currently stored in the memory block with the data passed to the memory block to generate updated data, and storing the updated data in the memory block.
    Type: Grant
    Filed: February 11, 2019
    Date of Patent: September 12, 2023
    Assignee: DeepMind Technologies Limited
    Inventors: Razvan Pascanu, William Clinton Dabney, Thomas Stepleton
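One update step of the ordered memory blocks can be sketched directly from the claim language. The filter (halving) and combine (decayed sum) functions below are illustrative stand-ins for the learned or configured functions in the patent:

```python
def step_memory(blocks, filter_fn, combine_fn, new_input):
    """blocks[0] is the highest-ordered block. New input is passed to block 0;
    each block that has a lower-ordered neighbour passes a filtered copy of
    its current contents down; every block then combines what it currently
    holds with what it just received."""
    passed = [new_input] + [filter_fn(b) for b in blocks[:-1]]
    return [combine_fn(held, incoming) for held, incoming in zip(blocks, passed)]

halve = lambda x: x / 2.0                       # filter applied on the way down
decay_sum = lambda held, incoming: 0.9 * held + incoming   # combine rule

blocks = [0.0, 0.0, 0.0]
blocks = step_memory(blocks, halve, decay_sum, new_input=1.0)
blocks = step_memory(blocks, halve, decay_sum, new_input=1.0)
```

After two steps the input has begun cascading down the ordering: the first block holds a decayed accumulation of raw inputs, the second holds filtered copies passed from above, and the third is still empty.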
  • Patent number: 11755953
    Abstract: A method, system and computer readable medium for generating a cognitive insight comprising: receiving data, the data comprising a plurality of examples, each of the plurality of examples comprising an input object and a desired output value, at least some of the plurality of examples being based upon feedback from a user; performing a machine learning operation on the data, the machine learning operation comprising performing an augmented gamma belief network operation, the augmented gamma belief network operation producing an inferred function based upon the data; and generating a cognitive insight based upon a cognitive profile generated using the inferred function produced by the augmented gamma belief network operation.
    Type: Grant
    Filed: December 31, 2020
    Date of Patent: September 12, 2023
    Assignee: Tecnotree Technologies, Inc.
    Inventors: Ayan Acharya, Matthew Sanchez