Patents Examined by Fernández Rivas
  • Patent number: 11715021
    Abstract: A variable embedding method, for solving a large-scale problem on dedicated hardware by dividing the variables of a problem graph into partial problems and repeatedly optimizing those partial problems when the interactions of the variables of an optimization problem are expressed in the problem graph, includes: determining whether a duplicate allocation of the variables of the optimization problem to the vertices of the hardware graph is required when embedding at least a part of all the variables into the vertices of the hardware graph; and selecting one of the variables requiring no duplicate allocation and embedding the selected variable in one of the vertices of the hardware graph, without using another one of the variables requiring the duplicate allocation as one of the variables of the partial problem.
    Type: Grant
    Filed: June 18, 2019
    Date of Patent: August 1, 2023
    Assignees: DENSO CORPORATION, TOHOKU UNIVERSITY
    Inventors: Shuntaro Okada, Masayoshi Terabe, Masayuki Ohzeki
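The selection step above can be sketched in a few lines of Python. This is a hypothetical illustration, not the patent's method: the degree-based test for duplicate allocation and all names are invented assumptions.

```python
# Hypothetical sketch: a variable is assumed to need duplicate allocation
# (a chain of hardware vertices) when its degree in the problem graph
# exceeds what a single hardware vertex can connect to.

def needs_duplicate_allocation(var, problem_edges, hardware_max_degree):
    """True if `var` has more neighbours than one hardware vertex supports."""
    degree = sum(1 for a, b in problem_edges if var in (a, b))
    return degree > hardware_max_degree

def select_directly_embeddable(variables, problem_edges, hardware_max_degree):
    """Variables that can each be embedded in a single hardware vertex."""
    return [v for v in variables
            if not needs_duplicate_allocation(v, problem_edges, hardware_max_degree)]

# A triangle plus one extra edge: variable 0 has degree 3, the rest fewer.
edges = [(0, 1), (1, 2), (0, 2), (0, 3)]
print(select_directly_embeddable([0, 1, 2, 3], edges, hardware_max_degree=2))
# → [1, 2, 3]
```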
  • Patent number: 11711558
    Abstract: A method implemented by one or more computing systems includes accessing content viewing data associated with a first user account, wherein the first user account is associated with one or more client devices. The content viewing data includes temporal-based content viewing data. The method further includes determining, using one or more sequence models, a set of content viewing features based on the temporal-based content viewing data, and concatenating the content viewing features into a single computational array. The method further includes providing, through one or more dense layers of a deep-learning model, the single computational array to an output layer of the deep-learning model, and calculating, based on the output layer, one or more probabilities for one or more labels for the first user account. Each label includes a predicted attribute for the first user account.
    Type: Grant
    Filed: August 4, 2020
    Date of Patent: July 25, 2023
    Assignee: Samsung Electronics Co., Ltd.
    Inventors: Tomasz Jan Palczewski, Praveen Pratury, Hyun Chul Lee, Hyun-Woo Kim
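As a rough illustration of the pipeline described above (not Samsung's implementation; the weights, layer sizes, and names are invented), per-sequence feature vectors are concatenated into a single array, passed through a dense layer, and mapped to per-label probabilities:

```python
import math

def dense(x, weights, bias):
    """One fully connected layer: `weights` holds one row per output unit."""
    return [sum(wi * xi for wi, xi in zip(row, x)) + b
            for row, b in zip(weights, bias)]

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Two sequence models each emit a small feature vector for the account.
viewing_features = [[0.2, 0.5], [0.1, 0.9]]
concatenated = [v for feat in viewing_features for v in feat]  # single array

# One dense layer, then a sigmoid per label in the output layer.
weights = [[0.5, -0.25, 1.0, 0.0]]
bias = [0.1]
probabilities = [sigmoid(z) for z in dense(concatenated, weights, bias)]
print(probabilities)  # one probability per predicted attribute label
```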
  • Patent number: 11704571
    Abstract: A method for pruning weights of an artificial neural network based on a learned threshold includes determining a pruning threshold for pruning a first set of pre-trained weights of multiple pre-trained weights based on a function of a classification loss and a regularization loss. Weights are pruned from the first set of pre-trained weights when a first value of the weight is less than the pruning threshold. A second set of pre-trained weights of the multiple pre-trained weights is fine-tuned or adjusted in response to a second value of each pre-trained weight in the second set of pre-trained weights being greater than the pruning threshold.
    Type: Grant
    Filed: October 9, 2020
    Date of Patent: July 18, 2023
    Assignee: QUALCOMM Incorporated
    Inventors: Kambiz Azarian Yazdi, Tijmen Pieter Frederik Blankevoort, Jin Won Lee, Yash Sanjay Bhalgat
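The pruning rule described above can be sketched as follows. In the patent the threshold is learned from a classification loss plus a regularization loss; here it is simply a given constant, and the use of weight magnitude (rather than signed value) is an assumption for illustration.

```python
def prune_and_keep(weights, threshold):
    """Zero out weights below the threshold; return the pruned weights and
    a mask marking the survivors that remain eligible for fine-tuning."""
    pruned = [w if abs(w) >= threshold else 0.0 for w in weights]
    finetune_mask = [abs(w) >= threshold for w in weights]
    return pruned, finetune_mask

weights = [0.8, -0.02, 0.3, -0.6, 0.05]
pruned, mask = prune_and_keep(weights, threshold=0.1)
print(pruned)  # → [0.8, 0.0, 0.3, -0.6, 0.0]
```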
  • Patent number: 11704589
    Abstract: Disclosed are various embodiments for automatically identifying whether applications are static or dynamic. In one embodiment, code of an application is analyzed to determine instances of requesting data via a network in the application. Characteristics of the instances of requesting data via the network are provided to a machine learning model. The application is automatically classified as either dynamic or static according to the machine learning model.
    Type: Grant
    Filed: March 20, 2017
    Date of Patent: July 18, 2023
    Assignee: AMAZON TECHNOLOGIES, INC.
    Inventors: Saurabh Sohoney, Vineet Shashikant Chaoji, Pranav Garg
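The classification flow above can be sketched as follows. The feature extraction patterns and the hand-set logistic scorer are stand-ins for the trained model; every name here is an invented assumption.

```python
import math
import re

def request_characteristics(source):
    """Count network-request call sites and how many build a URL at runtime."""
    calls = re.findall(r"fetch\(|http_get\(", source)
    dynamic_urls = len(re.findall(r"fetch\([^\"']", source))
    return [len(calls), dynamic_urls]

def classify(features, weights=(-1.0, 0.8, 2.0)):
    """Stand-in for the machine learning model: a fixed logistic scorer."""
    z = weights[0] + weights[1] * features[0] + weights[2] * features[1]
    return "dynamic" if 1.0 / (1.0 + math.exp(-z)) > 0.5 else "static"

app = 'fetch("https://example.com/menu.json")\nfetch(build_url(item))'
print(classify(request_characteristics(app)))  # → dynamic
```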
  • Patent number: 11699064
    Abstract: A neural network system executable on a processor. The neural network system, when executed on the processor, comprises a merged layer shareable between a first neural network and a second neural network. The merged layer is configured to receive input data from a prior layer of at least one of the first and second neural networks. The merged layer is configured to apply a superset of weights to the input data to generate intermediate feature data representative of at least one feature of the input data, the superset of weights being combined from a first set of weights associated with the first neural network and a second set of weights associated with the second neural network. The merged layer is also configured to output the intermediate feature data to at least one subsequent layer, the at least one subsequent layer serving the first and second neural networks.
    Type: Grant
    Filed: April 23, 2019
    Date of Patent: July 11, 2023
    Assignee: Arm Limited
    Inventors: Daren Croxford, Roberto Lopez Mendez
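The merged layer described above can be sketched as follows; the deduplication rule (shared filters kept once) and all names are illustrative assumptions, not Arm's implementation.

```python
def merge_weight_sets(first_set, second_set):
    """Superset of the two networks' weight vectors; shared filters kept once."""
    merged = list(first_set)
    for w in second_set:
        if w not in merged:
            merged.append(w)
    return merged

def merged_layer(inputs, superset):
    """Apply every weight vector in the superset to the same input data."""
    return [sum(w * x for w, x in zip(filt, inputs)) for filt in superset]

net_a = [(1.0, 0.0), (0.5, 0.5)]
net_b = [(0.5, 0.5), (0.0, 1.0)]        # shares one filter with net_a
superset = merge_weight_sets(net_a, net_b)
features = merged_layer([2.0, 4.0], superset)
print(len(superset), features)  # 3 filters instead of 4 → [2.0, 3.0, 4.0]
```

The intermediate feature data would then feed subsequent layers serving both networks.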
  • Patent number: 11687784
    Abstract: An artificial intelligence system and a method for searching for an optimal model are provided. A method for searching for a learning mode of an artificial intelligence system includes receiving, by an operator included in a first node, first channels, deriving, by the operator included in the first node, first parameter weight indexes corresponding to weights of first parameters by calculating the first parameters corresponding to each of the received first channels with the received first channels, generating and outputting a second channel group by combining the first channel with the other channel, receiving, by an operator included in a second node, second channels included in the second channel group, and deriving, by the operator included in the second node, second parameter weight indexes corresponding to weights of second parameters by calculating the second parameters corresponding to the received second channels with the received second channels.
    Type: Grant
    Filed: February 21, 2019
    Date of Patent: June 27, 2023
    Assignee: DAEGU GYEONGBUK INSTITUTE OF SCIENCE AND TECHNOLOGY
    Inventors: Hee Chul Lim, Min Soo Kim
  • Patent number: 11687795
    Abstract: A hybrid knowledge representation is searched for a machine learning component corresponding to a search query. The hybrid knowledge representation may be structured as nodes representing machine learning workflow components and edges (e.g., links) connecting the nodes based on relationships between the nodes. Responsive to finding the machine learning component in the hybrid knowledge representation, the machine learning component is returned. Responsive to not finding the machine learning component, the hybrid knowledge representation is searched for machine learning model fragments associated with building the component, a new machine learning component is generated by combining those fragments, and the new machine learning component is returned.
    Type: Grant
    Filed: February 19, 2019
    Date of Patent: June 27, 2023
    Assignee: International Business Machines Corporation
    Inventors: Marcio Ferreira Moreno, Daniel Salles Civitarese, Lucas Correia Villa Real, Rafael Rossi de Mello Brandao, Renato Fontoura de Gusmao Cerqueira
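The lookup-then-compose behaviour described above can be sketched with a toy graph. The dictionary structure and the string-join "composition" are illustrative assumptions standing in for the real knowledge representation and fragment-combination step.

```python
def find_component(graph, query):
    """Return a stored component matching the query, else compose one
    from fragments linked to that query, else None."""
    if query in graph["components"]:
        return graph["components"][query]
    fragments = graph["fragments"].get(query, [])
    if fragments:
        return " + ".join(fragments)   # stand-in for a real composition step
    return None

kb = {
    "components": {"image-classifier": "resnet-pipeline"},
    "fragments": {"text-classifier": ["tokenizer", "embedding", "softmax-head"]},
}
print(find_component(kb, "image-classifier"))   # direct hit
print(find_component(kb, "text-classifier"))    # composed from fragments
```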
  • Patent number: 11676035
    Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for training a neural network. The neural network has a plurality of differentiable weights and a plurality of non-differentiable weights. One of the methods includes determining trained values of the plurality of differentiable weights and the non-differentiable weights by repeatedly performing operations that include determining an update to the current values of the plurality of differentiable weights using a machine learning gradient-based training technique and determining, using an evolution strategies (ES) technique, an update to the current values of a plurality of distribution parameters.
    Type: Grant
    Filed: January 23, 2020
    Date of Patent: June 13, 2023
    Assignee: DeepMind Technologies Limited
    Inventors: Karel Lenc, Karen Simonyan, Tom Schaul, Erich Konrad Elsen
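One joint update of the kind described can be sketched with toy losses. Everything here is an invented illustration: a differentiable weight `w` follows its analytic gradient, while a distribution parameter `mu` governing a non-differentiable weight is updated with an evolution strategies (ES) estimate built from sampled perturbations.

```python
import random

def loss(w, nd_weight):
    # Toy objective: both parameters have a clear optimum (3.0 and 1.0).
    return (w - 3.0) ** 2 + (nd_weight - 1.0) ** 2

def hybrid_step(w, mu, rng, lr=0.1, sigma=0.5, samples=200):
    # Gradient-based update for the differentiable weight.
    w_new = w - lr * 2.0 * (w - 3.0)
    # ES update for mu: correlate perturbation direction with perturbed loss.
    grad_mu = sum(loss(w, mu + sigma * eps) * eps
                  for eps in (rng.gauss(0.0, 1.0) for _ in range(samples)))
    grad_mu /= sigma * samples
    return w_new, mu - lr * grad_mu

w, mu = 0.0, 5.0
for _ in range(50):
    w, mu = hybrid_step(w, mu, random.Random(42))  # fixed seed: deterministic
print(round(w, 2), round(mu, 2))  # both drift toward their optima (3.0 and 1.0)
```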
  • Patent number: 11669724
    Abstract: The disclosed subject matter regards improving machine learning techniques using informed pseudolabels. A method can include receiving previously assigned labels indicating an expected classification for data, the labels having a specified uncertainty; generating respective pseudolabels for the data based on the previously assigned labels, the data, a class vector determined by an ML model, and a noise model indicating, based on the specified uncertainty, the likelihood of the previously assigned label given the class; and substituting the pseudolabels for the previously assigned labels in a next epoch of training the ML model.
    Type: Grant
    Filed: May 16, 2019
    Date of Patent: June 6, 2023
    Assignee: Raytheon Company
    Inventors: Philip A. Sallee, James Mullen, Franklin Tanner
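The pseudolabel step can be sketched as a Bayes-style reweighting; the function names, the two-class noise model, and the arg-max rule are illustrative assumptions, not Raytheon's implementation.

```python
def informed_pseudolabel(class_probs, assigned_label, noise_model):
    """Combine the model's class vector with the noise model.
    noise_model[c][assigned_label] = P(assigned label | true class c)."""
    posterior = [p * noise_model[c][assigned_label]
                 for c, p in enumerate(class_probs)]
    total = sum(posterior)
    posterior = [p / total for p in posterior]
    best = max(range(len(posterior)), key=posterior.__getitem__)
    return best, posterior

# Two classes; labelers assign label 0 to true class 0 with probability 0.8.
noise = [{0: 0.8, 1: 0.2},
         {0: 0.3, 1: 0.7}]
label, posterior = informed_pseudolabel([0.3, 0.7], assigned_label=0,
                                        noise_model=noise)
# The model alone prefers class 1, but the noisy label plus the noise
# model tips the pseudolabel to class 0 for the next training epoch.
print(label, [round(p, 2) for p in posterior])
```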
  • Patent number: 11669915
    Abstract: Systems, methods, and non-transitory computer-readable media can identify a set of accounts, each account of the set of accounts having a number of followers. The set of accounts are grouped into a plurality of groups based on number of followers, wherein each group is associated with a value score. A machine learning model is trained using a set of training data comprising account recommendation conversion information, wherein the account recommendation conversion information comprises a plurality of successful account recommendations, and each successful account recommendation is assigned a weight based on the value scores associated with the plurality of groups. One or more accounts of the set of accounts are selected to present as account recommendations based on the machine learning model.
    Type: Grant
    Filed: September 27, 2017
    Date of Patent: June 6, 2023
    Assignee: Meta Platforms, Inc.
    Inventors: Alan Si, Jialu Zhu, Sourav Chatterji, Brian Dolhansky
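The weighting scheme above can be sketched as follows; the bucket boundaries, value scores, and names are invented for illustration.

```python
def follower_group(followers):
    """Bucket an account by follower count (boundaries are assumptions)."""
    if followers < 1_000:
        return "small"
    if followers < 100_000:
        return "medium"
    return "large"

value_scores = {"small": 1.0, "medium": 2.0, "large": 3.0}

def training_weights(successful_recs):
    """successful_recs: (account_id, follower_count) pairs; each successful
    recommendation gets the value score of its group as a training weight."""
    return [(acct, value_scores[follower_group(f)]) for acct, f in successful_recs]

recs = [("a", 500), ("b", 50_000), ("c", 2_000_000)]
print(training_weights(recs))  # → [('a', 1.0), ('b', 2.0), ('c', 3.0)]
```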
  • Patent number: 11663125
    Abstract: Computer-implemented methods using machine learning are provided for generating an estimated cache performance of a cache configuration. A neural network is trained using, as inputs, a set of memory access parameters generated from a non-cycle-accurate simulation of a data processing system comprising the cache configuration and a cache configuration value, and using, as outputs, cache performance values generated by a cycle-accurate simulation of the data processing system comprising the cache configuration. The trained neural network is then provided with sets of memory access parameters generated from a non-cycle-accurate simulation of a proposed data processing system and a selected cache configuration and generates estimated cache performance values for that selected cache configuration.
    Type: Grant
    Filed: May 24, 2018
    Date of Patent: May 30, 2023
    Assignee: ARM LIMITED
    Inventors: Varun Subramanian, Emmanuel Manrico III Mendoza
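A toy stand-in for the estimator: fit a model from a fast (non-cycle-accurate) simulation's memory-access parameter to the latency measured by a slow cycle-accurate simulation, then query it for a proposed configuration. The linear model, the data, and gradient descent replace the patent's neural network purely for illustration.

```python
def fit_linear(xs, ys, lr=0.05, steps=2000):
    """Least-squares fit y ≈ a*x + b by plain gradient descent."""
    a, b = 0.0, 0.0
    n = len(xs)
    for _ in range(steps):
        grad_a = sum(2 * (a * x + b - y) * x for x, y in zip(xs, ys)) / n
        grad_b = sum(2 * (a * x + b - y) for x, y in zip(xs, ys)) / n
        a -= lr * grad_a
        b -= lr * grad_b
    return a, b

# Training pairs: miss rate from the fast sim -> latency from the slow sim.
miss_rates = [0.1, 0.2, 0.4, 0.5]
latencies = [12.0, 14.0, 18.0, 20.0]   # exactly latency = 20 * rate + 10
a, b = fit_linear(miss_rates, latencies)
print(round(a * 0.3 + b, 1))  # estimated latency for a proposed configuration
```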
  • Patent number: 11657285
    Abstract: Methods, systems and media for random semi-structured row-wise pruning of filters of a convolutional neural network are described. Rows of weights are pruned from kernels of filters of a convolutional layer of a convolutional neural network according to a pseudo-randomly-generated row pruning mask. The convolutional neural network is trained to perform a particular task using the pruned filters, which include the rows of weights that have not been pruned from the kernels. The process may be repeated multiple times, with the best-performing row pruning mask selected for pruning row weights from kernel filters when the trained convolutional neural network is deployed to a processing system and used for inference. Computation time may be decreased further by using multiple parallel hardware computation units of a processing system to perform pipelined row-wise convolution.
    Type: Grant
    Filed: July 30, 2020
    Date of Patent: May 23, 2023
    Assignee: XFUSION DIGITAL TECHNOLOGIES CO., LTD.
    Inventors: Vanessa Courville, Mehdi Ahmadi, Mahdi Zolnouri
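The mask generation and row pruning steps can be sketched as follows; the shapes, keep probability, and names are illustrative assumptions.

```python
import random

def row_pruning_mask(num_filters, rows_per_kernel, keep_prob, seed):
    """Pseudo-randomly keep (True) or prune (False) each kernel row."""
    rng = random.Random(seed)
    return [[rng.random() < keep_prob for _ in range(rows_per_kernel)]
            for _ in range(num_filters)]

def prune_rows(filters, mask):
    """Zero out whole kernel rows according to the mask."""
    return [[row if keep else [0.0] * len(row)
             for row, keep in zip(filt, filt_mask)]
            for filt, filt_mask in zip(filters, mask)]

filters = [[[1.0, 2.0, 3.0]] * 3 for _ in range(2)]  # two 3x3-row kernels
mask = row_pruning_mask(num_filters=2, rows_per_kernel=3, keep_prob=0.5, seed=7)
pruned = prune_rows(filters, mask)
print(mask)
```

Training then proceeds with the surviving rows; several candidate masks would be scored and the best-performing one deployed.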
  • Patent number: 11657265
    Abstract: Described herein are systems and methods for training first and second neural network models. A system comprises a memory comprising instruction data representing a set of instructions and a processor configured to communicate with the memory and to execute the set of instructions. The set of instructions, when executed by the processor, cause the processor to set a weight in the second model based on a corresponding weight in the first model, train the second model on a first dataset, wherein the training comprises updating the weight in the second model, and adjust the corresponding weight in the first model based on the updated weight in the second model.
    Type: Grant
    Filed: November 15, 2018
    Date of Patent: May 23, 2023
    Assignee: KONINKLIJKE PHILIPS N.V.
    Inventors: Binyam Gebre, Erik Bresch, Dimitrios Mavroeidis, Teun van den Heuvel, Ulf Grossekathöfer
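The set-train-adjust loop can be sketched minimally; the single-weight "models", the toy squared-error update, and the dataset are invented for illustration.

```python
def train_second_model(weight, dataset, lr=0.1):
    """One toy training pass: for each (x, y) pair, take a gradient step on
    the squared error of the linear model y ≈ weight * x."""
    for x, y in dataset:
        grad = 2.0 * (weight * x - y) * x
        weight -= lr * grad
    return weight

first_model = {"w": 0.0}
second_model = {"w": first_model["w"]}          # set from the first model
second_model["w"] = train_second_model(second_model["w"],
                                       [(1.0, 2.0), (1.0, 2.0)])
first_model["w"] = second_model["w"]            # adjust the first model back
print(first_model["w"])  # → 0.72
```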
  • Patent number: 11645509
    Abstract: Embodiments for training a neural network using sequential tasks are provided. A plurality of sequential tasks are received. For each task in the plurality of tasks, a copy of the neural network that includes a plurality of layers is generated. From the copy of the neural network, a task-specific neural network is generated by performing an architectural search on the plurality of layers in the copy. The architectural search identifies a plurality of candidate choices in the layers of the task-specific neural network. Parameters in the task-specific neural network that correspond to the plurality of candidate choices and that maximize architectural weights at each layer are identified. The parameters are retrained and merged with the neural network, yielding a neural network trained on the plurality of sequential tasks.
    Type: Grant
    Filed: October 31, 2018
    Date of Patent: May 9, 2023
    Assignee: Salesforce.com, Inc.
    Inventors: Yingbo Zhou, Xilai Li, Caiming Xiong
  • Patent number: 11640521
    Abstract: A multi-task feature sharing neural network-based intelligent fault diagnosis method has the following steps: (1) separately collecting original vibration acceleration signals of rotating machinery under different experimental conditions, forming samples by means of intercepting signal data having a certain length, and performing labeling; (2) constructing a multi-task feature sharing neural network, having: an input layer, a feature extractor, a classification model and a prediction model; (3) using multi-task joint training to simultaneously train the classification model and the prediction model; and (4) inputting a vibration acceleration signal collected in an actual industrial environment into the trained models to obtain a multi-task diagnosis result.
    Type: Grant
    Filed: October 31, 2019
    Date of Patent: May 2, 2023
    Assignee: SOUTH CHINA UNIVERSITY OF TECHNOLOGY
    Inventors: Weihua Li, Zhen Wang, Ruyi Huang
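The shared-feature architecture can be sketched with toy, hand-set heads; the feature choices and weights are invented assumptions, not the patent's trained network.

```python
def feature_extractor(signal):
    """Stand-in shared features: mean and peak of the vibration window."""
    return [sum(signal) / len(signal), max(abs(v) for v in signal)]

def classification_head(features):
    """Toy fault classifier over the shared features."""
    scores = [features[0] + features[1], features[1] - features[0]]
    return scores.index(max(scores))              # fault class index

def prediction_head(features):
    """Toy regression output (e.g. a fault severity estimate)."""
    return 0.5 * features[0] + 0.5 * features[1]

signal = [0.1, -0.2, 0.4, -0.1]                   # one vibration sample
features = feature_extractor(signal)              # shared by both tasks
print(classification_head(features), prediction_head(features))
```

Joint training would backpropagate both heads' losses through the one shared extractor.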
  • Patent number: 11636374
    Abstract: The disclosure is in the technical field of circuit-model quantum computation. Generally, it concerns methods to use quantum computers to perform computations on classical spin models, where the classical spin models involve a number of spins that is exponential in the number of qubits that comprise the quantum computer. Examples of such computations include optimization and calculation of thermal properties, but extend to a wide variety of calculations that can be performed using the configuration of a spin model with an exponential number of spins. Spin models encompass optimization problems, physics simulations, and neural networks (there is a correspondence between a single spin and a single neuron). This disclosure has applications in these three areas as well as any other area in which a spin model can be used.
    Type: Grant
    Filed: November 30, 2021
    Date of Patent: April 25, 2023
    Assignee: QC Ware Corp.
    Inventors: Peter L. McMahon, Robert Michael Parrish
  • Patent number: 11625570
    Abstract: A determination apparatus generates an interval vector having a plurality of components that are adjacent occurrence intervals between a plurality of events that have occurred in chronological order. The determination apparatus generates a plurality of local variable points each of which includes specific components as one set of coordinates, using a predetermined number of consecutive interval vectors in the chronological order. The determination apparatus generates a Betti sequence by applying persistent homology transform to the plurality of local variable points for which the interval vectors serving as starting points are different. The determination apparatus determines a type of the plurality of events based on the Betti sequence.
    Type: Grant
    Filed: November 28, 2018
    Date of Patent: April 11, 2023
    Assignee: FUJITSU LIMITED
    Inventor: Yuhei Umeda
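The first two steps can be sketched directly; the persistent homology transform that turns each point set into a Betti sequence is omitted, and the names are assumptions.

```python
def interval_vectors(event_times, dim):
    """Sliding windows of length `dim` over adjacent inter-event intervals."""
    gaps = [b - a for a, b in zip(event_times, event_times[1:])]
    return [gaps[i:i + dim] for i in range(len(gaps) - dim + 1)]

def local_variable_points(vectors, count):
    """Each local point set: `count` consecutive interval vectors, one set
    per starting vector."""
    return [vectors[i:i + count] for i in range(len(vectors) - count + 1)]

times = [0, 2, 3, 7, 8, 10]                 # events in chronological order
vecs = interval_vectors(times, dim=2)
print(vecs)                                  # → [[2, 1], [1, 4], [4, 1], [1, 2]]
print(local_variable_points(vecs, count=2)[0])
```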
  • Patent number: 11625573
    Abstract: A first neural network is operated on a processor and a memory to encode a first natural language string into a first sentence encoding including a set of word encodings. Using a word-based attention mechanism with a context vector, a weight value for a word encoding within the first sentence encoding is adjusted to form an adjusted first sentence encoding. Using a sentence-based attention mechanism, a first relationship encoding corresponding to the adjusted first sentence encoding is determined. An absolute difference between the first relationship encoding and a second relationship encoding is computed. Using a multi-layer perceptron, a degree of analogical similarity between the first relationship encoding and the second relationship encoding is determined.
    Type: Grant
    Filed: October 29, 2018
    Date of Patent: April 11, 2023
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Alfio Massimiliano Gliozzo, Gaetano Rossiello, Robert G. Farrell
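The comparison step can be sketched with toy vectors; the encodings are invented, and the similarity function is a simple stand-in for the multi-layer perceptron.

```python
def abs_difference(enc_a, enc_b):
    """Element-wise absolute difference between two relationship encodings."""
    return [abs(a - b) for a, b in zip(enc_a, enc_b)]

def similarity_stub(diff):
    """Stand-in for the MLP: a small mean difference means high similarity."""
    return 1.0 - min(1.0, sum(diff) / len(diff))

# Toy relationship encodings for two analogical pairs.
rel_king_queen = [0.9, 0.1, 0.4]
rel_man_woman = [0.8, 0.2, 0.4]
diff = abs_difference(rel_king_queen, rel_man_woman)
print(round(similarity_stub(diff), 2))  # close encodings → high similarity
```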
  • Patent number: 11621085
    Abstract: Embodiments in the present disclosure relate generally to computer network architectures for machine learning, artificial intelligence, and active updates of outcomes. Embodiments of computer network architecture automatically update forecasts of outcomes of patient episodes and annual costs for each patient of interest after hospital discharge. Embodiments may generate such updated forecasts either occasionally on demand, or periodically, or as triggered by events such as an update of available data for such forecasts. Embodiments may include a combination of third-party databases to generate the updated forecasts for pending patient clinical episodes, and to drive the forecasting models for the same, including social media data, financial data, socio-economic data, medical data, search engine data, e-commerce site data, and other databases.
    Type: Grant
    Filed: April 18, 2019
    Date of Patent: April 4, 2023
    Assignee: CLARIFY HEALTH SOLUTIONS, INC.
    Inventors: Todd Gottula, Jean P. Drouin, Yale Wang, Samuel H. Bauknight, Adam F. Rogow, Jeffrey D. Larson, Justin Warner, Erik Talvola
  • Patent number: 11615300
    Abstract: A neural network system includes an input layer, one or more hidden layers, and an output layer. A first layer circuit implements a first layer of the one or more hidden layers. The first layer includes a first weight space including one or more subgroups. A forward path circuit of the first layer circuit includes a multiply and accumulate circuit to receive an input from a layer preceding the first layer and provide a first subgroup weighted sum using the input and a first plurality of weights associated with a first subgroup. A scaling coefficient circuit provides a first scaling coefficient associated with the first subgroup, and applies the first scaling coefficient to the first subgroup weighted sum to generate a first subgroup scaled weighted sum. An activation circuit generates an activation based on the first subgroup scaled weighted sum and provides the activation to a layer following the first layer.
    Type: Grant
    Filed: June 13, 2018
    Date of Patent: March 28, 2023
    Assignee: XILINX, INC.
    Inventors: Julian Faraone, Michaela Blott, Nicholas Fraser
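The forward path can be sketched in software with toy values; the subgroup weights and scaling coefficients are invented assumptions, not Xilinx's circuit.

```python
def subgroup_scaled_sum(inputs, subgroup_weights, scale):
    """Multiply-and-accumulate for one subgroup, then apply its coefficient."""
    return scale * sum(w * x for w, x in zip(subgroup_weights, inputs))

def relu(z):
    """Stand-in activation applied to each scaled weighted sum."""
    return max(0.0, z)

inputs = [1.0, -2.0, 0.5]
# Two subgroups of the first layer's weight space, each with a coefficient.
subgroups = [([1, 0, 2], 0.25), ([-1, 1, 0], 0.5)]
activations = [relu(subgroup_scaled_sum(inputs, w, s)) for w, s in subgroups]
print(activations)  # → [0.5, 0.0]
```

Per-subgroup coefficients like these are what let quantized integer weights share one cheap multiply-accumulate path while still being rescaled per subgroup before activation.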