Patents Examined by Vincent Gonzales
  • Patent number: 11900218
    Abstract: Methods, systems, and apparatus for solving computational tasks using quantum computing resources. In one aspect a method includes receiving, at a quantum formulation solver, data representing a computational task to be performed; deriving, by the quantum formulation solver, a formulation of the data representing the computational task that is formulated for a selected type of quantum computing resource; routing, by the quantum formulation solver, the formulation of the data representing the computational task to a quantum computing resource of the selected type to obtain data representing a solution to the computational task; generating, at the quantum formulation solver, output data including data representing a solution to the computational task; and receiving, at a broker, the output data and generating one or more actions to be taken based on the output data.
    Type: Grant
    Filed: December 28, 2022
    Date of Patent: February 13, 2024
    Assignee: Accenture Global Solutions Limited
    Inventor: Kirby Linvill
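    The entry above describes a pipeline in which a formulation solver reformulates a task for a selected type of quantum resource, routes it, collects the solution, and hands the output to a broker that generates actions. The following is a minimal, hypothetical sketch of that control flow; the solver names are invented and classical stand-ins replace the quantum back ends, so nothing here reflects the patented implementation.

```python
# Hypothetical sketch of the formulate -> route -> broker pipeline described above.
# The "quantum" back ends are classical stand-ins; all names are illustrative only.

def formulate(task, resource_type):
    """Derive a resource-specific formulation of the task (e.g. QUBO-like for an annealer)."""
    if resource_type == "annealer":
        return {"kind": "qubo", "payload": task}
    return {"kind": "circuit", "payload": task}

def route(formulation):
    """Route the formulation to a back end of the selected type and return a solution."""
    solvers = {
        "qubo": lambda p: {"solution": sorted(p)},            # toy "optimization"
        "circuit": lambda p: {"solution": list(reversed(p))},
    }
    return solvers[formulation["kind"]](formulation["payload"])

def broker(output):
    """Generate follow-up actions from the solver output."""
    return [f"apply:{x}" for x in output["solution"]]

if __name__ == "__main__":
    task = [3, 1, 2]
    result = route(formulate(task, "annealer"))
    print(broker(result))   # ['apply:1', 'apply:2', 'apply:3']
```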
  • Patent number: 11886992
    Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for training a Q network used to select actions to be performed by an agent interacting with an environment. One of the methods includes obtaining a plurality of experience tuples and training the Q network on each of the experience tuples using the Q network and a target Q network that is identical to the Q network but with the current values of the parameters of the target Q network being different from the current values of the parameters of the Q network.
    Type: Grant
    Filed: August 3, 2020
    Date of Patent: January 30, 2024
    Assignee: DeepMind Technologies Limited
    Inventors: Hado Philip van Hasselt, Arthur Clément Guez
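    A minimal numerical sketch of the target-network idea in the entry above: the target Q network shares the online Q network's architecture but holds a lagged copy of its parameters, which are used to compute the bootstrap target. The environment, network shapes, and hyperparameters below are all invented for illustration and are not the patented training procedure.

```python
import numpy as np

# Toy linear Q-networks: Q(s, .) = s @ W. Shapes and the update rule are illustrative only.
rng = np.random.default_rng(0)
state_dim, n_actions, gamma, lr = 4, 2, 0.99, 0.1
W_online = rng.normal(size=(state_dim, n_actions))
W_target = W_online.copy()           # identical structure, lagged parameter values

def q_values(W, s):
    return s @ W

def train_step(s, a, r, s_next, done):
    """One TD update of the online network against the frozen target network."""
    global W_online
    target = r + (0.0 if done else gamma * q_values(W_target, s_next).max())
    td_error = target - q_values(W_online, s)[a]
    # Gradient of 0.5 * td_error^2 w.r.t. W_online[:, a] is -td_error * s.
    W_online[:, a] += lr * td_error * s

# Train on random experience tuples, periodically syncing the target network.
for step in range(1, 501):
    s, s_next = rng.normal(size=state_dim), rng.normal(size=state_dim)
    train_step(s, rng.integers(n_actions), rng.normal(), s_next, done=False)
    if step % 100 == 0:
        W_target = W_online.copy()   # refresh the lagged copy
print("online vs target parameter gap:", np.abs(W_online - W_target).max())
```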
  • Patent number: 11875238
    Abstract: A computing system obtains a first preconfigured feature set. The first preconfigured feature set defines: a first feature definition defining an input variable, and first computer instructions for locating first data. The first data is available for retrieval because it is stored, or set up to arrive, in the feature storage according to the first preconfigured feature set. The computing system receives a requested data set for the input variable. The computing system generates an availability status indicating whether the requested data set is available for retrieval according to the first preconfigured feature set. Based on the availability status, the computing system generates the requested data set by: retrieving historical data for the first preconfigured feature set; retrieving a data definition associated with the historical data; and generating the requested data based on the historical data and the data definition.
    Type: Grant
    Filed: June 23, 2022
    Date of Patent: January 16, 2024
    Assignee: SAS INSTITUTE INC.
    Inventors: Piotr Kaczynski, Aneta Maksymiuk, Artur Lukasz Skalski, Wioletta Paulina Stobieniecka, Dwijendra Nath Dwivedi
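    A rough sketch of the availability check and retrieval flow described above, using an in-memory dictionary as a stand-in for feature storage; the class, field names, and derivation rules are hypothetical, not the patent's API.

```python
# Hypothetical in-memory stand-in for a feature store with preconfigured feature sets.
from dataclasses import dataclass, field

@dataclass
class PreconfiguredFeatureSet:
    input_variable: str                           # the feature definition's input variable
    storage: dict = field(default_factory=dict)   # "historical data" keyed by entity id
    data_definition: str = "sum"                  # how to derive the requested value

    def availability_status(self, entity_ids):
        """Is the requested data set retrievable under this preconfigured feature set?"""
        return all(e in self.storage for e in entity_ids)

    def generate(self, entity_ids):
        if not self.availability_status(entity_ids):
            raise LookupError("requested data set is not available for retrieval")
        history = {e: self.storage[e] for e in entity_ids}          # historical data
        derive = {"sum": sum, "max": max}[self.data_definition]     # data definition
        return {e: derive(vals) for e, vals in history.items()}

fs = PreconfiguredFeatureSet("purchase_amount", {"cust1": [10, 25], "cust2": [7]})
print(fs.availability_status(["cust1", "cust2"]))   # True
print(fs.generate(["cust1", "cust2"]))              # {'cust1': 35, 'cust2': 7}
```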
  • Patent number: 11853865
    Abstract: A circuit for performing neural network computations for a neural network, the circuit comprising: a systolic array comprising a plurality of cells; a weight fetcher unit configured to, for each of the plurality of neural network layers: send, for the neural network layer, a plurality of weight inputs to cells along a first dimension of the systolic array; and a plurality of weight sequencer units, each weight sequencer unit coupled to a distinct cell along the first dimension of the systolic array, the plurality of weight sequencer units configured to, for each of the plurality of neural network layers: shift, for the neural network layer, the plurality of weight inputs to cells along the second dimension of the systolic array over a plurality of clock cycles and where each cell is configured to compute a product of an activation input and a respective weight input using multiplication circuitry.
    Type: Grant
    Filed: December 28, 2020
    Date of Patent: December 26, 2023
    Assignee: Google LLC
    Inventor: Jonathan Ross
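    The abstract above describes sending weights along one dimension of a systolic array, shifting them along the other dimension over clock cycles, and having each cell multiply an activation input by its weight. The simulation below illustrates that multiply-accumulate behaviour in a heavily simplified form; the array size and scheduling are invented and do not reflect the actual circuit.

```python
import numpy as np

# Simplified simulation of a weight-stationary systolic array: weights enter along the
# first dimension and are shifted down the second dimension one row per clock cycle,
# then each cell multiplies its stationary weight by a streamed activation.
rows, cols = 3, 3                              # array size (illustrative)
rng = np.random.default_rng(1)
W = rng.integers(-3, 4, size=(rows, cols))     # weights for one layer
x = rng.integers(-3, 4, size=rows)             # activation inputs

# Shift weights into the array over `rows` clock cycles.
array = np.zeros((rows, cols), dtype=int)
for cycle in range(rows):
    array[1:, :] = array[:-1, :]       # shift existing weights down the second dimension
    array[0, :] = W[rows - 1 - cycle]  # feed the next row of weights along the first dimension

# After loading, each cell holds its weight; multiply by activations and accumulate per column.
products = array * x[:, None]          # each cell: activation * weight
accumulated = products.sum(axis=0)     # column-wise accumulation, as in a matrix product
print(np.array_equal(accumulated, x @ W))   # True: matches a plain matrix product
```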
  • Patent number: 11847560
    Abstract: A dynamic equilibrium (DEQ) model circuit includes a first multiplier configured to receive an input, scale the input by a first weight, and output the scaled input, a second multiplier configured to receive a root, scale the root by a second weight, and output the scaled root, a summation block configured to combine the scaled input, a bias input, and the scaled root and output a non-linear input, and a first non-linear function configured to receive the non-linear input and output the root, wherein the first weight and second weight are based on a trained DEQ model of a neural network.
    Type: Grant
    Filed: July 27, 2020
    Date of Patent: December 19, 2023
    Assignee: Robert Bosch GmbH
    Inventors: Jeremy Kolter, Kenneth Wojciechowski, Efthymios Papageorgiou, Sayyed Mahdi Kashmiri
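    In equation form, the circuit above computes a root z of z = f(W1·x + W2·z + b). The following is a minimal fixed-point sketch of that structure; the weights, the tanh nonlinearity, and the plain iteration are illustrative choices, not the patented circuit or its solver.

```python
import numpy as np

# Minimal DEQ-style fixed-point sketch: find z such that z = tanh(W1 @ x + W2 @ z + b).
# Weights, sizes, and the plain fixed-point iteration are illustrative assumptions.
rng = np.random.default_rng(2)
d = 4
W1 = rng.normal(scale=0.5, size=(d, d))   # "first multiplier": scales the input
W2 = rng.normal(scale=0.2, size=(d, d))   # "second multiplier": scales the root (small so iteration converges)
b = rng.normal(size=d)                    # bias input
x = rng.normal(size=d)

z = np.zeros(d)
for _ in range(200):                      # iterate the summation + nonlinearity to a fixed point
    z_new = np.tanh(W1 @ x + W2 @ z + b)  # summation block feeding the non-linear function
    if np.max(np.abs(z_new - z)) < 1e-9:
        break
    z = z_new
print("fixed-point residual:", np.max(np.abs(z - np.tanh(W1 @ x + W2 @ z + b))))
```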
  • Patent number: 11822308
    Abstract: A polishing tool wear amount prediction device, machine learning device, and system capable of predicting a wear amount of a polishing tool unit of a polishing tool during polishing are provided. The polishing tool wear amount prediction device includes a machine learning device which observes polishing condition data indicating a processing condition of polishing as a state variable indicating a current environment state and performs, based on the state variable, learning or prediction by using a learning model which stores a correlation of the wear amount of the polishing tool with respect to the processing condition of polishing.
    Type: Grant
    Filed: February 5, 2019
    Date of Patent: November 21, 2023
    Assignee: FANUC CORPORATION
    Inventor: He Zhang
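    As an illustration of the learning setup described above (a model that captures the correlation between polishing conditions and observed tool wear), here is a small regression sketch with entirely synthetic data; the chosen features and the linear least-squares model are assumptions, not the patented learning model.

```python
import numpy as np

# Synthetic example: learn a correlation between polishing conditions and tool wear.
# Features (pressure, speed, duration) and the linear model are illustrative assumptions.
rng = np.random.default_rng(3)
n = 200
conditions = np.column_stack([
    rng.uniform(10, 50, n),    # pressure
    rng.uniform(100, 500, n),  # spindle speed
    rng.uniform(1, 30, n),     # polishing duration
])
true_coeffs = np.array([0.02, 0.001, 0.05])
wear = conditions @ true_coeffs + rng.normal(scale=0.05, size=n)   # observed wear amounts

# Fit wear ~ conditions by least squares; this is the "learned correlation".
X = np.column_stack([conditions, np.ones(n)])
coeffs, *_ = np.linalg.lstsq(X, wear, rcond=None)

new_condition = np.array([30.0, 300.0, 10.0, 1.0])   # predict wear for a new polishing run
print("predicted wear amount:", new_condition @ coeffs)
```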
  • Patent number: 11823063
    Abstract: Individual distributed processing nodes packetize distributed data for each weight of a neural network of a learning object in an order of a number of the weight, transmit the distributed data to an aggregation processing node, acquire aggregation data transmitted from the node in order, and update the weight of the neural network. The node acquires the transmitted distributed data, packetizes the aggregation data for which the distributed data of all the distributed processing nodes is aggregated for each weight, and transmits the aggregation data to the individual nodes. The individual nodes monitor an unreceived data amount, which is the difference between the data amounts of the transmitted distributed data and the acquired aggregation data, and when the unreceived data amount becomes equal to or larger than a threshold Ma, stop transmission of the distributed data until the unreceived data amount becomes equal to or smaller than a threshold Mb (Mb<Ma).
    Type: Grant
    Filed: May 21, 2019
    Date of Patent: November 21, 2023
    Assignee: Nippon Telegraph and Telephone Corporation
    Inventors: Tsuyoshi Ito, Kenji Kawai, Junichi Kato, Huycu Ngo, Yuki Arikawa, Takeshi Sakamoto
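    The flow-control rule in the entry above (pause sending when the unreceived amount reaches Ma, resume once it falls to Mb, with Mb < Ma) can be sketched in a few lines; the node and "one unit per send" abstractions below are invented for illustration.

```python
# Hypothetical sketch of the Ma/Mb flow-control rule for a distributed processing node.
class DistributedNode:
    def __init__(self, Ma=8, Mb=3):
        assert Mb < Ma
        self.Ma, self.Mb = Ma, Mb
        self.sent = 0          # amount of distributed data transmitted
        self.received = 0      # amount of aggregation data acquired back
        self.paused = False

    def unreceived(self):
        return self.sent - self.received

    def try_send(self):
        """Send one unit of distributed data unless transmission is paused."""
        if self.paused:
            return False
        self.sent += 1
        if self.unreceived() >= self.Ma:   # too far ahead of the aggregation node
            self.paused = True
        return True

    def on_aggregation_received(self):
        self.received += 1
        if self.paused and self.unreceived() <= self.Mb:
            self.paused = False            # resume transmission

node = DistributedNode()
for step in range(20):
    node.try_send()
    if step % 3 == 0:                      # aggregation data arrives more slowly
        node.on_aggregation_received()
print("sent:", node.sent, "received:", node.received, "paused:", node.paused)
```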
  • Patent number: 11816547
    Abstract: A calibration method, a calibration apparatus, a terminal device and a storage medium are provided. The method comprises the following steps: determining layer attribute information of each to-be-calibrated layer in a model (S110); and determining the group in which each of the to-be-calibrated layers is located according to the total available resources and the layer attribute information of each of the to-be-calibrated layers (S120). The layer attribute information of any of the to-be-calibrated layers comprises layer required resources, the layer required resources are resources needing to be occupied when the to-be-calibrated layer is calibrated; and the total available resources are the total resources used for calibration.
    Type: Grant
    Filed: July 23, 2021
    Date of Patent: November 14, 2023
    Assignee: LYNXI TECHNOLOGIES CO., LTD.
    Inventors: Han Li, Yaolong Zhu
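    A sketch of the grouping step described above: assign each to-be-calibrated layer to a group so that the layers in one group fit within the total available resources. The greedy packing below is only one plausible strategy offered for illustration, not the patented method.

```python
# Hypothetical greedy grouping of to-be-calibrated layers under a resource budget.
def group_layers(layer_required_resources, total_available_resources):
    """Return groups of layer indices whose summed required resources fit the budget."""
    groups, current, used = [], [], 0
    for idx, need in enumerate(layer_required_resources):
        if need > total_available_resources:
            raise ValueError(f"layer {idx} alone exceeds the total available resources")
        if used + need > total_available_resources:   # start a new group
            groups.append(current)
            current, used = [], 0
        current.append(idx)
        used += need
    if current:
        groups.append(current)
    return groups

# Layers 0..5 with their calibration resource requirements, and a budget of 10 units.
print(group_layers([4, 3, 5, 2, 6, 1], total_available_resources=10))
# [[0, 1], [2, 3], [4, 5]]
```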
  • Patent number: 11816588
    Abstract: In general, embodiments of the present invention provide systems, methods and computer readable media to forecast demand by implementing an online demand prediction framework that includes a hierarchical temporal memory network (HTM) configured to learn temporal patterns representing sequences of states of time-series data collected from a set of one or more data sources representing demand and input to the HTM. In some embodiments, the HTM learns the temporal patterns using a Cortical Learning Algorithm.
    Type: Grant
    Filed: December 23, 2019
    Date of Patent: November 14, 2023
    Assignee: Groupon, Inc.
    Inventors: Patrick George Flor, Dylan Griffith, Riva Ashley Vanderveld
  • Patent number: 11797820
    Abstract: Techniques are provided for reinforcement learning software agents enhanced by external data. A reinforcement learning model supporting the software agent may be trained based on information obtained from one or more knowledge stores, such as online forums. The trained reinforcement learning model may be tested in an environment with limited connectivity to an external environment to meet performance criteria. The reinforcement learning software agent may be deployed with the tested and trained reinforcement learning model within an environment to autonomously perform actions to process requests.
    Type: Grant
    Filed: December 5, 2019
    Date of Patent: October 24, 2023
    Assignee: International Business Machines Corporation
    Inventors: Tathagata Chakraborti, Kartik Talamadupula, Kshitij Fadnis, Biplav Srivastava, Murray S. Campbell
  • Patent number: 11797840
    Abstract: Methods and systems for using machine learning to identify extremely rare events in high-dimensional space are disclosed. A method includes: identifying, by a computing device, a plurality of derived attributes using an external data source; selecting, by the computing device, a plurality of key performance indicators from the plurality of derived attributes using a neural network and based on an extremely rare event being modeled; constructing, by the computing device, a linear model using the plurality of key performance indicators; and predicting, by the computing device, occurrences of the extremely rare event using the linear model.
    Type: Grant
    Filed: November 28, 2018
    Date of Patent: October 24, 2023
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Sanket Jain, Kumar Apurva, Vikram Yadav
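    The pipeline above (derive attributes, select key performance indicators, then fit a linear model to predict the rare event) can be illustrated with a small synthetic example; the correlation-based selection below is a stand-in for the patent's neural-network KPI selection step, and all data is fabricated for illustration.

```python
import numpy as np

# Synthetic illustration of: derive attributes -> select KPIs -> fit a linear model.
# Correlation-based selection stands in for the patent's neural-network KPI selection.
rng = np.random.default_rng(4)
n, n_attrs, n_kpis = 1000, 20, 3
derived_attributes = rng.normal(size=(n, n_attrs))
# Rare event driven by a few attributes (roughly a few percent positive rate here).
logits = derived_attributes[:, 0] * 2 + derived_attributes[:, 5] * 1.5 - 4.0
rare_event = (rng.uniform(size=n) < 1 / (1 + np.exp(-logits))).astype(float)

# Select the attributes most correlated with the event as KPIs.
corr = np.abs([np.corrcoef(derived_attributes[:, j], rare_event)[0, 1] for j in range(n_attrs)])
kpi_idx = np.argsort(corr)[-n_kpis:]

# Fit a linear model on the selected KPIs and score observations with it.
X = np.column_stack([derived_attributes[:, kpi_idx], np.ones(n)])
w, *_ = np.linalg.lstsq(X, rare_event, rcond=None)
scores = X @ w
print("selected KPIs:", sorted(kpi_idx.tolist()), "event rate:", rare_event.mean())
```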
  • Patent number: 11797844
    Abstract: Systems, methods, and computer program products to provide neural embeddings of transaction data. A network graph of transaction data based on a plurality of transactions may be received. The network graph of transaction data may define relationships between the transactions, each transaction associated with at least a merchant and an account. A neural network may be trained based on training data comprising a plurality of positive entity pairs and a plurality of negative entity pairs. An embedding function may then encode transaction data for a first new transaction. An embeddings layer of the neural network may determine a vector for the first new transaction based on the encoded transaction data for the first new transaction. A similarity between the vectors for the transactions may be determined. The first new transaction may be determined to be related to the second transaction based on the similarity.
    Type: Grant
    Filed: July 22, 2020
    Date of Patent: October 24, 2023
    Assignee: Capital One Services, LLC
    Inventors: Christopher Bruss, Keegan Hines
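    A compact sketch of the similarity step in the entry above: once an embeddings layer maps each transaction to a vector, relatedness between two transactions can be read off the similarity of their vectors (cosine similarity is used below as one common choice). The embedding tables here are random rather than trained, purely for illustration.

```python
import numpy as np

# Illustrative only: random "embeddings" for merchants and accounts stand in for the
# trained embeddings layer; relatedness is read off cosine similarity of transaction vectors.
rng = np.random.default_rng(5)
dim = 8
merchant_emb = {m: rng.normal(size=dim) for m in ["coffee_shop", "grocer", "airline"]}
account_emb = {a: rng.normal(size=dim) for a in ["acct_1", "acct_2"]}

def transaction_vector(merchant, account):
    """Encode a transaction as the sum of its merchant and account embeddings."""
    return merchant_emb[merchant] + account_emb[account]

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

t_new = transaction_vector("coffee_shop", "acct_1")
t_other = transaction_vector("coffee_shop", "acct_2")
t_far = transaction_vector("airline", "acct_2")
print("same merchant similarity:", cosine(t_new, t_other))
print("different merchant similarity:", cosine(t_new, t_far))
```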
  • Patent number: 11790214
    Abstract: A system includes a neural network that includes a Mixture of Experts (MoE) subnetwork between a first neural network layer and a second neural network layer. The MoE subnetwork includes multiple expert neural networks. Each expert neural network is configured to process a first layer output generated by the first neural network layer to generate a respective expert output. The MoE subnetwork further includes a gating subsystem that selects, based on the first layer output, one or more of the expert neural networks and determines a respective weight for each selected expert neural network, provides the first layer output as input to each of the selected expert neural networks, combines the expert outputs generated by the selected expert neural networks in accordance with the weights for the selected expert neural networks to generate an MoE output, and provides the MoE output as input to the second neural network layer.
    Type: Grant
    Filed: May 20, 2020
    Date of Patent: October 17, 2023
    Assignee: Google LLC
    Inventors: Noam M. Shazeer, Azalia Mirhoseini, Krzysztof Stanislaw Maziarz
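    A minimal sketch of the gating behaviour in the entry above: a gate scores the experts from the first layer's output, selects the top-k, weights the selected experts' outputs, and passes the mixture on to the next layer. The sizes, the value of k, and the linear experts are illustrative assumptions.

```python
import numpy as np

# Toy Mixture-of-Experts layer with top-k gating; all shapes and k are illustrative.
rng = np.random.default_rng(6)
d, n_experts, k = 4, 5, 2
experts = [rng.normal(size=(d, d)) for _ in range(n_experts)]   # linear "expert networks"
W_gate = rng.normal(size=(d, n_experts))

def moe(first_layer_output):
    scores = first_layer_output @ W_gate                # gating scores per expert
    top = np.argsort(scores)[-k:]                       # select k experts
    weights = np.exp(scores[top] - scores[top].max())
    weights /= weights.sum()                            # softmax over the selected experts
    # Route the same input to each selected expert and combine outputs by gate weight.
    expert_outputs = [first_layer_output @ experts[i] for i in top]
    return sum(w * out for w, out in zip(weights, expert_outputs))

x = rng.normal(size=d)          # output of the first neural network layer
print(moe(x))                   # MoE output fed to the second neural network layer
```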
  • Patent number: 11790233
    Abstract: The specification describes methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for generating a larger neural network from a smaller neural network. One of the described methods includes obtaining data specifying an original neural network and generating a larger neural network from the original neural network. The larger neural network has a larger neural network structure than the original neural network structure. The values of the parameters of the original neural network units and the additional neural network units are initialized so that the larger neural network generates the same outputs from the same inputs as the original neural network, and the larger neural network is trained to determine trained values of the parameters of the original neural network units and the additional neural network units from the initialized values.
    Type: Grant
    Filed: June 29, 2020
    Date of Patent: October 17, 2023
    Assignee: Google LLC
    Inventors: Ian Goodfellow, Tianqi Chen, Jonathon Shlens
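    The key property in the entry above is that the larger network is initialized so it computes exactly the same function as the original network. Below is a short numerical check of one way to achieve this for a single hidden layer (duplicating a unit and splitting its outgoing weights); the layer sizes and the specific widening rule are illustrative, not the patent's full method.

```python
import numpy as np

# Function-preserving widening of one hidden layer: duplicate a unit and split its
# outgoing weights so the larger network reproduces the original network's outputs.
rng = np.random.default_rng(7)
d_in, d_hidden, d_out = 3, 4, 2
W1 = rng.normal(size=(d_in, d_hidden))
W2 = rng.normal(size=(d_hidden, d_out))

def forward(x, W1, W2):
    return np.maximum(x @ W1, 0.0) @ W2      # small ReLU MLP

# Widen by copying hidden unit 0; its outgoing weights are halved across the two copies.
W1_big = np.column_stack([W1, W1[:, 0]])
W2_big = np.vstack([W2, W2[0]])
W2_big[0] *= 0.5
W2_big[-1] *= 0.5

x = rng.normal(size=d_in)
print(np.allclose(forward(x, W1, W2), forward(x, W1_big, W2_big)))   # True: same outputs
```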
  • Patent number: 11783174
    Abstract: Embodiments of the present disclosure relate to splitting input data into smaller units for loading into a data buffer and neural engines in a neural processor circuit for performing neural network operations. The input data of a large size is split into slices and each slice is again split into tiles. The tile is uploaded from an external source to a data buffer inside the neural processor circuit but outside the neural engines. Each tile is again split into work units sized for storing in an input buffer circuit inside each neural engine. The input data stored in the data buffer and the input buffer circuit is reused by the neural engines to reduce re-fetching of input data. Operations of splitting the input data are performed at various components of the neural processor circuit under the management of rasterizers provided in these components.
    Type: Grant
    Filed: May 4, 2018
    Date of Patent: October 10, 2023
    Assignee: Apple Inc.
    Inventor: Christopher L. Mills
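    A rough sketch of the hierarchical splitting described above: input data is cut into slices, each slice into tiles sized for the shared data buffer, and each tile into work units sized for a neural engine's input buffer. The sizes and the flat-array representation are invented for illustration and do not describe the actual neural processor circuit.

```python
import numpy as np

# Hypothetical hierarchical split of input data: slices -> tiles -> work units.
SLICE_SIZE, TILE_SIZE, WORK_UNIT_SIZE = 64, 16, 4   # illustrative buffer sizes

def split(data, size):
    return [data[i:i + size] for i in range(0, len(data), size)]

input_data = np.arange(200)                      # "large" input held in external memory
slices = split(input_data, SLICE_SIZE)           # first-level split
tiles = [split(s, TILE_SIZE) for s in slices]    # tiles fit the shared data buffer
work_units = [[split(t, WORK_UNIT_SIZE) for t in ts] for ts in tiles]  # per-engine input buffers

print(len(slices), "slices;",
      sum(len(ts) for ts in tiles), "tiles;",
      sum(len(w) for ts in work_units for w in ts), "work units")
# Reassembling all work units recovers the original input, i.e. nothing is lost in the split.
flat = np.concatenate([w for ts in work_units for t in ts for w in t])
print(np.array_equal(flat, input_data))          # True
```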
  • Patent number: 11783227
    Abstract: A method, apparatus, device and readable medium for transfer learning in machine learning are provided. The method includes: constructing a target model according to the number of classes to be achieved by the target task and a trained source model; obtaining a value of a regularized loss function of the target model and a value of a cross-entropy loss function of the target model, based on sets of training data in a training dataset of the target task; and, according to the value of the regularized loss function and the value of the cross-entropy loss function corresponding to each set of training data, updating parameters in the target model by gradient descent to train the target model. This approach avoids the excessive parameter constraints of prior methods and thus does not degrade the benefit that the trained source model provides on the target task.
    Type: Grant
    Filed: August 20, 2020
    Date of Patent: October 10, 2023
    Assignee: BEIJING BAIDU NETCOM SCIENCE AND TECHNOLOGY CO., LTD.
    Inventors: Xingjian Li, Haoyi Xiong, Jun Huan
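    To make the loss combination above concrete, here is a small numeric sketch in which the target model's objective is the cross-entropy plus a regularizer that penalizes drift from the source model's parameters. An L2-to-source penalty is used as one common choice; the abstract does not specify the patent's regularizer, so this is an assumption.

```python
import numpy as np

# Illustrative transfer-learning objective: cross-entropy on the target task plus a
# regularizer tying target parameters to the trained source parameters (L2-to-source here).
rng = np.random.default_rng(8)
d, n_classes, lam, lr = 5, 3, 0.1, 0.5

W_source = rng.normal(size=(d, n_classes))     # trained source model (held fixed)
W_target = W_source.copy()                     # target model initialized from the source

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

x = rng.normal(size=d)
y = 1                                          # target-task label for this training example

for _ in range(100):                           # gradient descent on the combined loss
    p = softmax(x @ W_target)
    grad_ce = np.outer(x, p - np.eye(n_classes)[y])     # d(cross-entropy)/dW
    grad_reg = 2 * lam * (W_target - W_source)          # d(regularized loss)/dW
    W_target -= lr * (grad_ce + grad_reg)

p = softmax(x @ W_target)
print("target-class probability:", p[y],
      "max drift from source:", np.abs(W_target - W_source).max())
```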
  • Patent number: 11783205
    Abstract: Data is received that defines a rule mining run including a scope of a search and at least one data source to be searched. In response, the at least one data source is polled to obtain rules responsive to the rule mining run. Each rule can specify one or more actions to take as part of a computer-implemented process when certain conditions are met. A list of rules (i.e., a proposed subset of the obtained rules) can then be generated using at least one machine learning model. The generated list of rules can then be displayed in a graphical user interface. Related apparatus, systems, techniques and articles are also described.
    Type: Grant
    Filed: December 17, 2019
    Date of Patent: October 10, 2023
    Assignee: SAP SE
    Inventors: Kefeng Wang, Andreas Seifried, Birgitta Bruegel, Kieran Turley, Dimitrij Raev
  • Patent number: 11775871
    Abstract: Techniques for optimizing a machine learning model. The techniques can include: obtaining one or more embedding vectors based on a prediction of a machine learning model; mapping the embedding vectors from a higher dimensional space to a 2D/3D space to generate one or more high-density points in the 2D/3D space; clustering the high-density points by running a clustering algorithm multiple times, each time with a different set of parameters, to generate one or more clusters; applying a purity metric to each cluster to generate a normalized purity score for each cluster; identifying one or more clusters with a normalized purity score lower than a threshold; and optimizing the identified one or more clusters.
    Type: Grant
    Filed: December 8, 2022
    Date of Patent: October 3, 2023
    Assignee: ARIZE AI, INC.
    Inventors: Jason Lopatecki, Aparna Dhinakaran, Francisco Castillo Carrasco, Michael Schiff, Nathaniel Mar
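    A small sketch of the purity scoring step in the entry above: given cluster assignments from several clustering runs and a label per point (for example, whether the model's prediction was correct), each cluster's normalized purity is the fraction of its majority label, and clusters below a threshold are flagged for optimization. The clustering itself is replaced with hard-coded assignments here, and all names, labels, and the threshold are illustrative.

```python
import numpy as np

# Illustrative purity scoring over clusters of projected embedding points.
def normalized_purity(labels_in_cluster):
    values, counts = np.unique(labels_in_cluster, return_counts=True)
    return counts.max() / counts.sum()          # fraction of the majority label

labels = np.array([1, 1, 1, 0, 0, 1, 0, 0, 0, 1])   # e.g. correct (1) vs incorrect (0) predictions
clustering_runs = {                                  # stand-ins for runs with different parameters
    "run_a": np.array([0, 0, 0, 0, 1, 1, 1, 1, 2, 2]),
    "run_b": np.array([0, 0, 1, 1, 1, 1, 2, 2, 2, 2]),
}

threshold = 0.8
for run, assignment in clustering_runs.items():
    for c in np.unique(assignment):
        purity = normalized_purity(labels[assignment == c])
        flag = "  <- low purity, candidate for optimization" if purity < threshold else ""
        print(f"{run} cluster {c}: purity {purity:.2f}{flag}")
```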
  • Patent number: 11763204
    Abstract: Disclosed in the embodiments of the present invention are a method and an apparatus for training an item coding model. The method comprises: acquiring an initial item coding model and a training sample set; using sample user information of training samples in the training sample set as the input for the initial item coding model to obtain the probability of sample item coding information corresponding to the inputted sample user information; adjusting the structural parameters of the initial item coding model to train an item coding model, the item coding model being used for characterizing the correspondence between inputted sample user information and sample item coding information and the correspondence between sample item information and sample item coding information. The present embodiment can use the trained item coding model to implement item recommendation and can use the item coding information as an index to increase retrieval efficiency.
    Type: Grant
    Filed: July 29, 2022
    Date of Patent: September 19, 2023
    Assignees: BEIJING BYTEDANCE NETWORK TECHNOLOGY CO., LTD., BYTEDANCE INC.
    Inventors: Weihao Gao, Xiangjun Fan, Jiankai Sun, Wenzhi Xiao, Chong Wang, Xiaobing Liu
  • Patent number: 11755938
    Abstract: Methods and systems for determining event probabilities and anomalous events are provided. In one implementation, a method includes: receiving source data, where the source data is configured as a plurality of events with associated timestamps; searching the source data, where the searching provides a search result including N events from the plurality of events, where N is an integer greater than one, where each event of the N events includes a plurality of field values, where at least one event of the N events can include one or more categorical field values and one or more numerical field values; and for an event of the N events, determining a probability of occurrence for each field value of the plurality of field values; and using probabilities determined for the plurality of field values, determining a probability of occurrence for the event.
    Type: Grant
    Filed: January 29, 2020
    Date of Patent: September 12, 2023
    Assignee: Splunk Inc.
    Inventors: Nghi Nguyen, Jacob Leverich, Adam Oliner
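    The probability computation in the last entry above can be sketched simply: estimate a probability of occurrence for each field value from its frequency among the N searched events, then combine the per-field probabilities into a probability for the event. A product (an independence assumption) is used below as one plausible combination rule; the abstract does not fix how the field-value probabilities are combined, and the events and fields are fabricated for illustration.

```python
from collections import Counter

# Illustrative sketch: per-field-value occurrence probabilities from N searched events,
# combined (under an independence assumption) into a probability per event.
events = [
    {"status": "200", "method": "GET",  "bytes": "small"},
    {"status": "200", "method": "GET",  "bytes": "small"},
    {"status": "200", "method": "POST", "bytes": "large"},
    {"status": "500", "method": "GET",  "bytes": "small"},
]
N = len(events)

# Probability of occurrence for each (field, value) pair.
field_value_counts = Counter((f, v) for e in events for f, v in e.items())
prob = {fv: c / N for fv, c in field_value_counts.items()}

def event_probability(event):
    p = 1.0
    for f, v in event.items():
        p *= prob.get((f, v), 1.0 / (N + 1))   # unseen values get a small default probability
    return p

for e in events:
    print(e, "->", round(event_probability(e), 4))   # rarer events get lower probability
```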