Patents Examined by Brent Johnston Hoover
  • Patent number: 11676078
    Abstract: A predictor has a memory which stores at least one example for which an associated outcome is not known. The memory stores at least one decision tree comprising a plurality of nodes connected by edges, the nodes comprising a root node, internal nodes and leaf nodes. Individual ones of the nodes and individual ones of the edges each have an assigned module, comprising parameterized, differentiable operations, such that for each of the internal nodes the module computes a binary outcome for selecting a child node of the internal node. The predictor has a processor configured to compute the prediction by processing the example using a plurality of the differentiable operations selected according to a path through the tree from the root node to a leaf node.
    Type: Grant
    Filed: July 23, 2018
    Date of Patent: June 13, 2023
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Aditya Vithal Nori, Antonio Criminisi, Ryutaro Tanno
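The per-node routing this abstract describes can be sketched roughly as follows; the two-node tree, the linear node modules, their weights, and the 0.5 threshold are hypothetical stand-ins, not the patented implementation:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# Hypothetical node modules: each internal node holds a weight vector and bias
# whose sigmoid output is thresholded into a binary left/right routing decision.
tree = {
    "root":   {"w": [0.5, -0.2], "b": 0.1, "left": "leaf_a", "right": "leaf_b"},
    "leaf_a": {"value": 0.0},
    "leaf_b": {"value": 1.0},
}

def predict(tree, example):
    node = tree["root"]
    while "value" not in node:
        # Differentiable gating: sigmoid of a parameterized linear module; the
        # binary outcome (>= 0.5) selects which child receives the example.
        gate = sigmoid(sum(w * x for w, x in zip(node["w"], example)) + node["b"])
        node = tree[node["right"] if gate >= 0.5 else node["left"]]
    return node["value"]

print(predict(tree, [1.0, 1.0]))   # → 1.0
```

Because each gate is a sigmoid of a parameterized module, the routing decision is differentiable with respect to the node parameters, which is what allows the operations along a root-to-leaf path to be trained end to end.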
  • Patent number: 11657257
    Abstract: Disclosed herein are system, method, and computer program product embodiments for an improved spiking neural network (SNN) configured to learn and perform unsupervised extraction of features from an input stream. An embodiment operates by receiving a set of spike bits corresponding to a set of synapses associated with a spiking neuron circuit. The embodiment applies a first logical AND function to a first spike bit in the set of spike bits and a first synaptic weight of a first synapse in the set of synapses. The embodiment increments a membrane potential value associated with the spiking neuron circuit based on the applying. The embodiment determines that the membrane potential value associated with the spiking neuron circuit reached a learning threshold value. The embodiment then performs a Spike Time Dependent Plasticity (STDP) learning function based on the determination that the membrane potential value of the spiking neuron circuit reached the learning threshold value.
    Type: Grant
    Filed: October 7, 2022
    Date of Patent: May 23, 2023
    Assignee: BrainChip, Inc.
    Inventors: Peter Aj Van Der Made, Anil Shamrao Mankar
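A minimal sketch of the described integration step, assuming binary spike bits and binary synaptic weights; the threshold value and the STDP trigger (reported here as a flag) are illustrative only:

```python
def integrate(spike_bits, weights, membrane, learn_threshold):
    # Logical AND of each incoming spike bit with its binary synaptic weight;
    # the membrane potential is incremented once per matching pair.
    for s, w in zip(spike_bits, weights):
        membrane += (s & w)
    # When the membrane potential reaches the learning threshold, an STDP
    # learning step would be triggered (stubbed here as a boolean flag).
    return membrane, membrane >= learn_threshold

m, fire = integrate([1, 0, 1, 1], [1, 1, 0, 1], membrane=0, learn_threshold=2)
print(m, fire)   # synapses 0 and 3 have both a spike and a set weight
```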
  • Patent number: 11657253
    Abstract: For a content item with unknown tasks performed by a viewing user on an online system, the online system predicts a likelihood of the viewing user interacting with each content item using a prediction model associated with a plurality of tasks. The prediction model comprises a plurality of independent layers, a plurality of shared layers and a plurality of separate layers. Each independent layer is configured to extract features, for each task, that are not shared across the plurality of tasks. The plurality of shared layers are configured to extract common features that are shared across the plurality of tasks. Each separate layer is configured to predict the likelihood of the viewing user performing the task associated with that separate layer based on the features extracted by the plurality of independent layers and the plurality of shared layers.
    Type: Grant
    Filed: April 22, 2021
    Date of Patent: May 23, 2023
    Assignee: Meta Platforms, Inc.
    Inventors: Liang Xiong, Yan Zhu
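The three layer types can be sketched as follows; the layer sizes, the random weights, and the two task names (`click`, `share`) are hypothetical, not taken from the patent:

```python
import math
import random

random.seed(0)

def linear(x, W):
    # Simple dense layer: y_j = sum_i x_i * W[i][j]
    return [sum(xi * wij for xi, wij in zip(x, col)) for col in zip(*W)]

def rand_mat(n_in, n_out):
    return [[random.uniform(-1, 1) for _ in range(n_out)] for _ in range(n_in)]

n_in, n_hid, tasks = 4, 3, ["click", "share"]
shared_W = rand_mat(n_in, n_hid)                           # features shared by all tasks
independent_W = {t: rand_mat(n_in, n_hid) for t in tasks}  # per-task private features
separate_W = {t: rand_mat(2 * n_hid, 1) for t in tasks}    # per-task prediction head

def predict(x):
    shared = linear(x, shared_W)
    out = {}
    for t in tasks:
        private = linear(x, independent_W[t])
        # Each separate head sees both the shared and the task-private features.
        logit = linear(shared + private, separate_W[t])[0]
        out[t] = 1.0 / (1.0 + math.exp(-logit))   # likelihood of performing the task
    return out

print(predict([1.0, 0.5, -0.2, 0.3]))
```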
  • Patent number: 11651216
    Abstract: An exemplary model search may provide optimal explainable models based on a dataset. An exemplary embodiment may identify features from a training dataset, and may map feature costs to the identified features. The search space may be sampled to generate initial or seed candidates, which may be chosen based on one or more objectives and/or constraints. The candidates may be iteratively optimized until an exit condition is met. The optimization may be performed by an external optimizer. The external optimizer may iteratively apply constraints to the candidates to quantify a fitness level of each of the seed candidates. The fitness level may be based on the constraints and objectives. The candidates may be a set of data, or may be trained to form explainable models. The external optimizer may optimize the explainable models until the exit conditions are met.
    Type: Grant
    Filed: June 9, 2022
    Date of Patent: May 16, 2023
    Assignee: UMNAI Limited
    Inventors: Angelo Dalli, Kristian D'Amato, Mauro Pirrone
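One way to read the described loop is as a simple constrained local search; the feature costs, the cost cap, the mutation scheme, and the stand-in fitness below are all assumptions, and the patent's external optimizer is not specified here:

```python
import random

random.seed(1)

# Hypothetical setup: candidates are feature subsets; each feature has a cost,
# and fitness trades an objective proxy against total cost (a constraint).
feature_costs = {"age": 1.0, "income": 3.0, "region": 0.5, "history": 2.0}
max_cost = 4.0

def fitness(candidate):
    cost = sum(feature_costs[f] for f in candidate)
    if cost > max_cost:                      # constraint violated
        return float("-inf")
    return len(candidate) - 0.1 * cost       # stand-in objective

def search(iterations=100):
    features = list(feature_costs)
    best = set(random.sample(features, 2))   # seed candidate
    for _ in range(iterations):              # iterate until the exit condition
        candidate = set(best)
        # Mutate: toggle one randomly chosen feature in or out.
        candidate.symmetric_difference_update({random.choice(features)})
        if fitness(candidate) >= fitness(best):
            best = candidate
    return best

best = search()
print(sorted(best))
```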
  • Patent number: 11640550
    Abstract: The disclosure discloses a method and apparatus for updating a deep learning model. An embodiment of the method comprises executing the following updating: acquiring a training dataset under a preset path; training a preset deep learning model based on the training dataset to obtain a new deep learning model; updating the preset deep learning model to the new deep learning model; incrementing the number of training iterations; determining whether the number of training iterations reaches a threshold of training iterations; stopping the updating if the number of training iterations reaches the threshold; and continuing to execute the updating after an interval of a preset time length if it does not. This embodiment improves model updating efficiency.
    Type: Grant
    Filed: July 3, 2018
    Date of Patent: May 2, 2023
    Assignee: BEIJING BAIDU NETCOM SCIENCE AND TECHNOLOGY CO., LTD.
    Inventors: Lan Liu, Faen Zhang, Kai Zhou, Qian Wang, Kun Liu, Yuanhao Xiao, Dongze Xu, Tianhan Xu, Jiayuan Sun
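The update loop maps almost directly to code; the dataset path, iteration threshold, and interval length below are placeholders:

```python
import time

def train(dataset_path):
    # Stand-in for acquiring the training dataset under the preset path and
    # training the current model to obtain a new one.
    return object()

def update_model(threshold=3, interval_seconds=0.01):
    iterations = 0
    model = None
    while True:
        model = train("/data/training")   # acquire dataset, retrain, swap in new model
        iterations += 1                   # increment training iterations
        if iterations >= threshold:       # stop once the threshold is reached
            break
        time.sleep(interval_seconds)      # otherwise wait a preset interval, repeat
    return iterations

print(update_model())   # → 3
```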
  • Patent number: 11636370
    Abstract: A hybrid quantum-classical (HQC) computer which includes both a classical computer component and a quantum computer component performs generative learning on continuous data distributions. The HQC computer is capable of being implemented using existing and near-term quantum computer components having relatively low circuit depth.
    Type: Grant
    Filed: October 11, 2019
    Date of Patent: April 25, 2023
    Assignee: Zapata Computing, Inc.
    Inventors: Jhonathan Romero, Alan Aspuru-Guzik
  • Patent number: 11630990
    Abstract: The present disclosure provides systems, methods and computer-readable media for optimizing the neural architecture search for the automated machine learning process. In one aspect, a neural architecture search method includes selecting a neural architecture for training as part of an automated machine learning process; collecting statistical parameters on individual nodes of the neural architecture during the training; determining, based on the statistical parameters, active nodes of the neural architecture to form a candidate neural architecture; and validating the candidate neural architecture to produce a trained neural architecture to be used in implementing an application or a service.
    Type: Grant
    Filed: March 19, 2019
    Date of Patent: April 18, 2023
    Assignee: Cisco Technology, Inc.
    Inventors: Abhishek Singh, Debojyoti Dutta
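Determining active nodes from collected statistics might be sketched as a simple threshold filter; the choice of statistic (mean absolute activation), the node names, and the cutoff are assumptions:

```python
# Hypothetical statistical parameters collected per node during training:
# mean absolute activation over the training batches.
node_stats = {"conv1": 0.82, "conv2": 0.01, "fc1": 0.44, "fc2": 0.003}

def active_nodes(stats, threshold=0.05):
    # Nodes whose collected statistic exceeds the threshold are considered
    # active and kept in the candidate architecture; the rest are dropped.
    return sorted(n for n, s in stats.items() if s > threshold)

candidate = active_nodes(node_stats)
print(candidate)   # → ['conv1', 'fc1']
```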
  • Patent number: 11630851
    Abstract: The embodiments set forth techniques for implementing various “prediction engines” that can be configured to provide different kinds of predictions within a mobile computing device. According to some embodiments, each prediction engine can assign itself as an “expert” on one or more “prediction categories” within the mobile computing device. When a software application issues a request for a prediction for a particular category, and two or more prediction engines respond with their respective prediction(s), a “prediction center” can be configured to receive and process the predictions prior to responding to the request. Processing the predictions can involve removing duplicate information that exists across the predictions, sorting the predictions in accordance with confidence levels advertised by the prediction engines, and the like. In this manner, the prediction center can distill multiple predictions down into an optimized prediction and provide the optimized prediction to the software application.
    Type: Grant
    Filed: July 2, 2019
    Date of Patent: April 18, 2023
    Assignee: Apple Inc.
    Inventors: Joao Pedro Lacerda, Gaurav Kapoor
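The distillation step (deduplicate across engines, then sort by advertised confidence) can be sketched as follows; the engine predictions and confidence values are invented:

```python
def distill(predictions):
    # predictions: list of (value, confidence) pairs from registered expert engines.
    best = {}
    for value, conf in predictions:
        # Remove duplicate information across engines, keeping the highest confidence.
        if value not in best or conf > best[value]:
            best[value] = conf
    # Sort in accordance with the confidence levels, highest first.
    return sorted(best.items(), key=lambda vc: -vc[1])

engines = [("Maps", 0.9), ("Mail", 0.6), ("Maps", 0.4), ("Music", 0.7)]
print(distill(engines))   # → [('Maps', 0.9), ('Music', 0.7), ('Mail', 0.6)]
```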
  • Patent number: 11625579
    Abstract: A spiking neural network device according to an embodiment includes a synaptic element, a neuron circuit, a synaptic potentiator, and a synaptic depressor. The synaptic element has a variable weight. The neuron circuit inputs a spike voltage having a magnitude adjusted in accordance with the weight of the synaptic element via the synaptic element, and fires when a predetermined condition is satisfied. The synaptic potentiator performs a potentiating operation for potentiating the weight of the synaptic element depending on input timing of the spike voltage and firing timing of the neuron circuit. The synaptic depressor performs a depression operation for depressing the weight of the synaptic element in accordance with a schedule independent from the input timing of the spike voltage and the firing timing of the neuron circuit.
    Type: Grant
    Filed: February 27, 2020
    Date of Patent: April 11, 2023
    Assignee: KABUSHIKI KAISHA TOSHIBA
    Inventors: Yoshifumi Nishi, Kumiko Nomura, Radu Berdan, Takao Marukame
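A toy version of the asymmetric rule, with timing-dependent potentiation and schedule-driven depression; all constants, the timing window, and the schedule period are illustrative:

```python
def update_weight(w, spike_t, fire_t, step, depress_every=10,
                  lr=0.05, decay=0.02, w_min=0.0, w_max=1.0):
    # Potentiation depends on relative timing: a spike arriving just before
    # the neuron fires strengthens the synapse.
    if spike_t is not None and fire_t is not None and 0 < fire_t - spike_t <= 2:
        w = min(w_max, w + lr)
    # Depression follows a fixed schedule, independent of spike/fire timing.
    if step % depress_every == 0:
        w = max(w_min, w - decay)
    return w

w = 0.5
w = update_weight(w, spike_t=3, fire_t=4, step=7)         # potentiated
print(round(w, 2))   # → 0.55
w = update_weight(w, spike_t=None, fire_t=None, step=10)  # scheduled depression
print(round(w, 2))   # → 0.53
```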
  • Patent number: 11604993
    Abstract: Techniques for training a computationally-expensive layer, such as a convolutional layer, of a machine-learning model toward a target filter. If the training drives parameters associated with the layer to match or come close enough to the target filter, the layer may be removed, replaced, and/or reduced in size, depending on the type of target filter used.
    Type: Grant
    Filed: May 3, 2019
    Date of Patent: March 14, 2023
    Assignee: Zoox, Inc.
    Inventor: Qijun Tan
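A sketch of driving a layer's parameters toward a target filter with an auxiliary L2 penalty; the flattened 3x3 kernel, the identity (delta) target, and the learning rate are assumptions, and a real implementation would add this penalty to the task loss rather than optimize it alone:

```python
def l2_to_target(params, target):
    return sum((p - t) ** 2 for p, t in zip(params, target))

# Hypothetical 3x3 kernel, flattened. The identity (delta) filter passes the
# input through unchanged, so a layer driven to it can be removed.
target_filter = [0, 0, 0, 0, 1, 0, 0, 0, 0]
kernel = [0.2, -0.1, 0.0, 0.3, 0.5, 0.1, -0.2, 0.0, 0.1]

lr = 0.1
for _ in range(200):
    # Gradient of the auxiliary loss ||kernel - target||^2 pulls the layer's
    # parameters toward the target filter during training.
    kernel = [k - lr * 2 * (k - t) for k, t in zip(kernel, target_filter)]

close_enough = l2_to_target(kernel, target_filter) < 1e-6
print(close_enough)   # layer matches the target filter → eligible for removal
```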
  • Patent number: 11593643
    Abstract: A quaternion deep neural network (QTDNN) includes a plurality of modular hidden layers, each comprising a set of QT computation sublayers, including a quaternion (QT) general matrix multiplication sublayer, a QT non-linear activations sublayer, and a QT sampling sublayer arranged along a forward signal propagation path. Each QT computation sublayer of the set has a plurality of QT computation engines. In each modular hidden layer, a steering sublayer precedes each of the QT computation sublayers along the forward signal propagation path. The steering sublayer directs a forward-propagating quaternion-valued signal to a selected at least one QT computation engine of the subsequent QT computation sublayer.
    Type: Grant
    Filed: May 31, 2018
    Date of Patent: February 28, 2023
    Assignee: Intel Corporation
    Inventors: Monica Lucia Martinez-Canales, Sudhir K. Singh, Vinod Sharma, Malini Krishnan Bhandaru
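A QT general matrix multiplication sublayer is built on the Hamilton product, which is standard quaternion algebra; the tiny matrix-vector wrapper around it is a sketch, not the patented engine:

```python
def qt_mul(p, q):
    # Hamilton product of two quaternions (w, x, y, z): the building block of
    # a quaternion general matrix multiplication.
    pw, px, py, pz = p
    qw, qx, qy, qz = q
    return (pw*qw - px*qx - py*qy - pz*qz,
            pw*qx + px*qw + py*qz - pz*qy,
            pw*qy - px*qz + py*qw + pz*qx,
            pw*qz + px*qy - py*qx + pz*qw)

def qt_matvec(M, v):
    # Quaternion matrix-vector product: entries of M and v are quaternions,
    # combined with the Hamilton product and componentwise sums.
    out = []
    for row in M:
        acc = (0.0, 0.0, 0.0, 0.0)
        for m, x in zip(row, v):
            acc = tuple(a + b for a, b in zip(acc, qt_mul(m, x)))
        out.append(acc)
    return out

i = (0, 1, 0, 0)
j = (0, 0, 1, 0)
print(qt_mul(i, j))   # → (0, 0, 0, 1), i.e. i*j = k
```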
  • Patent number: 11586929
    Abstract: This disclosure relates to method and system for optimizing memory requirement for training an artificial neural network (ANN) model employed for natural language processing (NLP). In one embodiment, the method may include receiving a plurality of training parameters and a plurality of model parameters, selecting a set of model parameters from among the plurality of model parameters for training the ANN model based on a characteristic and an architecture of the ANN model, masking the set of model parameters in one or more layers of the ANN model based on a set of pre-defined rules to generate a set of masked model parameters, determining an amount of memory required for training the ANN model based on the set of masked model parameters, and providing the set of masked model parameters for training the ANN model when the amount of memory required is less than a determined threshold.
    Type: Grant
    Filed: March 30, 2019
    Date of Patent: February 21, 2023
    Assignee: Wipro Limited
    Inventors: Rishav Das, Sourav Mudi
  • Patent number: 11580399
    Abstract: An electronic device, method, and computer readable medium for 3D association of detected objects are provided. The electronic device includes a memory and at least one processor coupled to the memory. The at least one processor is configured to convolve an input to a neural network with a basis kernel to generate a convolution result, scale the convolution result by a scalar to create a scaled convolution result, and combine the scaled convolution result with one or more of a plurality of scaled convolution results to generate an output feature map.
    Type: Grant
    Filed: April 30, 2019
    Date of Patent: February 14, 2023
    Assignee: Samsung Electronics Co., Ltd.
    Inventors: Chenchi Luo, Manish Goel, David Liu
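The basis-kernel scheme can be illustrated in one dimension; the basis kernels, scalars, and signal below are invented, and by linearity the sum of scaled convolution results equals convolving with the scalar-weighted combination of the basis kernels:

```python
def conv1d(signal, kernel):
    # Valid-mode 1-D convolution (correlation form, sufficient for the sketch).
    n = len(signal) - len(kernel) + 1
    return [sum(signal[i + j] * kernel[j] for j in range(len(kernel)))
            for i in range(n)]

# Hypothetical basis kernels and per-filter scalars: the effective filter is a
# scalar-weighted sum of shared basis kernels, so only the scalars need to be
# stored per filter.
basis = [[1.0, 0.0, -1.0], [1.0, 2.0, 1.0]]
scalars = [0.5, 0.25]
signal = [1.0, 2.0, 3.0, 4.0]

# Convolve the input with each basis kernel, scale each result, then combine.
results = [conv1d(signal, k) for k in basis]
feature_map = [sum(s * r[i] for s, r in zip(scalars, results))
               for i in range(len(results[0]))]
print(feature_map)   # → [1.0, 2.0]
```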
  • Patent number: 11580406
    Abstract: Provided is an artificial neural network learning apparatus for deep learning. The apparatus includes an input unit configured to acquire an input data or a training data, a memory configured to store the input data, the training data, and a deep learning artificial neural network model, and a processor configured to perform computation based on the artificial neural network model, in which the processor sets the initial weight depending on the number of nodes belonging to a first layer and the number of nodes belonging to a second layer of the artificial neural network model, and determines the initial weight by compensation by multiplying a standard deviation (σ) by a square root of a reciprocal of a probability of a normal probability distribution for a remaining section except for a section in which an output value of the activation function converges to a specific value.
    Type: Grant
    Filed: December 30, 2019
    Date of Patent: February 14, 2023
    Assignee: Markany Inc.
    Inventors: Seung Yeob Chae, So Won Kim, Min Soo Park
  • Patent number: 11574194
    Abstract: A computing device includes one or more processors, random access memory (RAM), and a non-transitory computer-readable storage medium storing instructions for execution by the one or more processors. The computing device receives first data on which to train a neural network comprising at least one quantized layer and performs a set of training iterations to train weights for the neural network. Each training iteration of the set of training iterations includes stochastically writing values to the random access memory for a set of activations of the at least one quantized layer of the neural network using first write parameters corresponding to a first write error rate. The computing device stores trained values for the weights of the neural network. The trained neural network is configured to classify second data based on the stored values.
    Type: Grant
    Filed: March 27, 2019
    Date of Patent: February 7, 2023
    Assignee: Integrated Silicon Solution, (Cayman) Inc.
    Inventor: Michail Tzoufras
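Stochastic writes at a given error rate can be modeled as independent bit flips on binarized activations; the error rate and activation vector here are illustrative, and a training loop would apply this to each write of the quantized layer's activations:

```python
import random

random.seed(42)

def stochastic_write(activations, write_error_rate):
    # Model a RAM write whose bits may fail: each binary activation is
    # flipped with probability equal to the write error rate.
    return [a ^ 1 if random.random() < write_error_rate else a
            for a in activations]

acts = [1, 0, 1, 1, 0, 1, 0, 0]
noisy = stochastic_write(acts, write_error_rate=0.25)
flipped = sum(a != b for a, b in zip(acts, noisy))
print(noisy, flipped)
```

Training through this noise encourages weights that tolerate the hardware's write error rate at inference time.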
  • Patent number: 11574205
    Abstract: Provided are techniques for unified cognition for a virtual personal cognitive assistant. A personal cognitive agent creates an association with an entity and a personalized embodied cognition manager that includes an entity agent registry, wherein the personal cognitive agent comprises a virtual personal cognitive assistant. Selection of a first cognitive assistant agent from a first domain and a second cognitive assistant agent from a second domain is received. Input from the entity is received. A goal based on the input is identified. Unified cognition is provided by coordinating the first cognitive assistant agent of the first domain and the second cognitive assistant agent of the second domain to generate one or more actions to meet the goal. A response is provided to the input with an indication of the goal.
    Type: Grant
    Filed: May 2, 2019
    Date of Patent: February 7, 2023
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Joanna W. Ng, Ernest Grady Booch
  • Patent number: 11564870
    Abstract: There is provided a method comprising: computing a target nutritional goal to reach at an end of a time interval based on a real time energy expenditure of a patient, wherein the target nutritional goal comprises a volume to be delivered (VTBD) by the end of the time interval corresponding to a target amount of energy expenditure of the patient over the time interval, computing a target feeding profile defining a target feeding rate for enteral feeding of the patient for reaching the VTBD by the end of the time interval, continuously monitoring the real time energy expenditure, adapting the target nutritional goal and corresponding VTBD to compute a maximum VTBD to reach at the end of the time interval according to the monitoring, and dynamically adapting the target feeding rate and the corresponding target feeding profile for a remaining portion of the time interval for reaching the maximum VTBD.
    Type: Grant
    Filed: February 16, 2022
    Date of Patent: January 31, 2023
    Assignee: ART MEDICAL Ltd.
    Inventors: Liron Elia, Gavriel J. Iddan
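The rate adaptation reduces to dividing the remaining volume by the remaining time whenever the VTBD is re-computed; the volumes and interval below are invented numbers, not clinical guidance:

```python
def target_feeding_rate(vtbd_ml, delivered_ml, hours_remaining):
    # Rate (ml/h) needed to deliver the remaining volume by the end of the
    # time interval; re-computed whenever monitoring of the real time energy
    # expenditure adapts the VTBD.
    remaining = max(0.0, vtbd_ml - delivered_ml)
    return remaining / hours_remaining

# VTBD raised from 1500 to 1600 ml after monitoring shows higher expenditure;
# 600 ml already delivered, 10 h left in the interval.
print(target_feeding_rate(1600, 600, 10))   # → 100.0 ml/h
```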
  • Patent number: 11568264
    Abstract: Provided are an analysis device, an analysis method, and an analysis program which predict the performance of a learning model when learning processing is executed using multiple algorithms. Using a predictive model produced by supervised learning using first shape information representing a global shape of a first loss function set for a prescribed problem and the performance of the learning model as learning data, an analysis device 10 predicts, for each of the multiple algorithms, the performance of a learning model when machine learning by the learning model is executed so that a second loss function has a reduced value on the basis of second shape information representing a global shape of the second loss function set for a new problem.
    Type: Grant
    Filed: September 25, 2020
    Date of Patent: January 31, 2023
    Inventor: Nozomu Kubota
  • Patent number: 11562210
    Abstract: A processor holds the second change value of the energy value which has been calculated by the processor and corresponds to each of a predetermined number of state transitions, in entries corresponding to the input identification information k. When the processor stochastically determines whether or not any of the state transitions is accepted, by a relative relation between the first change value and thermal excitation energy based on the temperature value, the first change value of the energy value calculated by the processor, and a random number value, the processor stochastically determines whether or not any of the state transitions is accepted, by adding the offset value y to the first change value. The offset value y is obtained by multiplying the second change value held by any entry selected from the entries based on the input identification information k, by coefficient information ?.
    Type: Grant
    Filed: November 8, 2019
    Date of Patent: January 24, 2023
    Assignee: FUJITSU LIMITED
    Inventor: Noboru Yoneoka
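The acceptance test with an offset can be sketched as Metropolis-style sampling; the sign convention of the offset and all constants here are assumptions (the patent derives the offset from previously held second change values scaled by a coefficient, which is not reproduced here):

```python
import math
import random

random.seed(3)

def accept(delta_e, offset, temperature):
    # Stochastic acceptance: the offset y is added to the energy change before
    # it is compared against thermal excitation energy at this temperature.
    effective = delta_e + offset
    if effective <= 0:
        return True
    return random.random() < math.exp(-effective / temperature)

# An uphill move that is rarely accepted can be helped out of a local minimum
# by a negative offset (illustrative sign convention).
delta_e, temperature = 5.0, 1.0
plain = sum(accept(delta_e, 0.0, temperature) for _ in range(10000))
boosted = sum(accept(delta_e, -4.0, temperature) for _ in range(10000))
print(plain < boosted)   # the offset raises the acceptance rate
```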
  • Patent number: 11551284
    Abstract: A session-based recommendation method and device according to one or more embodiments of this disclosure are provided, which use a pre-trained recommendation model to perform item recommendation. The method includes the following: a directed session graph is constructed according to a session to be predicted; the directed session graph is then input into a gated graph neural network which outputs the item embedding vectors; a user's dynamic preference is determined according to the user's current preference and a first long-term preference, where the current preference is the item embedding vector of the last item in the session and the first long-term preference is determined according to the item embedding vectors and an importance score of each item; a prediction score of a respective item is determined according to the dynamic preference and the item embedding vector; and a recommended item is output according to the prediction score of the respective item.
    Type: Grant
    Filed: May 16, 2022
    Date of Patent: January 10, 2023
    Assignee: National University of Defense Technology
    Inventors: Fei Cai, Chengyu Song, Yitong Wang, Zhiqiang Pan, Xin Zhang, Mengru Wang, Wanyu Chen, Honghui Chen