Patents Examined by Benjamin P. Geib
  • Patent number: 11972329
    Abstract: A system is provided for facilitating multi-label classification. During operation, the system maintains a set of training vectors. A respective vector represents an object and is associated with one or more labels that belong to a label set. After receiving an input vector, the system determines a similarity value between the input vector and one or more training vectors. The system further determines one or more labels associated with the input vector based on the similarity values between the input vector and the training vectors and their corresponding associated labels.
    Type: Grant
    Filed: December 31, 2018
    Date of Patent: April 30, 2024
    Assignee: Xerox Corporation
    Inventors: Hoda M. A. Eldardiry, Ryan A. Rossi
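    A minimal sketch of the similarity-based multi-label classification described in the abstract above, assuming cosine similarity and a top-k label vote; the similarity measure, k, and the vote threshold are illustrative choices, not details taken from the patent:

```python
import numpy as np

def classify_multilabel(input_vec, train_vecs, train_labels, k=3, min_votes=2):
    """Assign labels to input_vec from the labels of its most similar training vectors.

    train_vecs   : (N, D) array of training vectors
    train_labels : list of N label sets (an object may carry several labels)
    """
    # Similarity value between the input vector and every training vector (cosine)
    norms = np.linalg.norm(train_vecs, axis=1) * np.linalg.norm(input_vec)
    sims = train_vecs @ input_vec / np.where(norms == 0, 1.0, norms)

    # Collect the labels associated with the k most similar training vectors
    votes = {}
    for idx in np.argsort(sims)[::-1][:k]:
        for label in train_labels[idx]:
            votes[label] = votes.get(label, 0) + 1

    # Keep labels supported by enough similar neighbours
    return {label for label, count in votes.items() if count >= min_votes}

# Toy usage
train_vecs = np.array([[1.0, 0.0], [0.9, 0.1], [0.0, 1.0]])
train_labels = [{"sports"}, {"sports", "news"}, {"finance"}]
print(classify_multilabel(np.array([0.95, 0.05]), train_vecs, train_labels))  # {'sports'}
```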
  • Patent number: 11966852
    Abstract: The present disclosure generally provides systems and methods for situation awareness. When executing a set of instructions stored in at least one non-transitory storage medium, at least one processor may be configured to cause the system to perform operations including obtaining, from at least one of one or more sensors, environmental data associated with an environment corresponding to a first time point, generating a first static global representation of an environment corresponding to the first time point based at least in part on the environmental data, generating a first dynamic global representation of the environment corresponding to the first time point based at least in part on the environmental data, and estimating, based on the first static global representation and the first dynamic global representation, a target state of the environment at a target time point using a target estimation model.
    Type: Grant
    Filed: December 11, 2019
    Date of Patent: April 23, 2024
    Assignee: SHANGHAI UNITED IMAGING INTELLIGENCE CO., LTD.
    Inventors: Ziyan Wu, Srikrishna Karanam, Lidan Wang
  • Patent number: 11960997
    Abstract: Disclosed herein are techniques for classifying data with a data processing circuit. In one embodiment, the data processing circuit includes a probabilistic circuit configurable to generate a decision at a pre-determined probability, and an output generation circuit including an output node and configured to receive input data and a weight, and generate output data at the output node for approximating a product of the input data and the weight. The generation of the output data includes propagating the weight to the output node according to a first decision of the probabilistic circuit. The probabilistic circuit is configured to generate the first decision at a probability determined based on the input data.
    Type: Grant
    Filed: January 7, 2022
    Date of Patent: April 16, 2024
    Assignee: Amazon Technologies, Inc.
    Inventors: Randy Huang, Ron Diamant
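    The abstract above describes approximating input × weight by propagating the weight according to a decision generated at a probability derived from the input. A minimal software sketch of that stochastic idea, assuming inputs normalized to [0, 1] and averaging over repeated trials (the normalization and trial count are assumptions for illustration, not the hardware mechanism itself):

```python
import random

def stochastic_multiply(x, w, trials=10_000):
    """Approximate x * w by propagating w with probability x (x assumed in [0, 1]).

    Each trial emits the weight when a random draw falls below the input value,
    so the mean over many trials approaches x * w.
    """
    acc = 0.0
    for _ in range(trials):
        if random.random() < x:   # decision generated at probability x
            acc += w              # propagate the weight to the output node
    return acc / trials

random.seed(0)
print(stochastic_multiply(0.25, 0.8))   # approaches 0.2 as the trial count grows
```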
  • Patent number: 11954589
    Abstract: An artificial-neuron device includes an integration-generation circuit coupled between an input at which an input signal is received and an output at which an output signal is delivered, and a refractory circuit inhibiting the integrator circuit after the delivery of the output signal. The refractory circuit is formed by a first MOS transistor having a first conduction-terminal coupled to a supply node, a second conduction-terminal coupled to a common node, and a control-terminal coupled to the output, and a second MOS transistor having a first conduction-terminal coupled to the input, a second conduction-terminal coupled to a reference node at which a reference voltage is received, and a control-terminal coupled to the common node. A resistive-capacitive circuit is coupled between the supply node and the reference node and has a tap coupled to the common node, with the inhibition duration being dependent upon a time constant of the resistive-capacitive circuit.
    Type: Grant
    Filed: January 11, 2022
    Date of Patent: April 9, 2024
    Assignee: STMicroelectronics SA
    Inventors: Philippe Galy, Thomas Bedecarrats
  • Patent number: 11934945
    Abstract: Techniques in advanced deep learning provide improvements in one or more of accuracy, performance, and energy efficiency, such as accuracy of learning, accuracy of prediction, speed of learning, performance of learning, and energy efficiency of learning. An array of processing elements performs flow-based computations on wavelets of data. Each processing element has a respective compute element and a respective routing element. Each compute element has processing resources and memory resources. Each router enables communication via wavelets with at least nearest neighbors in a 2D mesh. Stochastic gradient descent, mini-batch gradient descent, and continuous propagation gradient descent are techniques usable to train weights of a neural network modeled by the processing elements. Reverse checkpoint is usable to reduce memory usage during the training.
    Type: Grant
    Filed: February 23, 2018
    Date of Patent: March 19, 2024
    Assignee: Cerebras Systems Inc.
    Inventors: Sean Lie, Michael Morrison, Michael Edwin James, Gary R. Lauterbach, Srikanth Arekapudi
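    As one concrete reference point for the training techniques the abstract lists, here is a minimal mini-batch stochastic gradient descent loop for a least-squares model; the model, loss, learning rate, and batch size are illustrative assumptions, and the patent itself concerns running such training across a mesh of processing elements rather than this particular loop:

```python
import numpy as np

def minibatch_sgd(X, y, lr=0.1, batch_size=4, epochs=100, seed=0):
    """Fit weights w for y ≈ X @ w using mini-batch stochastic gradient descent."""
    rng = np.random.default_rng(seed)
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        order = rng.permutation(len(X))               # shuffle, then walk through mini-batches
        for start in range(0, len(X), batch_size):
            batch = order[start:start + batch_size]
            err = X[batch] @ w - y[batch]
            w -= lr * X[batch].T @ err / len(batch)   # gradient step on the batch's squared error
    return w

X = np.random.default_rng(1).normal(size=(64, 2))
y = X @ np.array([2.0, -3.0])
print(minibatch_sgd(X, y))   # approaches [2, -3]
```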
  • Patent number: 11928603
    Abstract: Methods, computer program products, and systems are presented. The methods include, for instance: building, by one or more DNA processors, a DNA strand corresponding to a conditional expectation. The methods include, for instance: obtaining, by one or more DNA processors, a conditional expectation having a regularization metric.
    Type: Grant
    Filed: October 31, 2022
    Date of Patent: March 12, 2024
    Assignee: Kyndryl, Inc.
    Inventors: Gary F. Diamanti, Aaron K. Baughman, Mauro Marzorati
  • Patent number: 11922311
    Abstract: A computing device trains a fair prediction model. A prediction model is trained and executed with observation vectors. A weight value is computed for each observation vector based on whether the predicted target variable value of a respective observation vector of the plurality of observation vectors has a predefined target event value. An observation vector is relabeled based on the computed weight value. The prediction model is retrained with each observation vector weighted by a respective computed weight value and with the target variable value of any observation vector that was relabeled. The retrained prediction model is executed. A conditional moments matrix is computed. A constraint violation matrix is computed. Computing the weight value through computing the constraint violation matrix is repeated until a stop criterion indicates retraining of the prediction model is complete. The retrained prediction model is output.
    Type: Grant
    Filed: June 12, 2023
    Date of Patent: March 5, 2024
    Assignee: SAS Institute Inc.
    Inventors: Xinmin Wu, Ricky Dee Tharrington, Jr., Ralph Walter Abbey
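    A structural sketch of the training loop the abstract outlines: train, predict, derive a per-observation weight from whether the prediction hits the predefined event value, retrain with those weights, and repeat until a stop criterion. The weighting rule and stop criterion below are simple placeholders, not the conditional-moments and constraint-violation computations the patent describes:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def iterative_reweighted_training(X, y, event_value=1, max_rounds=5, boost=2.0):
    """Repeatedly retrain a classifier with observation weights derived from its own predictions."""
    weights = np.ones(len(y))
    model = LogisticRegression().fit(X, y, sample_weight=weights)
    for _ in range(max_rounds):
        predicted = model.predict(X)
        new_weights = np.where(predicted == event_value, boost, 1.0)  # placeholder weighting rule
        if np.allclose(new_weights, weights):                         # placeholder stop criterion
            break
        weights = new_weights
        model = LogisticRegression().fit(X, y, sample_weight=weights)
    return model

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))
y = (X[:, 0] + 0.5 * rng.normal(size=100) > 0).astype(int)
iterative_reweighted_training(X, y)
```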
  • Patent number: 11915139
    Abstract: Methods, systems, and apparatus for updating machine learning models to improve locality are described. In one aspect, a method includes receiving data of a machine learning model. The data represents operations of the machine learning model and data dependencies between the operations. Data specifying characteristics of a memory hierarchy for a machine learning processor on which the machine learning model is going to be deployed is received. The memory hierarchy includes multiple memories at multiple memory levels for storing machine learning data used by the machine learning processor when performing machine learning computations using the machine learning model. An updated machine learning model is generated by modifying the operations and control dependencies of the machine learning model to account for the characteristics of the memory hierarchy. Machine learning computations are performed using the updated machine learning model.
    Type: Grant
    Filed: February 15, 2022
    Date of Patent: February 27, 2024
    Assignee: Google LLC
    Inventors: Doe Hyun Yoon, Nishant Patil, Norman Paul Jouppi
  • Patent number: 11847536
    Abstract: A method, system and computer readable medium for performing a cognitive browse operation comprising: receiving training data, the training data comprising information based upon user interaction with cognitive attributes; performing a machine learning operation on the training data; generating a cognitive profile based upon the information generated by performing the machine learning operation; and, performing a cognitive browse operation on a corpus of content based upon the cognitive profile, the cognitive browse operation returning cognitive browse results specific to the cognitive profile of the user.
    Type: Grant
    Filed: September 25, 2017
    Date of Patent: December 19, 2023
    Assignee: Tecnotree Technologies, Inc.
    Inventors: Neeraj Chawla, Matthew Sanchez, Andrea M. Ricaurte, Dilum Ranatunga, Ayan Acharya, Hannah R. Lindsley
  • Patent number: 11842199
    Abstract: An asynchronous pipeline includes a first stage and one or more second stages. A controller provides control signals to the first stage to indicate a modification to an operating speed of the first stage. The modification is determined based on a comparison of a completion status of the first stage to one or more completion statuses of the one or more second stages. In some cases, the controller provides control signals indicating modifications to an operating voltage applied to the first stage and a drive strength of a buffer in the first stage. Modules can be used to determine the completion statuses of the first stage and the one or more second stages based on the monitored output signals generated by the stages, output signals from replica critical paths associated with the stages, or a lookup table that indicates estimated completion times.
    Type: Grant
    Filed: June 26, 2020
    Date of Patent: December 12, 2023
    Assignee: Advanced Micro Devices, Inc.
    Inventors: Greg Sadowski, John Kalamatianos, Shomit N. Das
  • Patent number: 11823032
    Abstract: A method for tuning the conductance of a molecular network includes a network of covalently bound molecular units, which are molecular entities assembled so as to form a network that can typically be compared to a finite, imperfect 2D crystal. Each of the molecular entities includes: a branching junction; M branches (M ≥ 3) branching from said branching junction, where each of the M branches comprises an aliphatic group; and M linkers, each terminating a respective one of the M branches. Each of the M linkers is covalently bound to a linker of another molecular entity of the network. The method involves tuning the electrical conductance of molecular entities of a subset of the molecular entities of the network, in one or several (e.g., parallel or successive) steps.
    Type: Grant
    Filed: January 23, 2019
    Date of Patent: November 21, 2023
    Assignee: International Business Machines Corporation
    Inventors: Leo Gross, Shadi Fatayer, Florian Albrecht, Fabian Schulz, Katharina Kaiser
  • Patent number: 11809980
    Abstract: Aspects of the present disclosure provide techniques for automated data classification through machine learning. Embodiments include providing first inputs to a first machine learning model based on a column header of a column from a table and receiving a first output from the first machine learning model in response to the first inputs, wherein the first output indicates a first likelihood that the column relates to a given classification. Embodiments include providing second inputs to a second machine learning model based on a value from the column and receiving a second output from the second machine learning model in response to the second inputs, wherein the second output indicates a second likelihood that the value relates to the given classification. Embodiments include determining whether to associate the value with the given classification based on the first output and the second output.
    Type: Grant
    Filed: April 28, 2023
    Date of Patent: November 7, 2023
    Assignee: INTUIT, INC.
    Inventor: Vignesh Radhakrishnan
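    A minimal sketch of the two-model combination the abstract describes: one model scores the column header, another scores an individual value, and the outputs are combined to decide whether the value gets the classification. The stand-in keyword/digit scorers and the averaging-plus-threshold rule are assumptions for illustration:

```python
def header_likelihood(header: str) -> float:
    """Stand-in for the first model: likelihood that the column holds phone numbers."""
    return 0.9 if "phone" in header.lower() else 0.1

def value_likelihood(value: str) -> float:
    """Stand-in for the second model: likelihood that the value is a phone number."""
    digits = sum(ch.isdigit() for ch in value)
    return 0.9 if digits >= 7 else 0.1

def associate_value(header: str, value: str, threshold: float = 0.5) -> bool:
    """Associate the value with the classification when the combined likelihood clears the threshold."""
    combined = 0.5 * header_likelihood(header) + 0.5 * value_likelihood(value)
    return combined >= threshold

print(associate_value("Phone Number", "555-867-5309"))   # True
print(associate_value("Notes", "call back later"))       # False
```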
  • Patent number: 11810011
    Abstract: A model including a plurality of intermediate operations may be identified. An intermediate operation of the plurality of intermediate operations may be selected to provide output values associated with the plurality of intermediate operations during execution of the model. A first output value associated with the intermediate operation may be received during a first execution of the model. A second output value associated with the intermediate operation may be received during a second execution of the model. A difference between the first output value and the second output value may be determined to satisfy a threshold. A notification may be transmitted that indicates data drift of the model in view of the difference between the first output value and the second output value satisfying the threshold.
    Type: Grant
    Filed: January 2, 2018
    Date of Patent: November 7, 2023
    Assignee: Red Hat, Inc.
    Inventors: William Benton, Erik Erlandson
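    A minimal sketch of the drift check the abstract describes: capture an intermediate operation's output on two executions of the model and send a notification when the difference satisfies a threshold. The toy model, the summary statistic, and the threshold value are illustrative assumptions:

```python
import numpy as np

def intermediate_output(weights, x):
    """Toy model whose selected intermediate operation is a linear layer; return its mean activation."""
    return float(np.mean(x @ weights))

def check_drift(first_output, second_output, threshold=0.5):
    """Report data drift when the two intermediate outputs differ by more than the threshold."""
    difference = abs(first_output - second_output)
    if difference > threshold:
        print(f"data drift notification: intermediate output moved by {difference:.3f}")
    return difference > threshold

weights = np.array([[0.2], [0.8]])
first = intermediate_output(weights, np.array([[1.0, 1.0]]))    # first execution of the model
second = intermediate_output(weights, np.array([[4.0, 3.0]]))   # second execution, drifted inputs
check_drift(first, second)
```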
  • Patent number: 11803752
    Abstract: Implementations of the present specification provide a model-based prediction method and apparatus. The method includes: a model running environment receives an input tensor of a machine learning model; the model running environment sends a table query request to an embedding running environment, the table query request including the input tensor, to request low-dimensional conversion of the input tensor; the model running environment receives a table query result returned by the embedding running environment, the table query result being obtained by the embedding running environment by performing embedding query and processing based on the input tensor; and the model running environment inputs the table query result into the machine learning model, and runs the machine learning model to complete model-based prediction.
    Type: Grant
    Filed: February 2, 2021
    Date of Patent: October 31, 2023
    Assignee: Advanced New Technologies Co., Ltd.
    Inventors: Yongchao Liu, Sizhong Li, Guozhen Pan, Jianguo Xu, Qiyin Huang
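    A minimal sketch of the split the abstract describes: the model running environment sends a table query request for its input, an embedding environment answers with a low-dimensional result, and the model runs on that result. The embedding table, its dimensions, and the toy model are assumptions for illustration:

```python
import numpy as np

class EmbeddingEnvironment:
    """Holds the embedding table and answers table query requests with low-dimensional vectors."""
    def __init__(self, vocab_size=1000, dim=4, seed=0):
        self.table = np.random.default_rng(seed).normal(size=(vocab_size, dim))

    def query(self, ids):
        # Embedding lookup: map sparse integer ids to dense low-dimensional vectors
        return self.table[np.asarray(ids)]

class ModelEnvironment:
    """Receives the input tensor, requests the table query, and runs the model on the result."""
    def __init__(self, embedding_env, dim=4, seed=1):
        self.embedding_env = embedding_env
        self.weights = np.random.default_rng(seed).normal(size=dim)

    def predict(self, input_ids):
        dense = self.embedding_env.query(input_ids)      # table query request / result
        return float(dense.mean(axis=0) @ self.weights)  # run the model on the query result

model_env = ModelEnvironment(EmbeddingEnvironment())
print(model_env.predict([17, 256, 3]))
```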
  • Patent number: 11790036
    Abstract: A computing device trains a fair machine learning model. A predicted target variable is defined using a trained prediction model. The prediction model is trained with weighted observation vectors. The predicted target variable is updated using the prediction model trained with weighted observation vectors. A true conditional moments matrix and a false conditional moments matrix are computed. The training and updating with weighted observation vectors are repeated until a number of iterations is performed. When a computed conditional moments matrix indicates to adjust a bound value, the bound value is updated based on an upper bound value or a lower bound value, and the repeated training and updating with weighted observation vectors is repeated with the bound value replaced with the updated bound value until the conditional moments matrix indicates no further adjustment of the bound value is needed. A fair prediction model is trained with the updated bound value.
    Type: Grant
    Filed: November 2, 2022
    Date of Patent: October 17, 2023
    Assignee: SAS Institute Inc.
    Inventors: Xinmin Wu, Xin Jiang Hunt, Ralph Walter Abbey
  • Patent number: 11790280
    Abstract: The invention provides methods for computing with chemicals by encoding digital data into a plurality of chemicals to obtain a dataset; translating the dataset into a chemical form; reading the dataset; querying the dataset by performing an operation to obtain a perceptron; and analyzing the perceptron for identifying chemical structure and/or concentration of at least one of the chemicals, thereby developing a chemical computational language. The invention demonstrates a workflow for representing abstract data in synthetic metabolomes. Also presented are several demonstrations of kilobyte-scale image datasets stored in synthetic metabolomes, recovered at >99% accuracy.
    Type: Grant
    Filed: July 19, 2021
    Date of Patent: October 17, 2023
    Assignee: BROWN UNIVERSITY
    Inventors: Brenda Rubenstein, Jacob Karl Rosenstein, Christopher Arcadia, Shui Ling Chen, Amanda Doris Dombroski, Joseph D. Geiser, Eamonn Kennedy, Eunsuk Kim, Kady M. Oakley, Sherief Reda, Christopher Rose, Jason Kelby Sello, Hokchhay Tann, Peter Weber
  • Patent number: 11790235
    Abstract: Computer systems and methods modify a base deep neural network (DNN). The method comprises replacing the target node of the base DNN with a compound node to thereby create a modified base DNN. The compound node comprises at least first and second nodes. The first node is trained to detect target node patterns in inputs to the first node and the second node is trained to detect an absence of the target node patterns in inputs to the second node, and the first and second nodes are trained to be non-complementary.
    Type: Grant
    Filed: December 28, 2022
    Date of Patent: October 17, 2023
    Assignee: D5AI LLC
    Inventor: James K. Baker
  • Patent number: 11790221
    Abstract: Many of the features of neural networks for machine learning can naturally be mapped into the quantum optical domain by introducing the quantum optical neural network (QONN). A QONN can be trained to perform a range of quantum information processing tasks, including newly developed protocols for quantum optical state compression, reinforcement learning, black-box quantum simulation and one-way quantum repeaters. A QONN can generalize from only a small set of training data onto previously unseen inputs. Simulations indicate that QONNs are a powerful design tool for quantum optical systems and, leveraging advances in integrated quantum photonics, a promising architecture for next generation quantum processors.
    Type: Grant
    Filed: March 23, 2020
    Date of Patent: October 17, 2023
    Assignee: Massachusetts Institute of Technology
    Inventors: Jacques Johannes Carolan, Gregory R. Steinbrecher, Dirk Robert Englund
  • Patent number: 11783168
    Abstract: Disclosed are a network accuracy quantification method, system, and device, an electronic device and a readable medium, which are applicable to a many-core chip. The method includes: determining a reference accuracy according to a total core resource number of the many-core chip and the number of core resources required by each network to be quantified, where the number of core resources required by each network to be quantified is the number determined after that network is quantified; and determining a target accuracy corresponding to each network to be quantified according to the reference accuracy and the total core resource number of the many-core chip.
    Type: Grant
    Filed: June 9, 2021
    Date of Patent: October 10, 2023
    Assignee: LYNXI TECHNOLOGIES CO., LTD.
    Inventors: Fanhui Meng, Chuan Hu, Han Li, Xinyang Wu, Yaolong Zhu
  • Patent number: 11740898
    Abstract: The present disclosure provides a computation device. The computation device is configured to perform a machine learning computation, and includes a storage unit, a controller unit, an operation unit, and a conversion unit. The storage unit is configured to obtain input data and a computation instruction. The controller unit is configured to extract and parse the computation instruction from the storage unit to obtain one or more operation instructions, and to send the one or more operation instructions and the input data to the operation unit. The operation unit is configured to perform operations on the input data according to the one or more operation instructions to obtain a computation result of the computation instruction. In the examples of the present disclosure, the input data involved in machine learning computations is represented by fixed-point data, thereby improving the processing speed and efficiency of training operations.
    Type: Grant
    Filed: December 16, 2019
    Date of Patent: August 29, 2023
    Assignee: SHANGHAI CAMBRICON INFORMATION TECHNOLOGY CO., LTD
    Inventors: Yao Zhang, Bingrui Wang
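    The abstract notes that the input data used in machine learning computations is represented as fixed-point data. A minimal sketch of such a conversion, assuming symmetric 16-bit fixed point with 8 fractional bits (the actual format handled by the conversion unit is not specified in the abstract):

```python
import numpy as np

def to_fixed_point(x, frac_bits=8, total_bits=16):
    """Convert floating-point data to signed fixed-point integers with frac_bits fractional bits."""
    scale = 1 << frac_bits
    lo, hi = -(1 << (total_bits - 1)), (1 << (total_bits - 1)) - 1
    return np.clip(np.round(np.asarray(x) * scale), lo, hi).astype(np.int16)

def from_fixed_point(q, frac_bits=8):
    """Recover an approximate floating-point value from the fixed-point representation."""
    return q.astype(np.float64) / (1 << frac_bits)

x = np.array([0.125, -1.5, 3.14159])
q = to_fixed_point(x)
print(q, from_fixed_point(q))   # fixed-point codes and their approximate reconstruction
```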