Patents Examined by Vincent Gonzales
  • Patent number: 11315006
    Abstract: A method for inferring patterns in multi-dimensional image data comprises providing a recursive network of sub-networks with a parent feature node and at least two child feature nodes, wherein each sub-network is associated with a distinct subset of the space; configuring nodes of the sub-networks with a posterior distribution component; receiving image data feature input at the final child feature nodes; propagating node activation through the network layer hierarchy in a manner consistent with node connections of sub-networks of the network and the posterior prediction of child nodes; and outputting the parent feature node selection to an inferred output.
    Type: Grant
    Filed: March 17, 2017
    Date of Patent: April 26, 2022
    Assignee: Vicarious FPC, Inc.
    Inventors: Dileep George, Kenneth Kansky, D. Scott Phoenix, Bhaskara Marthi, Christopher Laan, Wolfgang Lehrach
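The abstract above describes hierarchical inference over parent and child feature nodes. Below is a minimal sketch, not the patented method: it assumes each sub-network holds known child-given-parent distributions and combines its children's posteriors into a parent posterior with a simple Bayesian message. The `SubNetwork` class, node counts, and priors are all invented for illustration.

```python
import numpy as np

class SubNetwork:
    """A parent feature node with two child feature nodes over a region of the image."""
    def __init__(self, n_features, rng):
        # P(child_feature | parent_feature) for each child, assumed known or learned.
        self.child_given_parent = [rng.dirichlet(np.ones(n_features), size=n_features)
                                   for _ in range(2)]
        self.prior = np.full(n_features, 1.0 / n_features)

    def infer_parent(self, child_posteriors):
        # Combine child evidence with the parent prior (naive-Bayes-style message).
        log_post = np.log(self.prior)
        for cpd, child_post in zip(self.child_given_parent, child_posteriors):
            # Expected likelihood of the child's posterior under each parent value.
            log_post += np.log(cpd @ child_post + 1e-12)
        post = np.exp(log_post - log_post.max())
        return post / post.sum()

rng = np.random.default_rng(0)
leaf_net = SubNetwork(n_features=4, rng=rng)
root_net = SubNetwork(n_features=4, rng=rng)

# Leaf-level evidence from image features (soft posteriors), assumed given.
left_evidence = np.array([0.7, 0.1, 0.1, 0.1])
right_evidence = np.array([0.6, 0.2, 0.1, 0.1])

mid = leaf_net.infer_parent([left_evidence, right_evidence])
top = root_net.infer_parent([mid, mid])   # both children share the same sub-posterior here
print("inferred output (parent feature selection):", int(np.argmax(top)))
```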
  • Patent number: 11315152
    Abstract: A method and system for product recommendation. The method includes: defining, by a computing device, a hierarchical Bayesian model having a latent factor; training, by the computing device, the hierarchical Bayesian model using a plurality of training events to obtain a trained hierarchical Bayesian model, each event comprising a feature of a product, a brand of the product, a feature of a user, and an action of the user upon the product; predicting, by the computing device, a possibility of a target user performing an action on a target product using the trained hierarchical Bayesian model; and providing a product recommendation to the target user based on the possibility.
    Type: Grant
    Filed: February 13, 2019
    Date of Patent: April 26, 2022
    Assignees: BEIJING JINGDONG SHANGKE INFORMATION TECHNOLOGY Co., Ltd., JD.COM AMERICAN TECHNOLOGIES CORPORATION
    Inventors: Zhexuan Xu, Yongjun Bao
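As a rough, hedged illustration of the setup above (events carrying a product feature, brand, user feature, and user action, with brand-level information pooled hierarchically), the toy below shrinks per-brand action rates toward a global rate and blends them with a crude feature affinity. The pooling strength, feature model, and `predict` function are assumptions, not the patented hierarchical Bayesian model.

```python
import numpy as np

rng = np.random.default_rng(1)
brands = ["A", "B", "C"]
# Simulated training events: (product_feature, brand, user_feature, acted_on_product)
events = [(rng.normal(), rng.choice(brands), rng.normal(), rng.integers(0, 2))
          for _ in range(300)]

# Empirical action rate per brand, shrunk toward the global rate (partial pooling).
global_rate = np.mean([e[3] for e in events])
strength = 10.0  # pseudo-count controlling how strongly brands are pooled (assumed)
brand_rate = {}
for b in brands:
    hits = [e[3] for e in events if e[1] == b]
    brand_rate[b] = (sum(hits) + strength * global_rate) / (len(hits) + strength)

def predict(product_feature, brand, user_feature):
    # Blend the pooled brand rate with a crude product/user feature affinity term.
    affinity = 1.0 / (1.0 + np.exp(-(product_feature * user_feature)))
    return 0.5 * brand_rate[brand] + 0.5 * affinity

print("P(action) for target user/product:", round(predict(0.3, "B", 1.2), 3))
```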
  • Patent number: 11295201
    Abstract: Embodiments of the invention relate to a time-division multiplexed neurosynaptic module with implicit memory addressing for implementing a neural network. One embodiment comprises maintaining neuron attributes for multiple neurons and maintaining incoming firing events for different time steps. For each time step, incoming firing events for said time step are integrated in a time-division multiplexing manner. Incoming firing events are integrated based on the neuron attributes maintained. For each time step, the neuron attributes maintained are updated in parallel based on the integrated incoming firing events for said time step.
    Type: Grant
    Filed: March 29, 2019
    Date of Patent: April 5, 2022
    Assignee: International Business Machines Corporation
    Inventors: John V. Arthur, Bernard V. Brezzo, Leland Chang, Daniel J. Friedman, Paul A. Merolla, Dharmendra S. Modha, Robert K. Montoye, Jae-sun Seo, Jose A. Tierno
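A minimal software sketch of the time-division multiplexing idea above: one shared update path visits each neuron in turn at every time step, integrates that step's incoming firing events, and then updates the neuron's state. The weights, threshold, and leak-free integrate-and-fire rule are assumptions for illustration.

```python
import numpy as np

N_NEURONS, N_STEPS = 8, 5
rng = np.random.default_rng(2)

weights = rng.integers(0, 3, size=(N_NEURONS, N_NEURONS))   # synaptic weights (assumed)
potential = np.zeros(N_NEURONS)                             # membrane potentials
threshold = 4.0
# Incoming firing events per time step: a boolean vector of presynaptic spikes.
incoming = rng.random((N_STEPS, N_NEURONS)) < 0.3

for t in range(N_STEPS):
    spikes = np.zeros(N_NEURONS, dtype=bool)
    for n in range(N_NEURONS):                  # one shared circuit serves every neuron
        potential[n] += weights[n] @ incoming[t]    # integrate this step's events
        if potential[n] >= threshold:               # fire and reset
            spikes[n] = True
            potential[n] = 0.0
    print(f"step {t}: fired neurons -> {np.flatnonzero(spikes).tolist()}")
```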
  • Patent number: 11295202
    Abstract: An apparatus comprises a mass storage unit and a plurality of circuit modules including a machine learning module, a programmable state machine module, and input/output interfaces. Switching circuitry is configured to selectively couple the circuit modules. Configuration circuitry is configured to access configuration data from the mass storage unit and to operate the switching circuitry to connect the circuit modules according to the configuration data.
    Type: Grant
    Filed: February 19, 2015
    Date of Patent: April 5, 2022
    Assignee: Seagate Technology LLC
    Inventors: Jon Trantham, Kevin Arthur Gomez, Frank Dropps, Antoine Khoueir, Scott Younger
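A hedged sketch of the configuration idea above: a configuration record (which in the apparatus would be read from the mass storage unit) tells a software stand-in for the switching circuitry which modules to connect and in what order. The module names and the `build_pipeline` helper are invented.

```python
class Module:
    def __init__(self, name): self.name = name
    def process(self, data): return f"{self.name}({data})"

MODULES = {name: Module(name) for name in ("io_in", "state_machine", "ml", "io_out")}

def build_pipeline(config):
    """Connect modules in the order listed by the configuration data."""
    chain = [MODULES[name] for name in config["connections"]]
    def run(data):
        for module in chain:
            data = module.process(data)
        return data
    return run

config = {"connections": ["io_in", "ml", "state_machine", "io_out"]}  # would come from storage
print(build_pipeline(config)("sensor_block"))
```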
  • Patent number: 11288595
    Abstract: The system presented here can create a new machine learning model by improving and combining existing machine learning models in a modular way. By combining existing machine learning models, the system can avoid the step of training a new machine learning model. Further, by combining existing machine learning models in a modular way, the system can selectively train only a module, i.e., a part, of the new machine learning model. Using the disclosed system, the expensive steps of gathering 8 TB of data and using the data to train the new machine learning model over 16,000 processors for three days can be entirely avoided, or can be reduced by a half, a third, etc., depending on the size of the module requiring training.
    Type: Grant
    Filed: February 13, 2018
    Date of Patent: March 29, 2022
    Assignee: Groq, Inc.
    Inventors: Jonathan Alexander Ross, Douglas Wightman
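A hedged sketch of the modular approach above, written with PyTorch as a generic stand-in: two pre-trained models are combined as frozen modules, and only a small new adapter module between them is trained, so most of the expensive training is avoided. The layer sizes and the adapter design are assumptions, not the patented system.

```python
import torch
import torch.nn as nn

pretrained_a = nn.Sequential(nn.Linear(16, 32), nn.ReLU())   # stands in for an existing model
pretrained_b = nn.Sequential(nn.Linear(32, 4))               # stands in for another existing model
for p in list(pretrained_a.parameters()) + list(pretrained_b.parameters()):
    p.requires_grad = False                                  # frozen: no retraining needed

adapter = nn.Linear(32, 32)                                  # the only module that gets trained
model = nn.Sequential(pretrained_a, adapter, pretrained_b)

optimizer = torch.optim.SGD(adapter.parameters(), lr=0.1)
x, y = torch.randn(64, 16), torch.randint(0, 4, (64,))
for _ in range(20):                                          # short training loop over the module only
    optimizer.zero_grad()
    loss = nn.functional.cross_entropy(model(x), y)
    loss.backward()
    optimizer.step()
print("final loss:", float(loss))
```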
  • Patent number: 11281966
    Abstract: A circuit for performing neural network computations for a neural network, the circuit comprising: a systolic array comprising a plurality of cells; a weight fetcher unit configured to, for each of the plurality of neural network layers, send, for the neural network layer, a plurality of weight inputs to cells along a first dimension of the systolic array; and a plurality of weight sequencer units, each weight sequencer unit coupled to a distinct cell along the first dimension of the systolic array, the plurality of weight sequencer units configured to, for each of the plurality of neural network layers, shift, for the neural network layer, the plurality of weight inputs to cells along a second dimension of the systolic array over a plurality of clock cycles, wherein each cell is configured to compute a product of an activation input and a respective weight input using multiplication circuitry.
    Type: Grant
    Filed: August 2, 2018
    Date of Patent: March 22, 2022
    Assignee: Google LLC
    Inventor: Jonathan Ross
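A highly simplified simulation of the systolic scheme above: weights are shifted into a grid of cells over several clock cycles along one dimension, then each cell multiplies an activation input by its resident weight and the products accumulate into column sums. The array dimensions and timing are illustrative only, not the claimed circuit.

```python
import numpy as np

W = np.arange(1, 10).reshape(3, 3)        # weight matrix to load (rows x columns of cells)
A = np.array([1, 2, 3])                   # one activation vector streamed along the rows

# Phase 1: the weight fetcher injects rows at the first dimension; each cycle the
# resident weights shift one row further into the array.
cells = np.zeros_like(W)
for cycle in range(W.shape[0]):
    cells = np.vstack([W[W.shape[0] - 1 - cycle], cells[:-1]])
# After loading, cells[i, j] == W[i, j].

# Phase 2: each cell computes activation * weight; partial sums accumulate down columns.
partial = np.zeros(W.shape[1])
for i in range(W.shape[0]):
    partial += A[i] * cells[i]            # the row's products join the running column sums
print("systolic result:", partial, "| reference:", A @ W)
```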
  • Patent number: 11270230
    Abstract: Provided are a system and methodology for iteratively transforming data between multiple data sets. Normalizing the data in this way enables uniform interpretation and presentation of the data regardless of which machine learning model produced it.
    Type: Grant
    Filed: April 12, 2021
    Date of Patent: March 8, 2022
    Assignee: Socure, Inc.
    Inventors: Pablo Y. Abreu, Omar Gutierrez, Ali Haddad, Stanislav Palatnik
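A hedged sketch of the normalization idea above: raw scores from different machine learning models live on different scales, so each set is mapped onto a common 0-to-1 scale before being interpreted or presented together. The rank-based mapping used here is an assumption, not the patented transform.

```python
import numpy as np

def normalize(scores):
    """Map raw model scores to uniform [0, 1] values by empirical rank."""
    order = np.argsort(np.argsort(scores))
    return order / (len(scores) - 1)

model_a_scores = np.array([0.2, 5.0, 3.1, 9.9])      # e.g., log-odds-style outputs
model_b_scores = np.array([410, 300, 777, 640])      # e.g., credit-score-style outputs
print("model A normalized:", normalize(model_a_scores))
print("model B normalized:", normalize(model_b_scores))
```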
  • Patent number: 11269974
    Abstract: Embodiments of the present invention provide a divide-and-conquer algorithm which divides expanded data into a cluster of machines. Each portion of data is used to train logistic classification models in parallel, and then combined at the end of the training phase to create a single ordinal model. The training scheme removes the need for synchronization between the parallel learning algorithms during the training period, making training on large datasets technically feasible without the use of supercomputers or computers with specific processing capabilities. Embodiments of the present invention also provide improved estimation and prediction performance of the model learned compared to the existing techniques for training models with large datasets.
    Type: Grant
    Filed: October 31, 2017
    Date of Patent: March 8, 2022
    Assignee: Amazon Technologies, Inc.
    Inventors: Sougata Chaudhuri, Lu Tang, Abraham Hossain Bagherjeiran
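A hedged illustration of the divide-and-conquer scheme above: the data is partitioned across workers, a logistic classification model is trained on each partition with no synchronization, and the per-partition coefficients are combined (here simply averaged) into a single model at the end of training. The ordinal-model specifics of the patent are omitted.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(3)
X = rng.normal(size=(3000, 5))
y = (X @ np.array([1.0, -2.0, 0.5, 0.0, 1.5]) + rng.normal(size=3000) > 0).astype(int)

partitions = np.array_split(np.arange(len(X)), 4)          # one chunk per machine
coefs, intercepts = [], []
for idx in partitions:                                     # would run in parallel in practice
    clf = LogisticRegression(max_iter=1000).fit(X[idx], y[idx])
    coefs.append(clf.coef_[0]); intercepts.append(clf.intercept_[0])

w = np.mean(coefs, axis=0); b = np.mean(intercepts)        # combine into a single model
probs = 1.0 / (1.0 + np.exp(-(X @ w + b)))
print("combined-model accuracy:", np.mean((probs > 0.5) == y))
```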
  • Patent number: 11263539
    Abstract: A method and system for distributed machine learning and model training are disclosed. In particular, a finite asynchronous parallel training scheme is described. Finite asynchronous parallel training combines the benefits of both asynchronous parallel training and synchronous parallel training. The computation delays in the various distributed computation nodes are further considered when training parameters are updated during each round of iterative training. The disclosed method and system facilitate increased model training speed and efficiency.
    Type: Grant
    Filed: February 4, 2019
    Date of Patent: March 1, 2022
    Assignee: Tencent Technology (Shenzhen) Company Limited
    Inventors: Jiawei Jiang, Bin Cui, Ming Huang, Pin Xiao, Benlong Hu, Lele Yu
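A hedged simulation of the finite (bounded-staleness) asynchrony described above: workers compute gradients against possibly stale parameters, and the server only applies updates whose staleness falls within a fixed bound, damping each update according to how stale it is. The staleness bound, the damping rule, and the toy objective are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(4)
true_w, w, version = 3.0, 0.0, 0
STALENESS_BOUND, LR = 3, 0.1

pending = []   # (version_seen_by_worker, gradient, arrival_step) tuples, arriving late
for step in range(200):
    # A worker reads the current parameters; its gradient arrives a few steps later.
    delay = rng.integers(0, 5)
    grad = 2 * (w - true_w) + rng.normal(scale=0.1)    # gradient of (w - true_w)^2 + noise
    pending.append((version, grad, step + delay))

    for seen, g, arrive in [p for p in pending if p[2] <= step]:
        pending.remove((seen, g, arrive))
        staleness = version - seen
        if staleness <= STALENESS_BOUND:               # drop updates that are too stale
            w -= LR * g / (1 + staleness)              # damp stale gradients
            version += 1
print("converged parameter ~", round(w, 2), "(target 3.0)")
```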
  • Patent number: 11263512
    Abstract: A novel and useful neural network (NN) processing core adapted to implement artificial neural networks (ANNs) and incorporating strictly separate control and data planes. The NN processor is constructed from self-contained computational units organized in a hierarchical architecture. The homogeneity enables simpler management and control of similar computational units, aggregated in multiple levels of hierarchy. Computational units are designed with as little overhead as possible, with additional features and capabilities aggregated at higher levels in the hierarchy. On-chip memory provides storage for content inherently required for basic operation at a particular hierarchy and is coupled with the computational resources in an optimal ratio. Lean control provides just enough signaling to manage only the operations required at a particular hierarchical level. Dynamic resource assignment agility is provided and can be adjusted as required depending on resource availability and the capacity of the device.
    Type: Grant
    Filed: April 3, 2018
    Date of Patent: March 1, 2022
    Inventors: Avi Baum, Or Danon, Hadar Zeitlin, Daniel Ciubotariu, Rami Feig
  • Patent number: 11257005
    Abstract: A training method and a training system for a machine learning system are provided.
    Type: Grant
    Filed: August 31, 2018
    Date of Patent: February 22, 2022
    Inventor: Jun Zhou
  • Patent number: 11257003
    Abstract: The schematic flow chart diagrams included herein are generally set forth as logical flow chart diagrams. As such, the depicted order and labeled steps are indicative of one embodiment of the presented method. Other steps and methods may be conceived that are equivalent in function, logic, or effect to one or more steps, or portions thereof, of the illustrated method. Additionally, the format and symbols employed are provided to explain the logical steps of the method and are understood not to limit the scope of the method. Although various arrow types and line types may be employed in the flow chart diagrams, they are understood not to limit the scope of the corresponding method.
    Type: Grant
    Filed: May 9, 2018
    Date of Patent: February 22, 2022
    Inventors: Kalpit Jain, Neil Reddy Chintala, Zhi Feng Huang, Kaushal Mehta, Alok Nandan
  • Patent number: 11243994
    Abstract: By formulizing a specific company's internal knowledge and terminology, the ontology programming accounts for linguistic meaning to surface relevant and important content for analysis. The ontology is built on the premise that meaningful terms are detected in the corpus and then classified according to specific semantic concepts, or entities. Once the main terms are defined, direct relations or linkages can be formed between these terms and their associated entities. Then, the relations are grouped into themes, which are groups or abstracts that contain synonymous relations. The disclosed ontology programming adapts to the language used in a specific domain, including linguistic patterns and properties, such as word order, relationships between terms, and syntactical variations. The ontology programming automatically trains itself to understand the domain or environment of the communication data by processing and analyzing a defined corpus of communication data.
    Type: Grant
    Filed: January 9, 2019
    Date of Patent: February 8, 2022
    Assignee: Verint Systems Ltd
    Inventor: Roni Romano
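A toy, hedged sketch of the ontology pipeline described above: detect meaningful terms in a corpus, classify them against domain entities, link co-occurring terms into relations, and group relations that share an entity pair into themes. The corpus, entity map, and grouping rule are stand-ins; a real system would use far richer linguistic analysis.

```python
from collections import Counter
from itertools import combinations

corpus = [
    "customer cancelled the premium subscription",
    "customer upgraded the premium subscription",
    "agent refunded the cancelled order",
]
entity_of = {"customer": "PERSON", "agent": "PERSON",
             "subscription": "PRODUCT", "order": "PRODUCT"}   # assumed domain entities

terms = [t for doc in corpus for t in doc.split() if t in entity_of]
frequent = {t for t, c in Counter(terms).items() if c >= 1}    # "meaningful" terms

relations = []                                   # direct links between terms within a document
for doc in corpus:
    present = [t for t in doc.split() if t in frequent]
    relations.extend(combinations(sorted(set(present)), 2))

themes = {}                                      # group relations by their entity pair
for a, b in relations:
    themes.setdefault((entity_of[a], entity_of[b]), set()).add((a, b))
for pair, rels in themes.items():
    print(pair, "->", sorted(rels))
```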
  • Patent number: 11245665
    Abstract: Methods are taught for creating training data for a learning algorithm, training the learning algorithm with the training data and using the trained learning algorithm to suggest domain names to users. A domain name registrar may store activities of a user on a registrar website. Preferably, domain name searches, selected suggested domain names and domain names registered to the user are stored as the training data in a training database. The training data may be stored so that earlier activities act as inputs to the learning algorithm while later activities are the expected outputs of the learning algorithm. Once trained, the learning algorithm may receive activities of other users and suggest domain names to the other users based on their activities.
    Type: Grant
    Filed: January 28, 2019
    Date of Patent: February 8, 2022
    Assignee: Go Daddy Operating Company, LLC
    Inventors: Wei-Cheng Lai, Yu Tian, Wenbo Wang, Chungwei Yen
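A hedged sketch of the training-data construction described above: a user's earlier registrar-site activities (searches and selected suggestions) become the input of a training example, and their later registrations become the expected output. The field names and record layout are invented for illustration.

```python
from datetime import datetime

activities = [  # what a registrar website might log for one user
    {"when": datetime(2021, 3, 1), "kind": "search",   "value": "best coffee beans"},
    {"when": datetime(2021, 3, 1), "kind": "selected", "value": "coffeebeanhub.com"},
    {"when": datetime(2021, 3, 5), "kind": "register", "value": "mycoffeebeans.com"},
]

def to_training_example(user_activities):
    """Earlier activities become the model input; later registrations are the target."""
    ordered = sorted(user_activities, key=lambda a: a["when"])
    inputs  = [a["value"] for a in ordered if a["kind"] in ("search", "selected")]
    targets = [a["value"] for a in ordered if a["kind"] == "register"]
    return {"inputs": inputs, "targets": targets}

print(to_training_example(activities))
```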
  • Patent number: 11244242
    Abstract: Systems, apparatuses, methods, and computer-readable media are provided for distributed machine learning (ML) training using heterogeneous compute nodes in a heterogeneous computing environment, where the heterogeneous compute nodes are connected to a master node via respective wireless links. ML computations are performed by individual heterogeneous compute nodes on respective training datasets, and the master node combines the outputs of the ML computations obtained from the individual heterogeneous compute nodes. The ML computations are balanced across the heterogeneous compute nodes based on knowledge of network conditions and operational constraints experienced by the heterogeneous compute nodes. Other embodiments may be described and/or claimed.
    Type: Grant
    Filed: December 28, 2018
    Date of Patent: February 8, 2022
    Assignee: Intel Corporation
    Inventors: Saurav Prakash, Sagar Dhakal, Yair Yona, Nageen Himayat, Shilpa Talwar
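A hedged sketch of the balancing idea above: each heterogeneous compute node receives a share of the training data proportional to its effective throughput, estimated from its compute speed and its wireless link quality. The throughput model and all numbers below are assumptions.

```python
nodes = {                      # samples/sec of compute, MB/s of uplink (illustrative numbers)
    "phone":   {"compute": 200,  "link": 2},
    "laptop":  {"compute": 1500, "link": 20},
    "edge_gw": {"compute": 800,  "link": 50},
}
TOTAL_SAMPLES = 10_000

def effective_rate(node):
    # A node is only as fast as the slower of computing a batch and shipping its result;
    # assume 100 samples fit in 1 MB.
    return min(node["compute"], node["link"] * 100)

total = sum(effective_rate(n) for n in nodes.values())
shares = {name: round(TOTAL_SAMPLES * effective_rate(n) / total) for name, n in nodes.items()}
print("per-node training shares:", shares)
# The master node would then combine the per-node ML outputs, weighted by these shares.
```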
  • Patent number: 11227216
    Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for generating a respective neural network output for each of a plurality of inputs, the method comprising, for each of the neural network layers: receiving a plurality of inputs to be processed at the neural network layer; forming one or more batches of inputs from the plurality of inputs, each batch having a number of inputs up to the respective batch size for the neural network layer; selecting a number of the one or more batches of inputs to process, where a count of the inputs in the number of the one or more batches is greater than or equal to the respective associated batch size of a subsequent layer in the sequence; and processing the number of the one or more batches of inputs to generate the respective neural network layer output.
    Type: Grant
    Filed: April 9, 2021
    Date of Patent: January 18, 2022
    Assignee: Google LLC
    Inventor: Reginald Clifford Young
  • Patent number: 11216726
    Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for generating a respective neural network output for each of a plurality of inputs, the method comprising, for each of the neural network layers: receiving a plurality of inputs to be processed at the neural network layer; forming one or more batches of inputs from the plurality of inputs, each batch having a number of inputs up to the respective batch size for the neural network layer; selecting a number of the one or more batches of inputs to process, where a count of the inputs in the number of the one or more batches is greater than or equal to the respective associated batch size of a subsequent layer in the sequence; and processing the number of the one or more batches of inputs to generate the respective neural network layer output.
    Type: Grant
    Filed: September 24, 2018
    Date of Patent: January 4, 2022
    Assignee: Google LLC
    Inventor: Reginald Clifford Young
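The two preceding entries share essentially the same abstract. As a hedged sketch of the batching idea: each neural network layer has its own batch size, inputs queued at a layer are grouped into batches, and enough batches are selected so that the count of inputs handed onward meets the subsequent layer's batch size. The sizes and the doubling stand-in for "processing" are invented.

```python
def form_batches(inputs, batch_size):
    """Group inputs into batches of up to `batch_size` items."""
    return [inputs[i:i + batch_size] for i in range(0, len(inputs), batch_size)]

layer_batch_sizes = [4, 8]        # this layer prefers 4; the subsequent layer prefers 8
queued = list(range(10))          # ten inputs waiting to be processed at this layer

batches = form_batches(queued, layer_batch_sizes[0])
# Select enough batches that their total input count >= the subsequent layer's batch size.
selected, count = [], 0
for batch in batches:
    selected.append(batch)
    count += len(batch)
    if count >= layer_batch_sizes[1]:
        break
layer_outputs = [x * 2 for batch in selected for x in batch]   # stand-in "processing"
print("processed", count, "inputs in", len(selected), "batches ->", layer_outputs)
```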
  • Patent number: 11210587
    Abstract: In a network discovery and management system, a machine learning (ML) DLAD processor trains, validates, updates, and stores machine learning models. An ML training data preparation program performs operations to process and format input data to generate ML training data that can be used to train ML models. The ML training program uses the ML training data to train ML models, thereby generating trained ML models. The ML training program can re-train or update the training of ML models as the system collects additional data and produces additional estimates, predictions, and forecasts. The ML model validation program performs validation testing on trained ML models to generate one or more metrics that can indicate the accuracy of predictions generated by the trained models. The resulting ML model(s) can be used to manage the network, including but not limited to retrieving, instantiating, and executing dynamic applications based on the models' predictions.
    Type: Grant
    Filed: April 23, 2020
    Date of Patent: December 28, 2021
    Assignee: ScienceLogic, Inc.
    Inventors: Shankar Ananthanarayanan, Nicole Eickhoff, Tim Herrmann, Matthew Luebke, Mathew Maloney
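A hedged sketch of the train / validate / re-train loop described above, using generic scikit-learn pieces rather than the system's actual components. The regression model, the synthetic metric data, and the re-training step are assumptions.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(5)

def collect(n):
    # Stand-in for collected network metrics (ML training data preparation).
    X = rng.normal(size=(n, 3))
    y = X @ np.array([2.0, -1.0, 0.5]) + rng.normal(scale=0.2, size=n)
    return X, y

X, y = collect(200)
model = LinearRegression().fit(X[:150], y[:150])               # ML training program
mae = mean_absolute_error(y[150:], model.predict(X[150:]))     # ML model validation program
print("validation MAE:", round(mae, 3))

# As the system collects additional data, the model is re-trained/updated.
X2, y2 = collect(200)
model = LinearRegression().fit(np.vstack([X, X2]), np.concatenate([y, y2]))
forecast = model.predict(rng.normal(size=(1, 3)))              # prediction used to manage the network
print("example forecast:", round(float(forecast[0]), 3))
```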
  • Patent number: 11205516
    Abstract: Systems and methods are disclosed for determining the appropriateness of medical interventions. In one embodiment, a machine learning system for determining the appropriateness of a selected medical intervention includes health-related data sources, the health-related data sources providing at least one data file of a first type, and a second data file of a second type. A machine learning module is configured to receive the first and second data files, perform a normalization procedure on at least one of the first and second data files, and apply at least one previously trained machine learning model to the normalized data files to produce a prediction output. The prediction output may include a confidence level associated with an appropriateness of the selected medical intervention.
    Type: Grant
    Filed: September 5, 2018
    Date of Patent: December 21, 2021
    Inventor: Daniel M. Lieberman
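A hedged sketch of the flow in the abstract above: two differently typed data files are normalized into one feature record, a previously trained model scores the selected intervention, and the score is reported as a confidence level. Every field name and the scoring rule are illustrative stand-ins, not clinical logic.

```python
def normalize(record_type_1, record_type_2):
    # e.g., imaging-derived metrics and history/claims data mapped to one feature dict
    return {"stenosis_pct": record_type_1["stenosis"] * 100,
            "prior_therapy_months": record_type_2["conservative_care_days"] / 30}

def trained_model(features):
    # Stands in for a previously trained classifier producing a 0-1 confidence.
    score = 0.01 * features["stenosis_pct"] + 0.05 * features["prior_therapy_months"]
    return min(max(score, 0.0), 1.0)

features = normalize({"stenosis": 0.62}, {"conservative_care_days": 180})
confidence = trained_model(features)
print(f"appropriateness confidence for selected intervention: {confidence:.2f}")
```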
  • Patent number: 11200514
    Abstract: Unclassified observations are classified. Similarity values are computed for each unclassified observation and for each target variable value. A confidence value is computed for each unclassified observation using the similarity values. A high-confidence threshold value and a low-confidence threshold value are computed from the confidence values. For each observation, when the confidence value is greater than the high-confidence threshold value, the observation is added to a training dataset and, when the confidence value is greater than the low-confidence threshold value and less than the high-confidence threshold value, the observation is added to the training dataset based on a comparison between a random value drawn from a uniform distribution and an inclusion percentage value. A classification model is trained with the training dataset and classified observations. The trained classification model is executed with the unclassified observations to determine a label assignment.
    Type: Grant
    Filed: June 9, 2021
    Date of Patent: December 14, 2021
    Assignee: SAS Institute Inc.
    Inventors: Xu Chen, Xinmin Wu
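A hedged sketch of the self-training selection described above: each unclassified observation gets a confidence value, high-confidence observations always join the training dataset, and observations between the two thresholds join only when a uniform random draw falls below an inclusion percentage. The confidence values, thresholds, and percentage here are assumptions.

```python
import numpy as np

rng = np.random.default_rng(6)
confidence = rng.random(1000)                      # confidence per unclassified observation
high_thr, low_thr = np.quantile(confidence, [0.9, 0.5])
inclusion_pct = 0.3

selected = []
for i, c in enumerate(confidence):
    if c > high_thr:
        selected.append(i)                         # always added to the training dataset
    elif c > low_thr and rng.uniform() < inclusion_pct:
        selected.append(i)                         # added based on a random draw
print(f"{len(selected)} of {len(confidence)} unclassified observations added to training data")
# A classification model would then be trained on these plus the labeled observations,
# and executed on the remaining unclassified observations to assign labels.
```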