Patents Examined by David R. Vincent
  • Patent number: 11966841
    Abstract: An apparatus for artificial intelligence acceleration is provided. The apparatus includes a storage and compute system having a distributed, redundant key value store for metadata. The storage and compute system has distributed compute resources configurable to access, through a plurality of authorities, data in solid-state memory, run inference with a deep learning model, generate vectors for the data, and store the vectors in the key value store.
    Type: Grant
    Filed: January 27, 2021
    Date of Patent: April 23, 2024
    Assignee: PURE STORAGE, INC.
    Inventors: Fabio Margaglia, Emily Potyraj, Hari Kannan, Cary A. Sandvig
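    Illustrative sketch: a minimal Python rendering of the flow this abstract describes, in which distributed compute resources run inference over stored data blocks, produce vectors, and write them into a redundant key value store. The RedundantKeyValueStore class, the embed stand-in for the deep learning model, the replica count, and the content-hash keys are illustrative assumptions, not the patented implementation.
      import hashlib
      import numpy as np

      class RedundantKeyValueStore:
          """Toy key value store that writes each entry to several replicas (assumption)."""
          def __init__(self, num_replicas=3):
              self.replicas = [dict() for _ in range(num_replicas)]

          def put(self, key, value):
              for replica in self.replicas:        # redundant writes
                  replica[key] = value

          def get(self, key):
              for replica in self.replicas:        # fall back across replicas
                  if key in replica:
                      return replica[key]
              raise KeyError(key)

      def embed(data_block, rng):
          """Stand-in for running inference with a deep learning model (assumption)."""
          return rng.standard_normal(16)           # 16-dimensional vector for the block

      rng = np.random.default_rng(0)
      store = RedundantKeyValueStore()
      for block in [b"block-0", b"block-1", b"block-2"]:
          key = hashlib.sha256(block).hexdigest()  # content-derived key (assumption)
          store.put(key, embed(block, rng))
      print(len(store.get(hashlib.sha256(b"block-1").hexdigest())))  # -> 16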
  • Patent number: 11960843
    Abstract: Techniques and systems are provided for training a machine learning model using different datasets to perform one or more tasks. The machine learning model can include a first sub-module configured to perform a first task and a second sub-module configured to perform a second task. The first sub-module can be selected for training using a first training dataset based on a format of the first training dataset. The first sub-module can then be trained using the first training dataset to perform the first task. The second sub-module can be selected for training using a second training dataset based on a format of the second training dataset. The second sub-module can then be trained using the second training dataset to perform the second task.
    Type: Grant
    Filed: May 2, 2019
    Date of Patent: April 16, 2024
    Assignee: Adobe Inc.
    Inventors: Zhe Lin, Trung Huu Bui, Scott Cohen, Mingyang Ling, Chenyun Wu
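    Illustrative sketch: a minimal Python sketch of the routing idea in this abstract, where a sub-module is selected for training based on the format of the incoming dataset and only that sub-module is updated. The format tags, the SubModule class, and the least-squares training step are illustrative assumptions, not Adobe's implementation.
      import numpy as np

      class SubModule:
          """Toy linear sub-module trained for a single task (assumption)."""
          def __init__(self, in_dim, out_dim, rng):
              self.w = rng.standard_normal((in_dim, out_dim)) * 0.01

          def train_step(self, x, y, lr=0.1):
              pred = x @ self.w
              grad = x.T @ (pred - y) / len(x)     # least-squares gradient
              self.w -= lr * grad

      rng = np.random.default_rng(0)
      model = {
          "image_caption_pairs": SubModule(8, 4, rng),   # first task, assumed format tag
          "image_region_labels": SubModule(8, 2, rng),   # second task, assumed format tag
      }

      def train_on(dataset_format, x, y):
          """Select the sub-module by dataset format, then train only that sub-module."""
          model[dataset_format].train_step(x, y)

      train_on("image_caption_pairs", rng.standard_normal((32, 8)), rng.standard_normal((32, 4)))
      train_on("image_region_labels", rng.standard_normal((32, 8)), rng.standard_normal((32, 2)))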
  • Patent number: 11954592
    Abstract: The disclosure provides a collaborative deep learning method and a collaborative deep learning apparatus. The method includes: sending an instruction for downloading a global model to a plurality of user terminals; receiving a set of changes from each user terminal; storing the set of changes; recording a hash value of the set of changes into a blockchain; obtaining a storage transaction number from the blockchain for the hash value of the set of changes; sending the set of changes and the storage transaction number to the plurality of user terminals; receiving the set of target user terminals from the blockchain; updating the current parameters of the global model based on sets of changes corresponding to the set of target user terminals; and returning to the sending of the instruction, to update the global model until the global model meets a preset condition.
    Type: Grant
    Filed: September 4, 2020
    Date of Patent: April 9, 2024
    Assignee: TSINGHUA UNIVERSITY
    Inventors: Ke Xu, Zhichao Zhang, Bo Wu, Qi Li, Songsong Xu
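    Illustrative sketch: a minimal Python sketch of one aggregation round as this abstract describes it, hashing each user terminal's set of changes, recording the hash in a ledger that returns a storage transaction number, and updating the global model from the target terminals' changes. The ToyLedger class, the JSON hashing, and the averaging rule are illustrative assumptions; a real deployment would use an actual blockchain client.
      import hashlib
      import json

      class ToyLedger:
          """Stand-in for the blockchain: recording a hash returns a transaction number."""
          def __init__(self):
              self.entries = []

          def record(self, digest):
              self.entries.append(digest)
              return len(self.entries) - 1       # storage transaction number

      def hash_of_changes(changes):
          """Deterministic hash of a user terminal's set of parameter changes."""
          payload = json.dumps(changes, sort_keys=True).encode()
          return hashlib.sha256(payload).hexdigest()

      ledger = ToyLedger()
      global_model = {"w": 0.0}

      # One round: user terminals upload their sets of changes.
      uploads = [{"w": +0.10}, {"w": -0.04}, {"w": +0.02}]
      tx_numbers = [ledger.record(hash_of_changes(c)) for c in uploads]

      # Update the global model from the (here: all) target user terminals' changes.
      global_model["w"] += sum(c["w"] for c in uploads) / len(uploads)
      print(tx_numbers, global_model)            # transaction numbers and updated model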
  • Patent number: 11941500
    Abstract: Disclosed is a system and a method for engagement of human agents for decision-making in a dynamically changing environment. An information request related to a problem requiring a decision is received. Further, problem data comprising metadata associated to the problem, and decision-making data is received. Then, an information type is determined for the information request. Subsequently, a set of human agents from a list of one or more human agents is determined using an engagement model. Further, a request elicitation type is determined for the set of human agents using an elicitation model. Further, an input is received from the set of human agents. Further, the input is used to retrain the engagement model and the elicitation model. Finally, the decision-making data is continuously enhanced based on the input received, the request elicitation type, and the information type.
    Type: Grant
    Filed: December 20, 2022
    Date of Patent: March 26, 2024
    Assignee: AGILE SYSTEMS, LLC
    Inventors: Satyendra Pal Rana, Ekrem Alper Murat, Ratna Babu Chinnam
  • Patent number: 11928577
    Abstract: A parallel convolutional neural network is provided. The CNN is implemented by a plurality of convolutional neural networks each on a respective processing node. Each CNN has a plurality of layers. A subset of the layers are interconnected between processing nodes such that activations are fed forward across nodes. The remaining subset is not so interconnected.
    Type: Grant
    Filed: April 27, 2020
    Date of Patent: March 12, 2024
    Assignee: Google LLC
    Inventors: Alexander Krizhevsky, Ilya Sutskever, Geoffrey E. Hinton
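    Illustrative sketch: a minimal Python sketch of the parallel structure this abstract describes, with one network per processing node and only a subset of layers feeding activations across nodes. Dense layers stand in for convolutional layers to keep the sketch short, and the layer sizes and interconnection pattern are illustrative assumptions.
      import numpy as np

      rng = np.random.default_rng(0)

      def layer(in_dim, out_dim):
          return rng.standard_normal((in_dim, out_dim)) * 0.1

      # Two "processing nodes", each holding its own stack of layers.
      node_a = [layer(8, 8), layer(16, 8), layer(8, 8)]
      node_b = [layer(8, 8), layer(16, 8), layer(8, 8)]
      interconnected = {1}   # only layer 1 feeds activations across nodes

      def forward(x_a, x_b):
          for i, (wa, wb) in enumerate(zip(node_a, node_b)):
              if i in interconnected:
                  # Interconnected layer: each node sees both nodes' activations.
                  in_a = in_b = np.concatenate([x_a, x_b], axis=-1)
              else:
                  # Remaining layers: each node sees only its own activations.
                  in_a, in_b = x_a, x_b
              x_a = np.tanh(in_a @ wa)
              x_b = np.tanh(in_b @ wb)
          return x_a, x_b

      out_a, out_b = forward(rng.standard_normal(8), rng.standard_normal(8))
      print(out_a.shape, out_b.shape)   # -> (8,) (8,)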
  • Patent number: 11929170
    Abstract: A system for selecting an ameliorative output using artificial intelligence includes at least a server configured to receive at least a prognostic output. At least a server is configured to generate a plurality of ameliorative outputs as a function of at least a prognostic output wherein the plurality of ameliorative outputs include at least a short-term indicator and at least a long-term indicator. At least a server is configured to receive at least a user life element datum wherein the at least a user life element datum includes at least a user life quality response. At least a server is configured to generate a loss function of the plurality of short-term indicators and the plurality of long-term indicators using at least a user life element datum. At least a server is configured to select at least an ameliorative output from a plurality of ameliorative outputs to minimize the loss function.
    Type: Grant
    Filed: August 22, 2019
    Date of Patent: March 12, 2024
    Assignee: KPN Innovations, LLC
    Inventor: Kenneth Neumann
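    Illustrative sketch: a minimal Python sketch of selecting an ameliorative output by minimizing a loss over short-term and long-term indicators weighted by a user life quality response. The candidate outputs, the preference weighting, and the loss form are illustrative assumptions, not the patented loss function.
      # Candidate ameliorative outputs with assumed short-term and long-term indicators.
      candidates = {
          "plan_a": {"short_term": 0.7, "long_term": 0.2},
          "plan_b": {"short_term": 0.4, "long_term": 0.8},
          "plan_c": {"short_term": 0.6, "long_term": 0.5},
      }

      # User life element datum: a life quality response used to weight the indicators
      # (the weighting scheme here is an illustrative assumption).
      user_life_quality = 0.75   # 0 = prioritize short term, 1 = prioritize long term

      def loss(indicators, preference):
          """Lower is better: penalize weak indicators according to the user's preference."""
          return ((1 - preference) * (1 - indicators["short_term"])
                  + preference * (1 - indicators["long_term"]))

      best = min(candidates, key=lambda name: loss(candidates[name], user_life_quality))
      print(best)   # ameliorative output minimizing the loss -> "plan_b"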
  • Patent number: 11929179
    Abstract: Apparatuses, systems, methods, and computer program products are disclosed for multi-modal machine learning medical assessment. A source module is configured to receive multiple types of data for a user. A machine learning module is configured to analyze the multiple types of data using machine learning to determine multiple predictions of likelihoods of the user getting a neurological disease. A multi-modal result module is configured to determine a single result indicating a likelihood of the user getting the neurological disease based on the multiple predictions.
    Type: Grant
    Filed: March 17, 2023
    Date of Patent: March 12, 2024
    Inventor: Danika Gupta
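    Illustrative sketch: a minimal Python sketch of the multi-modal fusion step this abstract describes, combining per-modality likelihood predictions into a single result. The modality names, example values, and weighted-average fusion rule are illustrative assumptions.
      # Per-modality predictions of the likelihood of a neurological disease.
      predictions = {"speech": 0.62, "gait_sensor": 0.48, "mri": 0.71}
      weights     = {"speech": 0.3,  "gait_sensor": 0.2,  "mri": 0.5}

      def fuse(preds, wts):
          """Weighted-average fusion of per-modality likelihoods into a single result."""
          total = sum(wts.values())
          return sum(preds[m] * wts[m] for m in preds) / total

      print(round(fuse(predictions, weights), 3))   # single fused likelihood -> 0.637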
  • Patent number: 11915132
    Abstract: Artificial neural networks (ANNs) are a distributed computing model in which computation is accomplished with many simple processing units, called neurons, with data embodied by the connections between neurons, called synapses, and by the strength of these connections, the synaptic weights. An attractive implementation of ANNs uses the conductance of non-volatile memory (NVM) elements to record the synaptic weight, with the important multiply-accumulate step performed in place, at the data. In this application, the non-idealities in the response of the NVM such as nonlinearity, saturation, stochasticity and asymmetry in response to programming pulses lead to reduced network performance compared to an ideal network implementation.
    Type: Grant
    Filed: May 25, 2021
    Date of Patent: February 27, 2024
    Assignee: International Business Machines Corporation
    Inventor: Geoffrey W. Burr
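    Illustrative sketch: a minimal Python model of the NVM non-idealities named in this abstract, applying programming pulses whose conductance steps are nonlinear, saturating, asymmetric between potentiation and depression, and stochastic. The step-size formula and constants are illustrative assumptions, not measured device behavior.
      import numpy as np

      rng = np.random.default_rng(0)

      def program_pulse(g, up, g_max=1.0, nonlinearity=3.0, noise=0.02, asym=0.5):
          """Apply one non-ideal programming pulse to a conductance value g.

          The step shrinks as g approaches saturation (nonlinearity + saturation),
          depression is weaker than potentiation (asymmetry), and every pulse
          carries random variation (stochasticity)."""
          if up:
              step = np.exp(-nonlinearity * g / g_max) * 0.05
          else:
              step = -asym * np.exp(-nonlinearity * (1 - g / g_max)) * 0.05
          g = g + step + rng.normal(0.0, noise)
          return float(np.clip(g, 0.0, g_max))   # conductance stays in [0, g_max]

      g = 0.2
      for _ in range(10):
          g = program_pulse(g, up=True)          # ten potentiation pulses
      print(round(g, 3))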
  • Patent number: 11915135
    Abstract: The disclosure discloses a graph optimization method and apparatus for neural network computation. The graph optimization method includes the following steps: S1: converting a computation graph; S2: allocating a register; S3: defining a route selector for a redefined variable; S4: solving the route selector for the redefined variable; S5: defining a criterion of inserting the route selector for the redefined variable into a node; S6: analyzing a dominating edge set of the node for the redefined variable; S7: inserting the route selector for the redefined variable; and S8: renaming the redefined variable. The disclosure solves the problem of selecting the correct definition of a redefined variable when a node containing the redefined variable in a compile-time computation graph flows through multiple computation paths, reduces the memory cost, and promotes the practical application of deep neural network models.
    Type: Grant
    Filed: September 21, 2022
    Date of Patent: February 27, 2024
    Assignee: ZHEJIANG LAB
    Inventors: Hongsheng Wang, Guang Chen
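    Illustrative sketch: the route-selector placement this abstract describes parallels classic SSA phi-node placement, so the Python sketch below computes dominators and dominance frontiers for a tiny computation graph and inserts a route selector for a redefined variable at the frontier of its defining nodes. The graph, node names, and variable are illustrative assumptions, not the disclosed apparatus.
      # Toy computation graph: node -> list of successor nodes (assumption).
      succ = {"entry": ["a", "b"], "a": ["join"], "b": ["join"], "join": []}
      pred = {n: [] for n in succ}
      for n, ss in succ.items():
          for s in ss:
              pred[s].append(n)
      nodes = list(succ)

      # Naive dominator computation: dom[n] is the set of nodes dominating n.
      dom = {n: set(nodes) for n in nodes}
      dom["entry"] = {"entry"}
      changed = True
      while changed:
          changed = False
          for n in nodes:
              if n == "entry":
                  continue
              new = {n} | set.intersection(*(dom[p] for p in pred[n]))
              if new != dom[n]:
                  dom[n], changed = new, True

      def idom(n):
          """Immediate dominator: the strict dominator dominated by all other strict dominators."""
          strict = dom[n] - {n}
          return next(d for d in strict if all(d in dom[o] or d == o for o in strict))

      # Dominance frontier of each node.
      df = {n: set() for n in nodes}
      for n in nodes:
          if len(pred[n]) >= 2:
              for p in pred[n]:
                  runner = p
                  while runner != idom(n):
                      df[runner].add(n)
                      runner = idom(runner)

      # The redefined variable is (re)defined in nodes "a" and "b"; a route selector
      # is inserted at the dominance frontier of those definitions.
      defs = {"a", "b"}
      selectors = set().union(*(df[d] for d in defs))
      print(selectors)   # -> {'join'}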
  • Patent number: 11900234
    Abstract: Apparatuses and methods of manufacturing same, systems, and methods for generating a convolutional neural network (CNN) are described. In one aspect, a minimal CNN having, e.g., three or more layers is trained. Cascade training may be performed on the trained CNN to insert one or more intermediate layers until a training error is less than a threshold. When cascade training is complete, cascade network trimming of the CNN output from the cascade training may be performed to improve computational efficiency. To further reduce network parameters, convolutional filters may be replaced with dilated convolutional filters with the same receptive field, followed by additional training/fine-tuning.
    Type: Grant
    Filed: August 31, 2020
    Date of Patent: February 13, 2024
    Inventors: Haoyu Ren, Mostafa El-Khamy, Jungwon Lee
  • Patent number: 11893483
    Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for generating an output sequence from an input sequence. In one aspect, one of the systems includes an encoder neural network configured to receive the input sequence and generate encoded representations of the network inputs, the encoder neural network comprising a sequence of one or more encoder subnetworks, each encoder subnetwork configured to receive a respective encoder subnetwork input for each of the input positions and to generate a respective subnetwork output for each of the input positions, and each encoder subnetwork comprising: an encoder self-attention sub-layer that is configured to receive the subnetwork input for each of the input positions and, for each particular input position in the input order: apply an attention mechanism over the encoder subnetwork inputs using one or more queries derived from the encoder subnetwork input at the particular input position.
    Type: Grant
    Filed: August 7, 2020
    Date of Patent: February 6, 2024
    Assignee: Google LLC
    Inventors: Noam M. Shazeer, Aidan Nicholas Gomez, Lukasz Mieczyslaw Kaiser, Jakob D. Uszkoreit, Llion Owen Jones, Niki J. Parmar, Illia Polosukhin, Ashish Teku Vaswani
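    Illustrative sketch: a minimal NumPy implementation of the encoder self-attention step this abstract describes, where queries derived from each input position attend over all input positions. This is single-head attention with random projection matrices; the dimensions are illustrative assumptions, and the surrounding encoder subnetworks are omitted.
      import numpy as np

      def self_attention(x, wq, wk, wv):
          """Single-head scaled dot-product self-attention over one input sequence.

          Queries derived from each input position attend over the keys of all
          positions, and the values are mixed by the resulting softmax weights."""
          q, k, v = x @ wq, x @ wk, x @ wv
          scores = q @ k.T / np.sqrt(k.shape[-1])          # (positions, positions)
          weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
          weights /= weights.sum(axis=-1, keepdims=True)   # softmax over key positions
          return weights @ v                               # (positions, d_v)

      rng = np.random.default_rng(0)
      x = rng.standard_normal((5, 8))                      # 5 input positions, model dim 8
      wq, wk, wv = (rng.standard_normal((8, 8)) * 0.1 for _ in range(3))
      print(self_attention(x, wq, wk, wv).shape)           # -> (5, 8)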
  • Patent number: 11887126
    Abstract: A machine learning-based method for accelerating a generation of automated fraud or abuse detection workflows in a digital threat mitigation platform includes identifying a plurality of distinct digital event features indicative of digital fraud; automatically deriving a plurality of distinct digital event decisioning criteria based on the plurality of distinct digital event features and a digital event data corpus associated with a target subscriber; automatically constructing a probationary automated fraud or abuse detection workflow based on the plurality of distinct digital event decisioning criteria; and deploying the probationary automated fraud or abuse detection workflow to a target digital fraud prevention environment associated with the target subscriber.
    Type: Grant
    Filed: March 17, 2023
    Date of Patent: January 30, 2024
    Assignee: Sift Science, Inc.
    Inventors: Pradhan Umesh, Natasha Sehgal, Chang Liu
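    Illustrative sketch: a minimal Python sketch of the workflow-construction idea in this abstract, deriving a decisioning threshold per fraud-indicative event feature from a subscriber's event corpus and assembling a probationary workflow that flags events exceeding any threshold. The feature names, the mean-plus-two-standard-deviations rule, and the flagging logic are illustrative assumptions, not Sift Science's method.
      import statistics

      # Subscriber's digital event data corpus and fraud-indicative features (assumptions).
      corpus = [
          {"login_attempts": 2,  "payment_velocity": 1.0},
          {"login_attempts": 3,  "payment_velocity": 1.4},
          {"login_attempts": 25, "payment_velocity": 9.8},
      ]
      fraud_features = ["login_attempts", "payment_velocity"]

      def derive_criteria(corpus, features, k=2.0):
          """Derive a decisioning threshold per feature: mean plus k standard deviations."""
          criteria = {}
          for f in features:
              values = [e[f] for e in corpus]
              criteria[f] = statistics.mean(values) + k * statistics.pstdev(values)
          return criteria

      def build_workflow(criteria):
          """Probationary workflow: flag an event if any feature exceeds its criterion."""
          def workflow(event):
              return any(event[f] > threshold for f, threshold in criteria.items())
          return workflow

      workflow = build_workflow(derive_criteria(corpus, fraud_features))
      print(workflow({"login_attempts": 40, "payment_velocity": 2.0}))   # -> True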
  • Patent number: 11887734
    Abstract: Provided herein are methods and systems for making patient-specific therapy recommendations of a lipid-lowering therapy for patients with known or suspected cardiovascular disease, such as atherosclerosis.
    Type: Grant
    Filed: June 10, 2022
    Date of Patent: January 30, 2024
    Assignee: Elucid Bioimaging Inc.
    Inventors: Andrew J. Buckler, Ulf Hedin, Ljubica Matic, Matthew Phillips
  • Patent number: 11868864
    Abstract: Methods, systems, and computer storage media for implementing neural networks in fixed point arithmetic computing systems. In one aspect, a method includes the actions of receiving a request to process a neural network using a processing system that performs neural network computations using fixed point arithmetic; for each node of each layer of the neural network, determining a respective scaling value for the node from the respective set of floating point weight values for the node; and converting each floating point weight value of the node into a corresponding fixed point weight value using the respective scaling value for the node to generate a set of fixed point weight values for the node; and providing the sets of fixed point weight values for the nodes to the processing system for use in processing inputs using the neural network.
    Type: Grant
    Filed: March 26, 2020
    Date of Patent: January 9, 2024
    Assignee: Google LLC
    Inventor: William John Gulland
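    Illustrative sketch: a minimal NumPy version of the per-node conversion this abstract describes, deriving a scaling value from a node's floating point weights and converting them to fixed point integers. Mapping the largest-magnitude weight to the largest representable signed integer is an illustrative choice of scaling, not necessarily the patented rule.
      import numpy as np

      def node_to_fixed_point(float_weights, bits=8):
          """Convert one node's floating point weights to fixed point integers."""
          max_int = 2 ** (bits - 1) - 1                      # e.g. 127 for 8 bits
          scale = max_int / np.max(np.abs(float_weights))    # respective scaling value
          fixed = np.round(float_weights * scale).astype(np.int32)
          return fixed, scale

      weights = np.array([0.75, -0.20, 0.05, -1.10])         # one node's trained weights
      fixed, scale = node_to_fixed_point(weights)
      print(fixed)                        # fixed point weights -> [  87  -23    6 -127]
      print(np.round(fixed / scale, 3))   # dequantized check against the originals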
  • Patent number: 11847567
    Abstract: Some embodiments provide a method that receives a network with trained floating-point weight values. The network includes layers of nodes, each of which computes an output value based on input values and trained weight values. To replace a first layer of the trained network in a modified network with quantized weight values, the method defines multiple replica layers. Each replica layer includes nodes that correspond to nodes of the first layer, has a different set of allowed quantized weight values, and receives the same input values from a previous layer of the modified network such that groups of corresponding nodes from the replica layers operate correspondingly to the first layer. The method trains the quantized weight values of the modified network using a loss function with terms that account for effect on the loss function due to the quantization and for interactions between corresponding weight values of the replica layers.
    Type: Grant
    Filed: July 7, 2020
    Date of Patent: December 19, 2023
    Assignee: PERCEIVE CORPORATION
    Inventors: Eric A. Sather, Steven L. Teig, Alexandru F. Drimbarean
  • Patent number: 11842273
    Abstract: To perform neural network processing to modify an input data array to generate a corresponding output data array using a filter comprising an array of weight data, at least one of the input data array and the filter is subdivided into a plurality of portions, a plurality of neural network processing passes using the portions are performed, and the output generated by each processing pass is combined to provide the output data array.
    Type: Grant
    Filed: September 23, 2020
    Date of Patent: December 12, 2023
    Assignee: Arm Limited
    Inventors: John Wakefield Brothers, III, Rune Holm, Elliott Maurice Simon Rosemarine
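    Illustrative sketch: a minimal NumPy demonstration of the subdivision this abstract describes, splitting the input data array and the filter's weight data along the channel axis, running one processing pass per portion, and combining the partial outputs into the output data array. The 1D convolution and the two-way channel split are illustrative assumptions; the patent targets hardware neural network processing passes.
      import numpy as np

      rng = np.random.default_rng(0)

      def conv1d(x, w):
          """Valid 1D convolution summing over input channels: x is (C, L), w is (C, K)."""
          c, length = x.shape
          k = w.shape[1]
          return np.array([np.sum(x[:, i:i + k] * w) for i in range(length - k + 1)])

      x = rng.standard_normal((4, 10))   # input data array: 4 channels, length 10
      w = rng.standard_normal((4, 3))    # filter (array of weight data): 4 channels, kernel 3

      full = conv1d(x, w)                # single-pass reference result

      # Subdivide input and filter along the channel axis, run one pass per portion,
      # then combine (sum) the partial outputs.
      portions = [(x[:2], w[:2]), (x[2:], w[2:])]
      combined = sum(conv1d(xp, wp) for xp, wp in portions)

      print(np.allclose(full, combined))   # -> True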
  • Patent number: 11816553
    Abstract: Application of the output from a recurrent artificial neural network to a variety of different applications. A method can include identifying topological patterns of activity in a recurrent artificial neural network, outputting a collection of digits, and inputting a first digit of the collection to a first application that is designed to fulfil a first purpose and to a second application that is designed to fulfil a second purpose, wherein the first purpose differs from the second purpose. The topological patterns are responsive to an input of data into the recurrent artificial neural network and each topological pattern abstracts a characteristic of the input data. Each digit represents whether one of the topological patterns of activity has been identified in the artificial neural network.
    Type: Grant
    Filed: December 11, 2019
    Date of Patent: November 14, 2023
    Assignee: INAIT SA
    Inventors: Henry Markram, Felix Schürmann, Fabien Jonathan Delalondre, Daniel Milan Lütgehetmann, John Rahmon, Constantin Cosmin Atanasoaei, Michele De Gruttola
  • Patent number: 11790238
    Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for using multi-task neural networks. One of the methods includes receiving a first network input and data identifying a first machine learning task to be performed on the first network input; selecting a path through the plurality of layers in a super neural network that is specific to the first machine learning task, the path specifying, for each of the layers, a proper subset of the modular neural networks in the layer that are designated as active when performing the first machine learning task; and causing the super neural network to process the first network input using (i) for each layer, the modular neural networks in the layer that are designated as active by the selected path and (ii) the set of one or more output layers corresponding to the identified first machine learning task.
    Type: Grant
    Filed: August 17, 2020
    Date of Patent: October 17, 2023
    Assignee: DeepMind Technologies Limited
    Inventors: Daniel Pieter Wierstra, Chrisantha Thomas Fernando, Alexander Pritzel, Dylan Sunil Banarse, Charles Blundell, Andrei-Alexandru Rusu, Yori Zwols, David Ha
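    Illustrative sketch: a minimal Python sketch of path-based routing through a super neural network as this abstract describes, where every layer holds several modular sub-networks and a task-specific path designates the proper subset of modules that are active. The tiny linear modules, layer count, and path definitions are illustrative assumptions, not DeepMind's implementation.
      import numpy as np

      rng = np.random.default_rng(0)

      # Super neural network: every layer holds several modular sub-networks.
      num_layers, modules_per_layer, dim = 3, 4, 8
      super_net = [[rng.standard_normal((dim, dim)) * 0.1 for _ in range(modules_per_layer)]
                   for _ in range(num_layers)]

      # A path designates, per layer, the proper subset of modules active for a task.
      paths = {
          "task_translate": [[0, 2], [1], [0, 3]],
          "task_summarize": [[1], [2, 3], [0]],
      }

      def forward(task, x):
          """Run the input through only the modules the task's path marks as active."""
          for layer, active in zip(super_net, paths[task]):
              x = np.tanh(sum(x @ layer[m] for m in active))   # combine active modules
          return x

      x = rng.standard_normal(dim)
      print(forward("task_translate", x).shape, forward("task_summarize", x).shape)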
  • Patent number: 11775806
    Abstract: Disclosed is a method of compressing a neural network model that is performed by a computing device. The method includes receiving a trained model and compression method instructions for compressing the trained model, identifying a compressible block and a non-compressible block among a plurality of blocks included in the trained model based on the compression method instructions, transmitting a command to a user device that causes the user device to: display a structure of the trained model representing a connection relationship between the plurality of blocks on a first screen such that the compressible block and the non-compressible block are visually distinguished, and display, on a second screen, an input field operable to receive a parameter value entered by a user for compression of the compressible block, and compressing the trained model based on the parameter value entered by the user in the input field.
    Type: Grant
    Filed: February 2, 2023
    Date of Patent: October 3, 2023
    Assignee: NOTA, INC.
    Inventors: Yoo Chan Kim, Jong Won Baek, Geun Jae Lee
  • Patent number: 11755949
    Abstract: Aspects of the disclosure relate to systems, methods, and computing devices for managing the processing and execution of machine learning classifiers across a variety of platforms. Machine classifiers can be developed to process a variety of input datasets. In several embodiments, a variety of transformations can be performed on raw data to generate the input datasets. The raw data can be obtained from a disparate set of data sources each having its own data format. The generated input datasets can be formatted using a common data format and/or a data format specific for a particular machine learning classifier. A sequence of machine learning classifiers to be executed can be determined and the machine learning classifiers can be executed on one or more computing devices to process the input datasets. The execution of the machine learning classifiers can be monitored and notifications can be transmitted to various computing devices.
    Type: Grant
    Filed: May 19, 2020
    Date of Patent: September 12, 2023
    Assignee: Allstate Insurance Company
    Inventors: Patrick O'Reilly, Nilesh Malpekar, Robert Andrew Nendorf, Bich-Thuy Le, Younuskhan Mohamed Iynoolkhan