Patents Examined by Sidney Vincent Bostwick
  • Patent number: 11971779
    Abstract: Computing technology for managing support requests is provided. The technology includes a processor-executable application programming interface (API) that receives a support case indicating a problem associated with a device. The API utilizes a training model to predict a problem category for the support case. The training model predicts the problem category based on a feature extracted from information included in the support case. The training model further identifies a plurality of proximate support cases based on a distance between the support case and the proximate support cases within a virtual space assigned to the predicted problem category; determines the relevance of each proximate support case to the support case; and outputs a resolution code for the support case based on the determined relevance of each proximate support case.
    Type: Grant
    Filed: February 20, 2020
    Date of Patent: April 30, 2024
    Assignee: NETAPP, INC.
    Inventors: Vedavyas Bhamidipati, Prajwal V
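    The abstract above describes a classify-then-retrieve flow. The sketch below is a minimal software analogue, assuming vector features and Euclidean distance within the predicted category's virtual space, with a majority vote standing in for the relevance-weighted resolution step; the function names, feature dimensions, and resolution codes are all hypothetical.
    ```python
    import numpy as np

    def resolve_case(case_vec, category_cases, k=3):
        """Rank historical cases in the predicted category by distance and
        return the resolution code shared by the most relevant neighbors.
        `category_cases` is a list of (feature_vector, resolution_code)."""
        dists = [(np.linalg.norm(case_vec - vec), code)
                 for vec, code in category_cases]
        dists.sort(key=lambda t: t[0])               # nearer = more relevant
        nearest = [code for _, code in dists[:k]]
        return max(set(nearest), key=nearest.count)  # majority resolution code

    # Hypothetical usage: 4-dim feature vectors within one problem category.
    cases = [(np.array([0.9, 0.1, 0.0, 0.2]), "RST-01"),
             (np.array([0.8, 0.2, 0.1, 0.1]), "RST-01"),
             (np.array([0.1, 0.9, 0.7, 0.0]), "FW-UPD")]
    print(resolve_case(np.array([0.85, 0.15, 0.05, 0.15]), cases))
    ```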
  • Patent number: 11966835
    Abstract: A sparse convolutional neural network accelerator system that dynamically and efficiently identifies fine-grained parallelism in sparse convolution operations. The system determines matching pairs of non-zero input activations and weights from the compacted input activation and weight arrays utilizing a scalable, dynamic parallelism discovery unit (PDU) that performs a parallel search on the input activation array and the weight array to identify reducible input activation and weight pairs.
    Type: Grant
    Filed: January 23, 2019
    Date of Patent: April 23, 2024
    Assignee: NVIDIA CORP.
    Inventors: Ching-En Lee, Yakun Shao, Angshuman Parashar, Joel Emer, Stephen W. Keckler
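    A software stand-in for the pair-matching step the PDU performs in hardware: given compacted (value, coordinate) arrays with the zeros removed, find activation/weight pairs that share a coordinate, since only those products contribute to the output. The dictionary lookup here replaces the patent's parallel search and is purely illustrative.
    ```python
    # Compacted sparse arrays: (value, coordinate) pairs, zeros removed.
    activations = [(3.0, 0), (1.5, 2), (2.0, 5)]
    weights     = [(0.5, 2), (4.0, 3), (1.0, 5)]

    def discover_pairs(acts, wts):
        """Software stand-in for the PDU: match non-zero activations with
        non-zero weights that share a coordinate, so only 'reducible'
        products are ever computed."""
        wt_by_coord = {c: v for v, c in wts}
        return [(a, wt_by_coord[c]) for a, c in acts if c in wt_by_coord]

    pairs = discover_pairs(activations, weights)
    partial_sum = sum(a * w for a, w in pairs)   # 1.5*0.5 + 2.0*1.0
    print(pairs, partial_sum)
    ```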
  • Patent number: 11934935
    Abstract: A feedforward generative neural network that generates an output example that includes multiple output samples of a particular type in a single neural network inference. Optionally, the generation may be conditioned on a context input. For example, the feedforward generative neural network may generate a speech waveform that is a verbalization of an input text segment conditioned on linguistic features of the text segment.
    Type: Grant
    Filed: May 21, 2018
    Date of Patent: March 19, 2024
    Assignee: DeepMind Technologies Limited
    Inventors: Aaron Gerard Antonius van den Oord, Karen Simonyan, Oriol Vinyals
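    The key property in the abstract is that every output sample is produced in a single forward pass, optionally conditioned on a context input. The sketch below illustrates only that dataflow, with random weights standing in for a trained generator; the layer sizes and tanh nonlinearities are arbitrary choices, not the patented architecture.
    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    T, ctx_dim, hid = 16000, 8, 32    # samples per utterance, sizes (illustrative)

    # Hypothetical conditioning: per-timestep linguistic features.
    context = rng.normal(size=(T, ctx_dim))
    noise = rng.normal(size=(T, 1))   # latent input, one draw per output sample

    # Random weights stand in for a trained feedforward generator.
    W1 = rng.normal(size=(ctx_dim + 1, hid)) * 0.1
    W2 = rng.normal(size=(hid, 1)) * 0.1

    # One inference emits every waveform sample at once, with no
    # sample-by-sample autoregression.
    h = np.tanh(np.concatenate([noise, context], axis=1) @ W1)
    waveform = np.tanh(h @ W2).squeeze()
    print(waveform.shape)             # (16000,)
    ```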
  • Patent number: 11907826
    Abstract: An electronic apparatus for performing machine learning, a method of machine learning, and a non-transitory computer-readable recording medium are provided. The electronic apparatus includes an operation module configured to include a plurality of processing elements arranged in a predetermined pattern, sharing data between adjacent processing elements to perform an operation; and a processor configured to control the operation module to perform a convolution operation by applying a filter to input data. The processor controls the operation module to perform the convolution operation by inputting each of a plurality of elements constituting a two-dimensional filter to the plurality of processing elements in a predetermined order and sequentially applying the plurality of elements to the input data.
    Type: Grant
    Filed: March 23, 2018
    Date of Patent: February 20, 2024
    Inventors: Kyoung-Hoon Kim, Young-hwan Park, Ki-seok Kwon, Suk-jin Kim, Chae-seok Im, Han-su Cho, Sang-bok Han, Seung-won Lee, Kang-jin Yoon
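    The convolution scheme described, feeding filter elements one at a time in a predetermined order and accumulating their contributions across the input, can be mimicked in software as a shift-and-accumulate loop. A minimal sketch (single channel, stride 1, no padding; dimensions are illustrative):
    ```python
    import numpy as np

    def conv2d_elementwise(x, f):
        """Convolution computed the way the abstract describes: each filter
        element is applied to the (shifted) input in turn and the partial
        products accumulated, rather than applying the whole window at once."""
        H, W = x.shape
        kh, kw = f.shape
        out = np.zeros((H - kh + 1, W - kw + 1))
        for i in range(kh):                   # predetermined element order
            for j in range(kw):
                out += f[i, j] * x[i:i + out.shape[0], j:j + out.shape[1]]
        return out

    x = np.arange(16, dtype=float).reshape(4, 4)
    f = np.array([[1.0, 0.0], [0.0, -1.0]])
    print(conv2d_elementwise(x, f))
    ```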
  • Patent number: 11875252
    Abstract: Some embodiments are directed to a neural network training device for training a neural network in which at least one of the layers is a projection layer. The projection layer projects its layer input vector (x) to a layer output vector (y) whose elements sum to a summing parameter (k).
    Type: Grant
    Filed: May 17, 2019
    Date of Patent: January 16, 2024
    Inventors: Brandon David Amos, Vladlen Koltun, Jeremy Zieg Kolter, Frank RĂ¼diger Schmidt
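    One common way to realize a projection layer whose output sums to a parameter k is a shifted sigmoid whose scalar offset is found by bisection; the sketch below takes that approach as an assumption, since the abstract does not specify the projection algorithm.
    ```python
    import numpy as np

    def project_sum_k(x, k, iters=50):
        """Project logits x to y in (0,1)^n with sum(y) == k, via a sigmoid
        shifted by a scalar nu found by bisection (a sketch of one way to
        realize such a sum-constrained projection, not the patented
        algorithm verbatim)."""
        lo, hi = -20.0, 20.0
        for _ in range(iters):
            nu = (lo + hi) / 2.0
            s = 1.0 / (1.0 + np.exp(-(x + nu)))
            lo, hi = (nu, hi) if s.sum() < k else (lo, nu)
        return 1.0 / (1.0 + np.exp(-(x + nu)))

    y = project_sum_k(np.array([2.0, 0.5, -1.0, 0.1]), k=2)
    print(y, y.sum())   # components in (0,1), summing to ~2
    ```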
  • Patent number: 11861500
    Abstract: A meta-learning system includes an inner function computation module, adapted to compute output data from applied input data according to an inner model function that depends on model parameters; an error computation module, adapted to compute errors indicating mismatches between the computed output data and target values; and a state update module, adapted to update the model parameters of the inner model function according to a state that is itself updated from the module's current state in response to an error received from the error computation module. The state update module is trained to adjust the model parameters of the inner model function such that subsequent training of the inner model function with training data is improved.
    Type: Grant
    Filed: December 19, 2018
    Date of Patent: January 2, 2024
    Assignee: SIEMENS HEALTHCARE GMBH
    Inventor: Martin Kraus
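    A toy rendition of the module structure in the abstract: an error computation module produces gradients for an inner model, and a stateful update module maps (current state, error) to new parameters. Here the update rule is fixed momentum whose coefficients a meta-learner would fit; the patented system learns the update rule itself, so treat this purely as a shape-of-the-interface sketch.
    ```python
    import numpy as np

    class StateUpdateModule:
        """Stand-in for the learned update rule: carries an internal state
        per parameter and maps (state, error gradient) -> updated
        parameters. The momentum coefficients here are hand-picked."""
        def __init__(self, n, lr=0.1, m=0.9):
            self.state = np.zeros(n)
            self.lr, self.m = lr, m
        def step(self, params, grad):
            self.state = self.m * self.state + grad   # update internal state
            return params - self.lr * self.state      # adjusted model params

    # Inner model: fit w to minimize (w*x - t)^2 on one data point.
    w, x, t = np.array([0.0]), 2.0, 4.0
    opt = StateUpdateModule(1)
    for _ in range(100):
        grad = 2 * (w * x - t) * x    # error computation module
        w = opt.step(w, grad)
    print(w)                          # approaches 2.0
    ```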
  • Patent number: 11847553
    Abstract: Neural network processing hardware using parallel computational architectures with reconfigurable core-level and vector-level parallelism is provided. In various embodiments, a neural network model memory is adapted to store a neural network model comprising a plurality of layers. Each layer has at least one dimension and comprises a plurality of synaptic weights. A plurality of neural cores is provided. Each neural core includes a computation unit and an activation memory. The computation unit is adapted to apply a plurality of synaptic weights to a plurality of input activations to produce a plurality of output activations. The computation unit has a plurality of vector units. The activation memory is adapted to store the input activations and the output activations. The system is adapted to partition the plurality of cores into a plurality of partitions based on dimensions of the layer and the vector units.
    Type: Grant
    Filed: June 14, 2018
    Date of Patent: December 19, 2023
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Andrew S. Cassidy, Myron D. Flickner, Pallab Datta, Hartmut Penner, Rathinakumar Appuswamy, Jun Sawada, John V. Arthur, Dharmendra S. Modha, Steven K. Esser, Brian Taba, Jennifer Klamo
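    The partitioning decision, how many cores to devote to a layer's spatial extent and how many vector-unit passes its features require, might be sketched as below. The heuristic and every parameter name are invented for illustration; the patent's actual partitioning scheme is not reproduced here.
    ```python
    def partition_cores(n_cores, layer_h, layer_w, n_features, vec_width):
        """Illustrative heuristic: tile the layer's spatial extent across
        cores (core-level parallelism) and group output features into
        chunks sized to each core's vector units (vector-level
        parallelism)."""
        spatial = layer_h * layer_w
        cores_spatial = min(n_cores, spatial)
        feature_chunks = -(-n_features // vec_width)      # ceil division
        return {"cores_per_layer": cores_spatial,
                "positions_per_core": -(-spatial // cores_spatial),
                "vector_passes_per_core": feature_chunks}

    print(partition_cores(n_cores=16, layer_h=32, layer_w=32,
                          n_features=128, vec_width=32))
    ```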
  • Patent number: 11823024
    Abstract: The present disclosure is directed to new, more efficient neural network architectures. As one example, in some implementations, the neural network architectures of the present disclosure can include a linear bottleneck layer positioned structurally prior to and/or after one or more convolutional layers, such as, for example, one or more depthwise separable convolutional layers. As another example, in some implementations, the neural network architectures of the present disclosure can include one or more inverted residual blocks where the input and output of the inverted residual block are thin bottleneck layers, while an intermediate layer is an expanded representation. For example, the expanded representation can include one or more convolutional layers, such as, for example, one or more depthwise separable convolutional layers. A residual shortcut connection can exist between the thin bottleneck layers that play the role of the input and output of the inverted residual block.
    Type: Grant
    Filed: July 22, 2021
    Date of Patent: November 21, 2023
    Assignee: GOOGLE LLC
    Inventors: Andrew Gerald Howard, Mark Sandler, Liang-Chieh Chen, Andrey Zhmoginov, Menglong Zhu
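    The inverted residual block described (thin input, expanded intermediate representation, linear bottleneck output, shortcut between the thin ends) can be written out directly. A minimal NumPy sketch with an expansion factor of 4 and a 3x3 depthwise convolution; all sizes are illustrative:
    ```python
    import numpy as np

    def inverted_residual(x, W_exp, W_dw, W_proj):
        """Inverted residual block: thin input -> expanded representation
        (1x1 conv + ReLU6) -> 3x3 depthwise conv (stride 1, 'same'
        padding) -> linear 1x1 projection back to a thin bottleneck, plus
        a residual shortcut between the two thin ends. x is (H, W, C)."""
        relu6 = lambda v: np.clip(v, 0, 6)
        h = relu6(x @ W_exp)                      # expand: C -> C*t
        pad = np.pad(h, ((1, 1), (1, 1), (0, 0)))
        dw = np.zeros_like(h)
        for i in range(3):                        # depthwise 3x3
            for j in range(3):
                dw += pad[i:i + h.shape[0], j:j + h.shape[1]] * W_dw[i, j]
        y = relu6(dw) @ W_proj                    # linear bottleneck (no act.)
        return x + y                              # residual shortcut

    rng = np.random.default_rng(0)
    C, t = 8, 4
    x = rng.normal(size=(14, 14, C))
    out = inverted_residual(x, rng.normal(size=(C, C * t)) * 0.1,
                            rng.normal(size=(3, 3, C * t)) * 0.1,
                            rng.normal(size=(C * t, C)) * 0.1)
    print(out.shape)   # (14, 14, 8)
    ```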
  • Patent number: 11816555
    Abstract: Systems, computer program products, and computer-implemented methods for determining relationships between one or more outputs of a first model and one or more inputs of a second model that collectively represent a real-world system, and chaining the models together. For example, the system described herein may determine how to chain a plurality of models by training an artificial intelligence system using the nodes of the models such that the trained artificial intelligence system predicts related output and input node connections. The system may then link related nodes to chain the models together. The systems, computer program products, and computer-implemented methods may thus, according to various embodiments, enable a plurality of discrete models to be optimally chained.
    Type: Grant
    Filed: February 9, 2021
    Date of Patent: November 14, 2023
    Assignee: Palantir Technologies Inc.
    Inventors: Jesse Rickard, Andrew Floren, Timothy Slatcher, David Skiff, Thomas McArdle, David Fowler, Aravind Baratha Raj
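    A heavily simplified sketch of the linking step: score candidate output-to-input node pairs and connect those above a threshold. The hand-made node embeddings and cosine similarity stand in for the trained artificial intelligence system the abstract describes; all node names are hypothetical.
    ```python
    import numpy as np

    def chain_models(out_nodes, in_nodes, threshold=0.8):
        """Connect model-A output nodes to model-B input nodes whose
        embedding similarity clears a threshold, chaining the models."""
        links = []
        for a, va in out_nodes.items():
            for b, vb in in_nodes.items():
                sim = va @ vb / (np.linalg.norm(va) * np.linalg.norm(vb))
                if sim >= threshold:
                    links.append((a, b, round(float(sim), 3)))
        return links

    out_nodes = {"pump_flow_rate": np.array([0.9, 0.1, 0.4])}
    in_nodes  = {"intake_flow":    np.array([0.8, 0.2, 0.5]),
                 "ambient_temp":   np.array([0.1, 0.9, 0.0])}
    print(chain_models(out_nodes, in_nodes))
    ```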
  • Patent number: 11734545
    Abstract: The present disclosure is directed to new, more efficient neural network architectures. As one example, in some implementations, the neural network architectures of the present disclosure can include a linear bottleneck layer positioned structurally prior to and/or after one or more convolutional layers, such as, for example, one or more depthwise separable convolutional layers. As another example, in some implementations, the neural network architectures of the present disclosure can include one or more inverted residual blocks where the input and output of the inverted residual block are thin bottleneck layers, while an intermediate layer is an expanded representation. For example, the expanded representation can include one or more convolutional layers, such as, for example, one or more depthwise separable convolutional layers. A residual shortcut connection can exist between the thin bottleneck layers that play the role of the input and output of the inverted residual block.
    Type: Grant
    Filed: February 17, 2018
    Date of Patent: August 22, 2023
    Assignee: GOOGLE LLC
    Inventors: Andrew Gerald Howard, Mark Sandler, Liang-Chieh Chen, Andrey Zhmoginov, Menglong Zhu
  • Patent number: 11733970
    Abstract: An artificial intelligence system includes a neural network layer including an arithmetic operation circuit that performs an arithmetic operation of a sigmoid function. The arithmetic operation circuit includes a first circuit configured to perform an exponent arithmetic operation using Napier's constant e as the base and to output a first calculation result when the exponent in the operation is a negative number, wherein the absolute value of the exponent is used in the operation; and a second circuit configured to subtract the first calculation result obtained by the first circuit from 1 and output the difference.
    Type: Grant
    Filed: March 3, 2020
    Date of Patent: August 22, 2023
    Assignees: Kabushiki Kaisha Toshiba, Toshiba Electronic Devices & Storage Corporation
    Inventor: Masanori Nishizawa
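    A software analogue of the two-circuit dataflow, shown as a piecewise-exponential sigmoid approximation: the exponent unit only ever evaluates e to a negative power via the exponent's absolute value, and on one branch the result is subtracted from 1. The 0.5 scaling that makes the branches meet at x = 0 is our assumption, not taken from the patent.
    ```python
    import math

    def sigmoid_approx(x):
        """Piecewise-exponential sigmoid approximation in the spirit of
        the abstract: evaluate e**(-|x|) (first circuit), then for x >= 0
        subtract the scaled result from 1 (second circuit)."""
        e = math.exp(-abs(x))     # first circuit: base-e exponentiation
        return 1.0 - 0.5 * e if x >= 0 else 0.5 * e

    for x in (-2.0, 0.0, 2.0):
        exact = 1.0 / (1.0 + math.exp(-x))
        print(f"x={x:+.1f}  approx={sigmoid_approx(x):.4f}  exact={exact:.4f}")
    ```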
  • Patent number: 11698529
    Abstract: Disclosed herein is a method for using a neural network across multiple devices. The method can include receiving, by a first device configured with a first one or more layers of a neural network, input data for processing via the neural network implemented across the first device and a second device. The method can include outputting, by the first one or more layers of the neural network implemented on the first device, a data set that is reduced in size relative to the input data while identifying one or more features of the input data for processing by a second one or more layers of the neural network. The method can include communicating, by the first device, the data set to the second device for processing via the second one or more layers of the neural network implemented on the second device.
    Type: Grant
    Filed: July 9, 2019
    Date of Patent: July 11, 2023
    Assignee: Meta Platforms Technologies, LLC
    Inventors: Liangzhen Lai, Pierce I-Jen Chuang, Vikas Chandra, Ganesh Venkatesh
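    A minimal sketch of the split described: the first device runs the head of the network and ships a feature set much smaller than the raw input to the second device, which finishes the inference. The dense layers, sizes, and head/tail split point are all assumptions for illustration.
    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Trained weights would be fixed at deployment; random here for shape only.
    W_head = rng.normal(size=(1024, 64)) * 0.05   # device 1: first layers
    W_tail = rng.normal(size=(64, 10)) * 0.05     # device 2: remaining layers

    def device1(x):
        """First one or more layers: produce a feature set much smaller
        than the raw input; this is what actually crosses the wire."""
        return np.maximum(x @ W_head, 0.0)

    def device2(features):
        """Second device finishes the network on the received features."""
        return features @ W_tail

    x = rng.normal(size=(1, 1024))                # raw input on device 1
    payload = device1(x)                          # 64 floats instead of 1024
    print(payload.shape, device2(payload).shape)  # (1, 64) (1, 10)
    ```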
  • Patent number: 11675693
    Abstract: A novel and useful neural network (NN) processing core incorporating inter-device connectivity and adapted to implement artificial neural networks (ANNs). A chip-to-chip interface spreads a given ANN model across multiple devices in a seamless manner. The NN processor is constructed from self-contained computational units organized in a hierarchical architecture. The homogeneity enables simpler management and control of similar computational units, aggregated in multiple levels of hierarchy. Computational units are designed with as little overhead as possible, with additional features and capabilities aggregated at higher levels in the hierarchy. On-chip memory provides storage for content inherently required for basic operation at a particular hierarchy and is coupled with the computational resources in an optimal ratio. Lean control provides just enough signaling to manage only the operations required at a particular hierarchical level.
    Type: Grant
    Filed: April 3, 2018
    Date of Patent: June 13, 2023
    Inventors: Avi Baum, Or Danon, Hadar Zeitlin, Daniel Ciubotariu, Rami Feig
  • Patent number: 11637772
    Abstract: Systems and techniques for machine generation of content names in an information centric network (ICN) are described herein. For example, a node may obtain content. An inference engine may be invoked to produce a name for the content. Once the content is named, the node may respond to an interest packet that includes the name of the content. The response is a data packet that includes the content.
    Type: Grant
    Filed: June 28, 2019
    Date of Patent: April 25, 2023
    Assignee: Intel Corporation
    Inventors: Venkatesan Nallampatti Ekambaram, Satish Chandra Jha, Ned M. Smith, S. M. Iftekharul Alam, Maria Ramirez Loaiza, Yi Zhang, Gabriel Arrobo Vidal
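    The publish/respond cycle the abstract describes might look like the following, with a crude heuristic standing in for the trained inference engine that names content; the name format, the content-store layout, and the packet dictionaries are all invented for the sketch.
    ```python
    import hashlib

    def name_content(content: bytes) -> str:
        """Stand-in for the inference engine: derive a stable name for a
        blob. A real system would run a trained model; a crude 'topic'
        guess plus a digest keeps the sketch self-contained."""
        topic = "video" if content[:4] == b"\x00\x00\x00\x18" else "blob"
        return f"/icn/{topic}/{hashlib.sha256(content).hexdigest()[:12]}"

    store = {}                              # named content store on the node

    def publish(content: bytes) -> str:
        name = name_content(content)        # machine-generated name
        store[name] = content
        return name

    def on_interest(name: str):
        """Answer an interest packet with a data packet if we hold the name."""
        return {"name": name, "data": store[name]} if name in store else None

    n = publish(b"\x00\x00\x00\x18ftypmp42 sample payload")
    print(n, on_interest(n) is not None)
    ```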
  • Patent number: 11620500
    Abstract: A synapse system is provided which includes three transistors and a resistance-switching element arranged between two neurons. A first transistor is connected between the resistance-switching element and one of the neurons. A second transistor and a third transistor are arranged between the two neurons and connected in series, with their series connection coupled to the gate of the first transistor. A first input signal is transmitted from one neuron to the other through the first transistor. A second input signal is transmitted from one neuron to the other through the second transistor and the third transistor. The resistance value of the resistance-switching element is changed based on the time difference between the first input signal and the second input signal.
    Type: Grant
    Filed: January 11, 2018
    Date of Patent: April 4, 2023
    Assignee: WINBOND ELECTRONICS CORP.
    Inventors: Frederick Chen, Ping-Kun Wang, Shao-Ching Liao, Chih-Cheng Fu, Ming-Che Lin, Yu-Ting Chen, Seow-Fong (Dennis) Lim
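    The timing-dependent behavior in the last sentence is the classic spike-timing-dependent plasticity shape, which a behavioral model can capture without the transistor-level detail. The constants and the exponential form below are illustrative assumptions, not values from the patent.
    ```python
    import math

    def update_resistance(r, t_pre, t_post, lr=0.05, tau=20.0):
        """Behavioral model of the synapse: the resistance-switching
        element moves by an amount set by the timing difference between
        the two input signals (STDP shape)."""
        dt = t_post - t_pre
        dw = lr * math.exp(-abs(dt) / tau) * (1 if dt > 0 else -1)
        return r * (1.0 - dw)    # lower resistance == stronger synapse

    r = 1e5                      # ohms, illustrative starting point
    for t_pre, t_post in [(0, 5), (40, 42), (100, 90)]:
        r = update_resistance(r, t_pre, t_post)
        print(f"dt={t_post - t_pre:+d} ms -> R={r:,.0f}")
    ```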
  • Patent number: 11586928
    Abstract: A method and system for incorporating regression into a Stacked Auto Encoder (SAE) utilizing a deep learning-based regression technique that enables joint learning of parameters for a regression model to train the SAE for a regression problem. The method comprises generating a regression model for the SAE for solving the regression problem, wherein the regression model is formulated as a non-convex joint optimization function for an asymmetric SAE. The method further comprises reformulating the non-convex joint optimization function as an Augmented Lagrangian formulation in terms of a plurality of proxy variables and a plurality of hyperparameters. The method comprises splitting the Augmented Lagrangian formulation into sub-problems using the Alternating Direction Method of Multipliers (ADMM) and jointly learning parameters for the regression model to train the SAE for the regression problem. The learned weights enable estimating unknown target values.
    Type: Grant
    Filed: February 1, 2019
    Date of Patent: February 21, 2023
    Assignee: TATA CONSULTANCY SERVICES LIMITED
    Inventors: Tulika Bose, Angshul Majumdar, Tanushyam Chattopadhyay
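    To illustrate the joint-learning idea (encoder weights, codes, and regressor fitted against one combined objective), the sketch below uses plain alternating least squares on a linear toy problem rather than the patent's Augmented Lagrangian/ADMM split; the dimensions and data are made up.
    ```python
    import numpy as np

    # Toy joint objective in the spirit of the abstract:
    #   min_{W, Z, beta}  ||X - W Z||^2 + ||y - beta^T Z||^2,
    # i.e. the code Z must both reconstruct X and predict the targets y.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(20, 50))
    y = rng.normal(size=(50,))
    W = rng.normal(size=(20, 5))
    Z = rng.normal(size=(5, 50))
    beta = rng.normal(size=(5,))

    for _ in range(30):
        # Z-update: both terms involve Z, so stack them into one least squares.
        A = np.vstack([W, beta[None, :]])
        B = np.vstack([X, y[None, :]])
        Z = np.linalg.lstsq(A, B, rcond=None)[0]
        W = np.linalg.lstsq(Z.T, X.T, rcond=None)[0].T   # decoder update
        beta = np.linalg.lstsq(Z.T, y, rcond=None)[0]    # regression update

    print(np.linalg.norm(X - W @ Z), np.linalg.norm(y - beta @ Z))
    ```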
  • Patent number: 11475275
    Abstract: A computer-implemented method for inferring the 3D structure of a genome is provided. The method includes providing genome interaction data and operating an autoencoder comprising a structured sequence of n autoencoder units, each including an encoder unit and a decoder unit, each of which is implemented as a recurrent neural network unit. The method further includes training the autoencoder by feeding all vectors of genome interaction data to the encoder units. The training of the autoencoder units is performed stepwise, using the inner state of the respective previous autoencoder unit in the cascaded sequence, performing backpropagation within each of the autoencoder units after all units have processed their respective input values, and using the output values of the encoder units to derive a 3D model for a visualization of the genome.
    Type: Grant
    Filed: July 18, 2019
    Date of Patent: October 18, 2022
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Maria Anna Rapsomaniki, Bianca-Cristina Cristescu, Maria Rodriguez Martinez
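    The patent maps interaction data to a 3D model with cascaded recurrent autoencoder units; the sketch below substitutes a much simpler gradient-descent embedding (multidimensional-scaling style) purely to illustrate "interaction matrix in, 3D coordinates out". The sizes, the exponential decay kernel, and the learning rate are all assumptions.
    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n = 30
    truth = rng.normal(size=(n, 3))                 # hidden 3D structure
    C = np.exp(-np.linalg.norm(truth[:, None] - truth[None, :], axis=-1))

    coords = rng.normal(size=(n, 3))                # 3D model being learned
    mask = 1.0 - np.eye(n)
    for step in range(2001):
        diff = coords[:, None] - coords[None, :]    # pairwise offsets (n, n, 3)
        dist = np.linalg.norm(diff, axis=-1) + np.eye(n)
        err = (np.exp(-dist) - C) * mask            # reconstruction mismatch
        if step % 1000 == 0:
            print(f"step {step:4d}  loss {np.sum(err ** 2):.3f}")
        # Exact gradient of the squared mismatch w.r.t. the coordinates.
        grad = -4.0 * np.einsum('ij,ijk->ik', err * np.exp(-dist) / dist, diff)
        coords -= 0.02 * grad
    ```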