Patents by Inventor Kenneth Shiring

Kenneth Shiring has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20190130268
    Abstract: Techniques are disclosed for tensor radix point calculation in a neural network. A first tensor is obtained. A first set of weights is generated for the first tensor. An operation is evaluated to be performed by a layer within a deep neural network on the first tensor using the first set of weights. A set of output radix points is determined for the layer within the deep neural network based on the first tensor and the operation. An output tensor is calculated for the layer within the deep neural network using the set of output radix points, the first tensor, and the first set of weights. When the layer reports a hardware overflow, the operation is restarted using an updated set of output radix points. The determination of the output radix points is further based on a radix point for the first tensor and on metadata for the first tensor.
    Type: Application
    Filed: October 30, 2018
    Publication date: May 2, 2019
    Inventors: Kenneth Shiring, Stephen Curtis Johnson
  • Publication number: 20190130276
    Abstract: Techniques are disclosed for tensor manipulation within a neural network, including training the neural network. An input tensor is obtained for manipulation within a deep neural network. The input tensor, which includes fixed-point numerical representations and tensor metadata, is applied to a layer within the deep neural network. The input tensor has variable radix points associated with the fixed-point values of the input tensor. A weighting tensor including metadata is determined for the input tensor applied to the layer. An output tensor is calculated from the layer within the deep neural network based on the input tensor and the weighting tensor. The output tensor has fixed-point values with a second set of variable radix points associated with the fixed-point values of the output tensor. The output tensor includes tensor metadata and is propagated within the deep neural network.
    Type: Application
    Filed: October 25, 2018
    Publication date: May 2, 2019
    Inventors: Kenneth Shiring, Stephen Curtis Johnson
  • Publication number: 20180341734
    Abstract: Systems and methods are disclosed for computing resource configuration based on flow graph translation. First, a high-level description of logic circuitry is obtained and translated to generate a flow graph representing sequential operations. Using the flow graph, similar processing elements in an array are interchangeably configured to perform computational, communication, and storage tasks as needed. The sequential operations are executed using the array of interchangeable processing elements. Data is provided from the storage elements through the communication elements to the computational elements. Computational results are stored in the storage elements. Outputs from some of the computational elements provide inputs to other computational elements. Execution of the instructions can be controlled with time stepping. The processors are reconfigured as needed, based on changes to the flow graph, on subsequent time steps.
    Type: Application
    Filed: August 1, 2018
    Publication date: November 29, 2018
    Inventors: Samit Chaudhuri, Henrik Esbensen, Kenneth Shiring, Peter Ramyalal Suaris
  • Patent number: 10042966
    Abstract: Systems and methods are disclosed for computing resource allocation based on flow graph translation. First, a high-level description of logic circuitry is obtained and translated to generate a flow graph representing sequential operations. Using the flow graph, similar processing elements in an array are interchangeably allocated to perform computational, communication, and storage tasks as needed. The sequential operations are executed using the array of interchangeable processing elements. Data is provided from the storage elements through the communication elements to the computational elements. Computational results are stored in the storage elements. Outputs from some of the computational elements provide inputs to other computational elements. Execution of the instructions can be controlled with time stepping. The processors are reallocated as needed, based on changes to the flow graph.
    Type: Grant
    Filed: October 30, 2015
    Date of Patent: August 7, 2018
    Assignee: Wave Computing, Inc.
    Inventors: Samit Chaudhuri, Henrik Esbensen, Kenneth Shiring, Peter Ramyalal Suaris
  • Publication number: 20160125118
    Abstract: Systems and methods are disclosed for computing resource allocation based on flow graph translation. First, a high-level description of logic circuitry is obtained and translated to generate a flow graph representing sequential operations. Using the flow graph, similar processing elements in an array are interchangeably allocated to perform computational, communication, and storage tasks as needed. The sequential operations are executed using the array of interchangeable processing elements. Data is provided from the storage elements through the communication elements to the computational elements. Computational results are stored in the storage elements. Outputs from some of the computational elements provide inputs to other computational elements. Execution of the instructions can be controlled with time stepping. The processors are reallocated as needed, based on changes to the flow graph.
    Type: Application
    Filed: October 30, 2015
    Publication date: May 5, 2016
    Inventors: Samit Chaudhuri, Henrik Esbensen, Kenneth Shiring, Peter Ramyalal Suaris
  • Publication number: 20070027669
    Abstract: An improved method and system for development of passive simulation clients includes: running a simulation by a simulator; storing at least a portion of information from the simulation; retrieving the stored information by a simulation proxy; and recreating the simulation by the simulation proxy based on the retrieved information. A full or relevant subset of machine states may be stored in a storage mechanism, which is accessed by the simulation client through the simulation proxy. During code development, instead of accessing the simulator directly, the simulation client code is given a cycle-by-cycle view of the simulation model from the storage mechanism, as recreated by the simulation proxy. In this manner, development is faster because a full simulation environment need not be loaded and run. In addition, the machine resources required during client development are drastically reduced.
    Type: Application
    Filed: July 13, 2005
    Publication date: February 1, 2007
    Applicant: International Business Machines Corporation
    Inventors: Anthony Bybell, Kanna Shimizu, Kenneth Shiring
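
The sketches below illustrate, in hypothetical Python, the general ideas behind several of the techniques listed above; none of the code is taken from the patents themselves.

Publication 20190130268 describes determining a set of output radix points for a layer from the input tensor and the operation, and restarting the operation with updated radix points when the layer reports a hardware overflow. The following is a minimal sketch of that general idea, assuming a 16-bit signed fixed-point word; the function names and the decrement-and-retry policy are illustrative assumptions, not details from the patent.

```python
# Hypothetical sketch: per-layer output radix-point selection with an
# overflow-driven restart. All names and the retry policy are invented
# for illustration.
import numpy as np

WORD_BITS = 16  # assumed fixed-point word width


def choose_radix_point(values, bits=WORD_BITS):
    """Pick a radix point (fractional-bit count) that fits the largest
    expected magnitude into a signed fixed-point word."""
    peak = float(np.max(np.abs(values))) or 1.0
    int_bits = max(1, int(np.ceil(np.log2(peak + 1))) + 1)  # +1 for sign
    return max(0, bits - int_bits)  # remaining bits become fractional bits


def quantize(values, radix):
    """Round to fixed point and report whether the word width overflowed."""
    scale = 2 ** radix
    q = np.round(values * scale)
    lo, hi = -(2 ** (WORD_BITS - 1)), 2 ** (WORD_BITS - 1) - 1
    overflow = bool(np.any(q < lo) or np.any(q > hi))
    return np.clip(q, lo, hi) / scale, overflow


def run_layer(inputs, weights, out_radix):
    acc = inputs @ weights                    # the layer operation (matmul)
    out, overflow = quantize(acc, out_radix)  # apply the output radix point
    return out, overflow


def layer_with_restart(inputs, weights):
    # Estimate an output radix point from the tensor and the operation,
    # then restart with fewer fractional bits if an overflow is reported
    # (the overflow is simulated here by quantize).
    radix = choose_radix_point(inputs @ weights)
    while True:
        out, overflow = run_layer(inputs, weights, radix)
        if not overflow or radix == 0:
            return out, radix
        radix -= 1  # widen the integer range and re-run the operation


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    x = rng.normal(size=(4, 8))
    w = rng.normal(size=(8, 3))
    y, rp = layer_with_restart(x, w)
    print("output shape:", y.shape, "chosen output radix point:", rp)
```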
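
Publication 20190130276 centers on tensors that carry fixed-point values, variable radix points, and metadata through a layer, with the output tensor receiving its own set of radix points and metadata. The sketch below shows one way such a structure might look; the FixedTensor class, the per-channel radix policy, and the metadata fields are invented for illustration, and the arithmetic is done in floating point purely for brevity (a real fixed-point implementation would use integer accumulators and shifts).

```python
# Hypothetical sketch: a fixed-point tensor carrying per-channel radix
# points and metadata through a layer. Structure and field names are
# illustrative assumptions only.
from dataclasses import dataclass, field
import numpy as np

WORD_BITS = 16


@dataclass
class FixedTensor:
    q: np.ndarray        # integer fixed-point values
    radix: np.ndarray    # per-channel radix points (fractional bits)
    meta: dict = field(default_factory=dict)

    def to_float(self):
        return self.q / (2.0 ** self.radix)


def from_float(x, meta=None):
    # One radix point per channel (last axis), chosen to fit the channel's
    # peak magnitude into a signed WORD_BITS word.
    peak = np.maximum(np.abs(x).max(axis=tuple(range(x.ndim - 1))), 1e-9)
    int_bits = np.ceil(np.log2(peak + 1)).astype(int) + 1  # +1 for sign
    radix = np.clip(WORD_BITS - int_bits, 0, WORD_BITS - 1)
    q = np.round(x * 2.0 ** radix).astype(np.int32)
    return FixedTensor(q, radix, dict(meta or {}, shape=x.shape))


def dense_layer(inp: FixedTensor, w: FixedTensor) -> FixedTensor:
    # Apply the layer, then requantize the result with a second set of
    # radix points determined from the output itself; the output tensor
    # carries its own metadata forward.
    acc = inp.to_float() @ w.to_float()
    return from_float(acc, meta={"producer": "dense_layer",
                                 "in_radix": inp.radix.tolist(),
                                 "w_radix": w.radix.tolist()})


if __name__ == "__main__":
    rng = np.random.default_rng(1)
    x = from_float(rng.normal(size=(4, 8)), meta={"name": "activations"})
    w = from_float(rng.normal(size=(8, 3)), meta={"name": "weights"})
    y = dense_layer(x, w)
    print("output radix points per channel:", y.radix, y.meta["producer"])
```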
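
Publication 20180341734, patent 10042966, and publication 20160125118 all concern translating a high-level description into a flow graph and executing it, under time stepping, on an array of interchangeable processing elements configured (or allocated) as storage, communication, or computational elements. The toy sketch below mimics that execution model in software; the node format and the round-robin role assignment are assumptions made for illustration and say nothing about the actual hardware.

```python
# Hypothetical sketch: executing a small flow graph on an array of
# interchangeable processing elements, one operation per time step.
# Roles, node names, and the assignment policy are illustrative only.

# A flow-graph node: (result name, operation, operand names).
FLOW_GRAPH = [
    ("t0", "add", ("a", "b")),
    ("t1", "mul", ("t0", "c")),
    ("out", "add", ("t1", "a")),
]

OPS = {"add": lambda x, y: x + y, "mul": lambda x, y: x * y}


class ProcessingElement:
    """A generic element that can be configured, per time step, as a
    storage, communication, or computational element."""
    def __init__(self, pe_id):
        self.pe_id = pe_id
        self.role = None
        self.value = None

    def configure(self, role):
        self.role = role
        return self


def run(flow_graph, inputs, array_size=8):
    pes = [ProcessingElement(i) for i in range(array_size)]
    storage = dict(inputs)  # name -> value held by the storage elements
    for step, (name, op, operands) in enumerate(flow_graph):
        # Reconfigure three elements for this time step: one to hold
        # operands, one to route them, and one to perform the operation.
        store_pe = pes[(3 * step) % array_size].configure("storage")
        comm_pe = pes[(3 * step + 1) % array_size].configure("communication")
        comp_pe = pes[(3 * step + 2) % array_size].configure("computational")
        store_pe.value = [storage[src] for src in operands]  # fetched operands
        comm_pe.value = store_pe.value                       # routed to the ALU
        comp_pe.value = OPS[op](*comm_pe.value)               # computed result
        storage[name] = comp_pe.value                         # written back
        print(f"step {step}: {name} = {op}{tuple(operands)} -> "
              f"{comp_pe.value} on PE{comp_pe.pe_id}")
    return storage


if __name__ == "__main__":
    result = run(FLOW_GRAPH, {"a": 2, "b": 3, "c": 4})
    print("out =", result["out"])
```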
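
Publication 20070027669 describes replaying stored simulation state to a passive simulation client through a simulation proxy, so that client code can be developed without loading a full simulation environment. The sketch below shows the general shape of such a record-and-replay flow; the JSON-lines trace format, the trivial stand-in simulator, and the recorder/proxy class names are assumptions for illustration.

```python
# Hypothetical sketch: record per-cycle machine state from a simulator,
# then replay it cycle by cycle to a passive client through a proxy.
# Trace format and class names are illustrative assumptions.
import json
import os
import tempfile


def run_simulation_and_record(cycles, trace_path):
    """Stand-in for the real simulator: advance a trivial model and store
    a (full or relevant subset of) machine state for every cycle."""
    state = {"pc": 0, "acc": 0}
    with open(trace_path, "w") as f:
        for cycle in range(cycles):
            state = {"pc": state["pc"] + 4, "acc": state["acc"] + cycle}
            f.write(json.dumps({"cycle": cycle, **state}) + "\n")


class SimulationProxy:
    """Presents the same cycle-by-cycle view a live simulator would, but
    recreated from the stored trace, so no simulator needs to be loaded."""
    def __init__(self, trace_path):
        with open(trace_path) as f:
            self._trace = [json.loads(line) for line in f]
        self._cursor = 0

    def step(self):
        snapshot = self._trace[self._cursor]
        self._cursor += 1
        return snapshot

    def peek(self, signal, cycle):
        return self._trace[cycle][signal]


def passive_client(proxy, cycles):
    # Client code under development: it only reads simulation state, so it
    # can be exercised against the proxy instead of the full simulator.
    for _ in range(cycles):
        snap = proxy.step()
        print(f"cycle {snap['cycle']}: pc={snap['pc']} acc={snap['acc']}")


if __name__ == "__main__":
    path = os.path.join(tempfile.gettempdir(), "sim_trace.jsonl")
    run_simulation_and_record(5, path)   # done once, by the simulator
    proxy = SimulationProxy(path)        # reused during client development
    passive_client(proxy, 5)
    print("pc at cycle 3:", proxy.peek("pc", 3))
```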