Patents by Inventor Larry Marvin Wall

Larry Marvin Wall has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20180300615
    Abstract: A deep neural network (DNN) module utilizes parallel kernel and parallel input processing to decrease bandwidth utilization, reduce power consumption, improve neuron multiplier stability, and provide other technical benefits. Parallel kernel processing enables the DNN module to load input data only once for processing by multiple kernels. Parallel input processing enables the DNN module to load kernel data only once for processing with multiple input data. The DNN module can implement other power-saving techniques like clock-gating (i.e., removing the clock from) and power-gating (i.e., removing the power from) banks of accumulators based upon usage of the accumulators. For example, individual banks of accumulators can be power-gated when no accumulators in a bank are in use and none store data for a future calculation. Banks of accumulators can also be clock-gated when no accumulators in a bank are in use but some store data for a future calculation.
    Type: Application
    Filed: April 12, 2018
    Publication date: October 18, 2018
    Inventors: Amol Ashok AMBARDEKAR, Chad Balling McBRIDE, George PETRE, Larry Marvin WALL, Kent D. CEDOLA, Boris BOBROV
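To make the gating policy described in 20180300615 concrete, the following Python sketch models the stated rule: power-gate a bank when nothing is in use and nothing must be retained, clock-gate it when it is idle but still holds data. The `AccumulatorBank` class and its fields are hypothetical illustrations, not the patented hardware.

```python
class AccumulatorBank:
    """Hypothetical model of one bank of accumulators."""

    def __init__(self, size):
        self.in_use = [False] * size      # accumulator currently accepting results
        self.holds_data = [False] * size  # accumulator stores a value needed later
        self.clock_enabled = True
        self.power_enabled = True

    def update_gating(self):
        any_in_use = any(self.in_use)
        any_holds_data = any(self.holds_data)
        if not any_in_use and not any_holds_data:
            # Nothing active and nothing to preserve: remove power entirely.
            self.power_enabled = False
            self.clock_enabled = False
        elif not any_in_use and any_holds_data:
            # Idle but state must be retained: keep power, stop the clock.
            self.power_enabled = True
            self.clock_enabled = False
        else:
            # At least one accumulator is actively in use.
            self.power_enabled = True
            self.clock_enabled = True
```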
  • Publication number: 20180300633
    Abstract: The performance of a neural network (NN) and/or deep neural network (DNN) can be limited by the number of operations being performed as well as management of data among the various memory components of the NN/DNN. Using virtualized hardware iterators, data for processing by the NN/DNN can be traversed and configured to optimize the number of operations as well as memory utilization to enhance the overall performance of a NN/DNN. Operatively, an iterator controller can generate instructions for execution by the NN/DNN representative of one or more desired iterator operation types and to perform one or more iterator operations. Data can be iterated according to a selected iterator operation and communicated to one or more neuron processors of the NN/DNN for processing and output to a destination memory. The iterator operations can be applied to various volumes of data (e.g., blobs) in parallel or multiple slices of the same volume.
    Type: Application
    Filed: September 1, 2017
    Publication date: October 18, 2018
    Inventors: Chad Balling MCBRIDE, George PETRE, Amol Ashok AMBARDEKAR, Kent D. CEDOLA, Larry Marvin WALL, Boris BOBROV
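The virtualized iterator idea in 20180300633 can be approximated in software as a generator that traverses a data volume slice by slice and hands each slice to a processing element. This is a minimal sketch under assumed shapes and names (`iterate_slices`, a NumPy blob); the actual iterator operation types are defined by the patent's hardware.

```python
import numpy as np

def iterate_slices(blob, axis=0):
    """Hypothetical iterator operation: yield 2-D slices of a 3-D volume
    (a "blob") so each slice can be dispatched to a neuron processor."""
    for index in range(blob.shape[axis]):
        yield np.take(blob, index, axis=axis)

# Usage: traverse a 4x3x3 volume slice by slice.
volume = np.arange(36).reshape(4, 3, 3)
for slice_2d in iterate_slices(volume, axis=0):
    pass  # each 3x3 slice would be handed to a neuron processor here
```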
  • Publication number: 20180299943
    Abstract: An exemplary computing environment having a DNN module can maintain one or more bandwidth throttling mechanisms. Illustratively, a first throttling mechanism can specify the number of cycles to wait between transactions on a cooperating fabric component (e.g., data bus). Illustratively, a second throttling mechanism can be a transaction count limiter that operatively sets a threshold on the number of transactions to be processed during a given transaction sequence and limits the number of transactions in flight so that it does not exceed the set threshold. In an illustrative operation, by applying these two calculated throttling parameters, the average bandwidth usage and the peak bandwidth usage can be limited. Operatively, with this fabric bandwidth control, the processing units of the DNN are optimized to process data across each transaction cycle, resulting in enhanced processing and lower power consumption.
    Type: Application
    Filed: April 11, 2018
    Publication date: October 18, 2018
    Inventors: Chad Balling McBRIDE, Timothy Hume HEIL, Amol Ashok AMBARDEKAR, George PETRE, Kent D. CEDOLA, Larry Marvin WALL, Boris BOBROV
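A rough software model of the two throttling mechanisms in 20180299943 appears below: a configurable wait between transactions bounds average bandwidth, and a cap on transactions in flight bounds peak bandwidth. `FabricThrottle` and its timing model are assumptions made for illustration only.

```python
import time

class FabricThrottle:
    """Hypothetical model of the two throttling mechanisms: a fixed wait
    between transactions and a cap on transactions in flight."""

    def __init__(self, wait_cycles, max_in_flight, cycle_time_s=1e-6):
        self.wait_cycles = wait_cycles      # cycles to wait between transactions
        self.max_in_flight = max_in_flight  # transaction count limit
        self.cycle_time_s = cycle_time_s    # assumed duration of one fabric cycle
        self.in_flight = 0

    def issue(self, transaction):
        # Stall while the number of outstanding transactions is at the limit.
        while self.in_flight >= self.max_in_flight:
            time.sleep(self.cycle_time_s)
        self.in_flight += 1
        try:
            transaction()                   # perform the bus transaction
        finally:
            self.in_flight -= 1
        # Enforce the configured gap before the next transaction is issued.
        time.sleep(self.wait_cycles * self.cycle_time_s)

# Usage: at most 4 transactions outstanding, 8 idle cycles between issues.
throttle = FabricThrottle(wait_cycles=8, max_in_flight=4)
throttle.issue(lambda: None)
```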
  • Publication number: 20180300634
    Abstract: A direct memory access (DMA) engine may be responsible for enabling and controlling DMA data flow within a computing system. The DMA engine moves blocks of data, associated with descriptors in a plurality of queues, from a source to a destination memory location or address, independently of the computer system's processor. Based on analysis of the data blocks linked to the descriptors in the queues, the DMA engine and its associated DMA fragmenter ensure that data blocks linked to descriptors in the queues do not remain idle for an excessive period of time. The DMA fragmenter may divide large data blocks into smaller data blocks to ensure that the processing of large data blocks does not preclude the timely processing of smaller data blocks associated with one or more descriptors in the queues. The stored data blocks may be two-dimensional data blocks.
    Type: Application
    Filed: September 12, 2017
    Publication date: October 18, 2018
    Inventors: Chad Balling McBRIDE, Amol Ashok AMBARDEKAR, Kent D. CEDOLA, George PETRE, Larry Marvin Wall, Boris BOBROV
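The fragmentation step in 20180300634 can be illustrated with a few lines of Python: a large block is cut into bounded fragments so that other queued transfers can be interleaved. The function name and the byte-slicing model are hypothetical.

```python
def fragment_block(block, max_fragment_bytes):
    """Hypothetical DMA fragmenter: split a large data block into smaller
    fragments so large transfers do not starve smaller queued transfers."""
    return [block[i:i + max_fragment_bytes]
            for i in range(0, len(block), max_fragment_bytes)]

# Usage: a 10 KB block becomes ten 1 KB fragments that can be interleaved
# with transfers belonging to other descriptors.
fragments = fragment_block(bytes(10 * 1024), max_fragment_bytes=1024)
assert len(fragments) == 10
```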
  • Publication number: 20180300604
    Abstract: A deep neural network (DNN) processor is configured to execute layer descriptors in layer descriptor lists. The descriptors define instructions for performing a forward pass of a DNN by the DNN processor. The layer descriptors can also be utilized to manage the flow of descriptors through the DNN module. For example, layer descriptors can define dependencies upon other descriptors. Descriptors defining a dependency will not execute until the descriptors upon which they are dependent have completed. Layer descriptors can also define a “fence,” or barrier, function that can be used to prevent the processing of upstream layer descriptors until the processing of all downstream layer descriptors is complete. The fence bit guarantees that no other layer descriptors remain in the DNN processing pipeline before the layer descriptor whose fence is asserted is processed.
    Type: Application
    Filed: April 11, 2018
    Publication date: October 18, 2018
    Inventors: Chad Balling McBRIDE, Amol Ashok AMBARDEKAR, Kent D. CEDOLA, George PETRE, Larry Marvin WALL, Boris BOBROV
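A minimal sequential model of the descriptor flow in 20180300604 is sketched below, assuming a `LayerDescriptor` with `depends_on` and `fence` fields (names chosen for illustration). A fence drains everything in flight; a dependency waits only for the named descriptors.

```python
from dataclasses import dataclass, field

@dataclass
class LayerDescriptor:
    """Hypothetical layer descriptor with dependency and fence fields."""
    name: str
    depends_on: set = field(default_factory=set)  # descriptor names that must finish first
    fence: bool = False                           # drain the whole pipeline before executing

def run_descriptor_list(descriptors):
    """Sequential model: issue descriptors in order, honouring fences and
    explicit dependencies before each issue."""
    completed, in_flight = set(), set()

    def drain(names):
        completed.update(names)
        in_flight.difference_update(names)

    for desc in descriptors:
        if desc.fence:
            drain(set(in_flight))           # fence: pipeline must be empty first
        drain(desc.depends_on & in_flight)  # dependency: wait for the named descriptors
        in_flight.add(desc.name)            # issue this descriptor to the pipeline
    drain(set(in_flight))
    return completed

# Usage: "conv2" depends on "conv1"; "output" fences the whole pipeline.
layers = [LayerDescriptor("conv1"),
          LayerDescriptor("conv2", depends_on={"conv1"}),
          LayerDescriptor("output", fence=True)]
assert run_descriptor_list(layers) == {"conv1", "conv2", "output"}
```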
  • Publication number: 20180300613
    Abstract: The performance of a neural network (NN) can be limited by the number of operations being performed. Using a line buffer that is directed to shift a memory block by a selected shift stride for cooperating neurons, data that operatively resides in memory and would otherwise require multiple write cycles into a cooperating line buffer can be processed in a single line buffer write cycle, thereby enhancing the performance of a NN/DNN. A controller and/or iterator can generate one or more instructions having the memory block shifting values for communication to the line buffer. The shifting values can be calculated using various characteristics of the input data as well as the NN/DNN, inclusive of the data dimensions. The line buffer can read data for processing, shift the data of the memory block, and write the data in the line buffer for subsequent processing.
    Type: Application
    Filed: December 1, 2017
    Publication date: October 18, 2018
    Inventors: George PETRE, Chad Balling McBRIDE, Amol Ashok AMBARDEKAR, Kent D. CEDOLA, Larry Marvin WALL, Boris BOBROV
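The shift-stride write in 20180300613 can be pictured as follows: one memory block is written into several line-buffer rows, each offset by the stride, in what would be a single buffer write cycle in hardware. The shapes, the stride value, and the NumPy representation are assumptions made for the sketch.

```python
import numpy as np

def write_with_shift(line_buffer, memory_block, shift_stride):
    """Hypothetical single-cycle line-buffer write: the memory block is shifted
    by the computed stride so each cooperating neuron sees its own window,
    instead of performing one write per neuron."""
    width = line_buffer.shape[1]
    for row in range(line_buffer.shape[0]):
        start = row * shift_stride
        line_buffer[row, :] = memory_block[start:start + width]
    return line_buffer

# Usage: a 12-element block shared across 3 rows with a stride of 2.
block = np.arange(12)
buf = np.zeros((3, 8), dtype=block.dtype)
write_with_shift(buf, block, shift_stride=2)
```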
  • Publication number: 20180300606
    Abstract: A deep neural network (“DNN”) module can compress and decompress neuron-generated activation data to reduce the utilization of memory bus bandwidth. The compression unit can receive an uncompressed chunk of data generated by a neuron in the DNN module. The compression unit generates a mask portion and a data portion of a compressed output chunk. The mask portion encodes the presence and location of the zero and non-zero bytes in the uncompressed chunk of data. The data portion stores truncated non-zero bytes from the uncompressed chunk of data. A decompression unit can receive a compressed chunk of data from memory in the DNN processor or memory of an application host. The decompression unit decompresses the compressed chunk of data using the mask portion and the data portion. This can reduce memory bus utilization, allow a DNN module to complete processing operations more quickly, and reduce power consumption.
    Type: Application
    Filed: April 13, 2018
    Publication date: October 18, 2018
    Inventors: Joseph Leon CORKERY, Benjamin Eliot LUNDELL, Larry Marvin WALL, Chad Balling McBRIDE, Amol Ashok AMBARDEKAR, George PETRE, Kent D. CEDOLA, Boris BOBROV
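The mask/data encoding in 20180300606 maps naturally to a short round-trip example. The sketch below keeps whole non-zero bytes rather than truncating them as the patent describes, and the exact mask layout is assumed, so treat it as an approximation of the scheme rather than the patented format.

```python
def compress_chunk(chunk: bytes):
    """Sketch of mask/data compression: the mask carries one bit per input
    byte marking non-zero positions; the data portion keeps only the
    non-zero bytes."""
    mask = bytearray((len(chunk) + 7) // 8)
    data = bytearray()
    for i, b in enumerate(chunk):
        if b != 0:
            mask[i // 8] |= 1 << (i % 8)
            data.append(b)
    return bytes(mask), bytes(data)

def decompress_chunk(mask: bytes, data: bytes, length: int) -> bytes:
    """Rebuild the uncompressed chunk from the mask and data portions."""
    out = bytearray(length)
    it = iter(data)
    for i in range(length):
        if mask[i // 8] & (1 << (i % 8)):
            out[i] = next(it)
    return bytes(out)

# Round trip: sparse activation data compresses to a 1-byte mask + 2 data bytes.
original = bytes([0, 7, 0, 0, 9, 0, 0, 0])
mask, data = compress_chunk(original)
assert decompress_chunk(mask, data, len(original)) == original
```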
  • Publication number: 20180300602
    Abstract: The performance of a neural network (NN) and/or deep neural network (DNN) can be limited by the number of operations being performed as well as management of data among the various memory components of the NN/DNN. Using a directed line buffer that operatively inserts one or more shifting bits in data blocks to be processed, data read/writes to the line buffer can be optimized for processing by the NN/DNN, thereby enhancing the overall performance of a NN/DNN. Operatively, an operations controller and/or iterator can generate one or more instructions having a calculated shifting bit(s) for communication to the line buffer. Illustratively, the shifting bit(s) can be calculated using various characteristics of the input data as well as the NN/DNN, inclusive of the data dimensions. The line buffer can read data for processing, insert the shifting bits, and write the data in the line buffer for subsequent processing by cooperating processing unit(s).
    Type: Application
    Filed: October 17, 2017
    Publication date: October 18, 2018
    Inventors: George PETRE, Chad Balling McBRIDE, Amol Ashok AMBARDEKAR, Kent D. CEDOLA, Larry Marvin WALL, Boris BOBROV
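The directed line buffer of 20180300602 is closely related to the previous entry; here the calculated shift is realized by inserting padding ahead of each row before it is written, so the data lands pre-aligned for the processing units. The padding representation below is purely illustrative.

```python
def insert_shift_padding(rows, shift_elems, pad_value=0):
    """Hypothetical directed write: prepend a growing run of padding elements
    to each successive row so the rows land pre-aligned in the line buffer,
    avoiding extra read/modify/write passes."""
    return [[pad_value] * (shift_elems * i) + list(row)
            for i, row in enumerate(rows)]

# Row k is offset by k * 2 padding elements.
padded = insert_shift_padding([[1, 2, 3], [4, 5, 6], [7, 8, 9]], shift_elems=2)
assert padded[2][:4] == [0, 0, 0, 0]
```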
  • Publication number: 20180300601
    Abstract: Optimized memory usage and management is crucial to the overall performance of a neural network (NN) or deep neural network (DNN) computing environment. Using various characteristics of the input data dimension, an apportionment sequence is calculated for the input data to be processed by the NN or DNN that optimizes the efficient use of the local and external memory components. The apportionment sequence can describe how to parcel the input data (and its associated processing parameters—e.g., processing weights) into one or more portions as well as how such portions of input data (and its associated processing parameters) are passed between the local memory, external memory, and processing unit components of the NN or DNN. Additionally, the apportionment sequence can include instructions to store generated output data in the local and/or external memory components so as to optimize the efficient use of the local and/or external memory components.
    Type: Application
    Filed: September 28, 2017
    Publication date: October 18, 2018
    Inventors: Kent D. CEDOLA, Chad Balling McBRIDE, Amol Ashok AMBARDEKAR, George PETRE, Larry Marvin WALL, Boris BOBROV
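The apportionment sequence of 20180300601 can be sketched as a simple planner that splits the input into portions sized to the local memory, with everything else staged in external memory. The sizing rule and function signature are assumptions for illustration.

```python
def plan_apportionment(total_rows, bytes_per_row, local_memory_bytes):
    """Hypothetical apportionment plan: split the input into portions that fit
    local memory, so each portion is staged local -> processed -> written back,
    with the remainder held in external memory."""
    rows_per_portion = max(1, local_memory_bytes // bytes_per_row)
    portions = []
    start = 0
    while start < total_rows:
        end = min(start + rows_per_portion, total_rows)
        portions.append((start, end))   # rows [start, end) are copied to local memory
        start = end
    return portions

# Usage: 1000 rows of 256 bytes with 64 KB local memory -> four portions.
assert plan_apportionment(1000, 256, 64 * 1024) == [
    (0, 256), (256, 512), (512, 768), (768, 1000)]
```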
  • Publication number: 20180300617
    Abstract: An exemplary artificial intelligence/machine learning hardware computing environment having an exemplary DNN module cooperating with one or more memory components can perform data sharing and distribution as well as reuse of buffer data to reduce the number of memory component reads/writes, thereby enhancing overall hardware performance and reducing power consumption. Illustratively, data from a cooperating memory component is read according to a selected operation of the exemplary hardware and written to a corresponding other memory component for use by one or more processing elements (e.g., neurons). The data is read in such a manner as to optimize the engagement of the one or more processing elements for each processing cycle as well as to reuse data previously stored in the one or more cooperating memory components. Operatively, the written data is copied to a shadow memory buffer prior to being consumed by the processing elements.
    Type: Application
    Filed: April 13, 2018
    Publication date: October 18, 2018
    Inventors: Chad Balling McBRIDE, Amol Ashok AMBARDEKAR, Kent D. CEDOLA, Boris BOBROV, George PETRE, Larry Marvin WALL
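One way to picture the shadow-buffer staging in 20180300617 is a double-buffered stage that snapshots newly written data before the neurons consume it, while reusing any overlap with the previous cycle instead of re-reading it. The class below is a speculative software analogy, not the hardware design.

```python
class DoubleBufferedStage:
    """Speculative analogy of shadow-buffer staging: newly written data is
    snapshotted into a shadow buffer before the processing elements consume it,
    and any overlap with the previous cycle is reused rather than re-read."""

    def __init__(self):
        self.primary = []   # data most recently staged from memory
        self.shadow = []    # stable copy consumed by the processing elements

    def stage(self, new_data, reuse_count=0):
        # Reuse the tail of the previous cycle's data instead of re-reading it.
        reused = self.primary[-reuse_count:] if reuse_count else []
        self.primary = reused + list(new_data)
        self.shadow = list(self.primary)   # copy to the shadow buffer

    def consume(self):
        return self.shadow

# Usage: the second cycle reuses the last two elements of the first.
stage = DoubleBufferedStage()
stage.stage([1, 2, 3, 4])
stage.stage([5, 6], reuse_count=2)
assert stage.consume() == [3, 4, 5, 6]
```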
  • Publication number: 20180300603
    Abstract: The performance of a neural network (NN) and/or deep neural network (DNN) can be limited by the number of operations being performed as well as memory data management of a NN/DNN. Using vector quantization of neuron weight values, the processing of data by neurons can be optimized in terms of the number of operations as well as memory utilization to enhance the overall performance of a NN/DNN. Operatively, one or more contiguous segments of weight values can be converted into one or more vectors of arbitrary length, and each of the one or more vectors can be assigned an index. The generated indexes can be stored in an exemplary vector quantization lookup table and retrieved on the fly by exemplary fast weight lookup hardware at run time, as part of an exemplary data processing function of the NN, in an inline de-quantization operation to obtain the one or more needed neuron weight values.
    Type: Application
    Filed: January 26, 2018
    Publication date: October 18, 2018
    Inventors: Amol Ashok AMBARDEKAR, Aleksandar TOMIC, Chad Balling McBRIDE, George PETRE, Kent D. CEDOLA, Larry Marvin Wall, Boris BOBROV
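The vector-quantization scheme in 20180300603 reduces weight storage to small indexes into a lookup table (codebook); de-quantization is then a table lookup at run time. The codebook shape and values below are invented for illustration.

```python
import numpy as np

def dequantize_weights(indexes, codebook):
    """Sketch of inline de-quantization: each stored index selects a vector of
    weights from the lookup table, reconstructing the weight segment at run
    time. Names and shapes are illustrative, not the patent's."""
    return np.concatenate([codebook[i] for i in indexes])

# A codebook of 4 weight vectors of length 3; the layer stores only indexes.
codebook = np.array([[0.0, 0.1, 0.2],
                     [0.5, 0.5, 0.5],
                     [1.0, 0.9, 0.8],
                     [0.2, 0.0, 0.2]])
indexes = [2, 0, 0, 1]                  # 12 weights stored as 4 small indexes
weights = dequantize_weights(indexes, codebook)
assert weights.shape == (12,)
```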
  • Publication number: 20180300605
    Abstract: A deep neural network (“DNN”) module can determine whether processing of certain values in an input buffer or a weight buffer by neurons can be skipped. For example, the DNN module might determine whether neurons can skip the processing of values in entire columns of a neuron buffer. Processing of these values might be skipped if an entire column of an input buffer or a weight buffer contains only zeros, for example. The DNN module can also determine whether processing of single values in rows of the input buffer or the weight buffer can be skipped (e.g., if the values are zero). Neurons that complete their processing early as a result of skipping operations can assist other neurons with their processing. A combination operation can be performed following the completion of processing that transfers the results of the processing operations performed by a neuron to their correct owner.
    Type: Application
    Filed: April 13, 2018
    Publication date: October 18, 2018
    Inventors: Amol Ashok AMBARDEKAR, Chad Balling McBRIDE, George PETRE, Larry Marvin WALL, Kent D. CEDOLA, Boris BOBROV
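The zero-skipping test in 20180300605 can be expressed as a column filter: a column contributes nothing to the accumulation when either its input column or its weight column is all zeros, so it can be skipped. The buffer layout assumed below is illustrative only.

```python
import numpy as np

def columns_to_process(input_cols, weight_cols):
    """Hypothetical zero-skip check: a column is skipped when either the input
    column or the weight column is entirely zero, since it cannot change the
    accumulated result."""
    keep = []
    for c in range(input_cols.shape[1]):
        if np.any(input_cols[:, c]) and np.any(weight_cols[:, c]):
            keep.append(c)
    return keep

# Column 1 of the inputs is all zeros, so only columns 0 and 2 are processed.
inputs = np.array([[1, 0, 3],
                   [2, 0, 0]])
weights = np.array([[1, 1, 1],
                    [1, 1, 2]])
assert columns_to_process(inputs, weights) == [0, 2]
```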
  • Patent number: 7149849
    Abstract: Data is served from a data source to a user. The data source has a plurality of pre-defined user groups. A request is received from the user for the data from the data source and a cache key corresponding to the requesting user is generated based on a set of the user groups of such user. The generated cache key represents access rights for the user based on the set of the user groups of the user. Thereafter, it is determined whether any data that satisfies the request is stored in the cache with the generated cache key.
    Type: Grant
    Filed: August 2, 2005
    Date of Patent: December 12, 2006
    Assignee: Microsoft Corporation
    Inventors: Larry Marvin Wall, Glen Buhlmann, Nicholas Duncan, Kristof Roomp
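The group-based cache key in 7149849 (and the related 6959362 / 20040225848 below) boils down to deriving one deterministic key from a user's set of groups, so users with identical access rights share cache entries. The hashing and canonicalization below are assumptions; the patent does not prescribe a particular key format.

```python
import hashlib

def generate_cache_key(user_groups):
    """Sketch of the group-based cache key: users with the same set of groups
    (hence the same access rights) map to the same key, so cached results can
    be shared safely between them."""
    canonical = "|".join(sorted(user_groups))
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()

# Two users in the same groups share one cache entry; a third does not.
cache = {}
key_a = generate_cache_key({"sales", "managers"})
key_b = generate_cache_key({"managers", "sales"})
key_c = generate_cache_key({"sales"})
assert key_a == key_b and key_a != key_c
cache[key_a] = "query results visible to the sales+managers group set"
```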
  • Patent number: 6959362
    Abstract: Data is served from a data source to a user by way of an interface having a cache. The data source has a plurality of pre-defined user groups. The interface receives a request from the user for the data from the data source and requests the data source to provide a cache key corresponding to the requesting user. The data source generates the cache key for the requesting user based on a set of the user groups of such user and returns the generated cache key to the interface. The generated cache key represents exact access rights for the user based on the set of the user groups of the user. The interface thereafter determines whether any data that satisfies the request is stored in the cache with the generated cache key.
    Type: Grant
    Filed: May 7, 2003
    Date of Patent: October 25, 2005
    Assignee: Microsoft Corporation
    Inventors: Larry Marvin Wall, Glen Buhlmann, Nicholas Duncan, Kristof Roomp
  • Publication number: 20040225848
    Abstract: Data is served from a data source to a user by way of an interface having a cache. The data source has a plurality of pre-defined user groups. The interface receives a request from the user for the data from the data source and requests the data source to provide a cache key corresponding to the requesting user. The data source generates the cache key for the requesting user based on a set of the user groups of such user and returns the generated cache key to the interface. The generated cache key represents exact access rights for the user based on the set of the user groups of the user. The interface thereafter determines whether any data that satisfies the request is stored in the cache with the generated cache key.
    Type: Application
    Filed: May 7, 2003
    Publication date: November 11, 2004
    Applicant: Microsoft Corporation
    Inventors: Larry Marvin Wall, Glen Buhlmann, Nicholas Duncan, Kristof Roomp