Patents by Inventor Alan Gatherer

Alan Gatherer has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20170169034
    Abstract: A data warehouse engine (DWE) includes a central processing unit (CPU) core and a first data organization unit (DOU), where the first DOU is configured to aggregate read operations. The DWE also includes a first command queue coupled between the CPU core and the first DOU, where the first command queue is configured to convey commands from the CPU core to the first DOU.
    Type: Application
    Filed: November 30, 2016
    Publication date: June 15, 2017
    Inventors: Ashish Rai Shrivastava, Alex Elisa Chandra, Mark Brown, Debashis Bhattacharya, Alan Gatherer
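
To make the division of labor concrete, here is a minimal Python sketch of a command queue carrying read-aggregation commands from a CPU core to a DOU, in the spirit of publication 20170169034; the class and method names (DataWarehouseEngine, aggregate_reads) and the dictionary-backed memory are illustrative assumptions, not the patented design.

```python
from collections import deque

class DataOrganizationUnit:
    """Behavioral stand-in for a DOU that aggregates read operations."""
    def __init__(self, memory):
        self.memory = memory

    def aggregate_reads(self, addresses):
        # Coalesce the requested addresses and return their values in one batch.
        return [self.memory[a] for a in sorted(set(addresses))]

class DataWarehouseEngine:
    def __init__(self, memory):
        self.dou = DataOrganizationUnit(memory)
        self.command_queue = deque()          # conveys commands from the CPU core to the DOU

    def cpu_issue(self, addresses):
        """CPU core side: push a read-aggregation command into the queue."""
        self.command_queue.append(("AGGREGATE_READ", addresses))

    def dou_service(self):
        """DOU side: drain the queue and execute each command."""
        results = []
        while self.command_queue:
            op, addresses = self.command_queue.popleft()
            if op == "AGGREGATE_READ":
                results.append(self.dou.aggregate_reads(addresses))
        return results

dwe = DataWarehouseEngine(memory={i: i * 10 for i in range(16)})
dwe.cpu_issue([3, 1, 3, 7])
print(dwe.dou_service())   # [[10, 30, 70]]
```
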
  • Publication number: 20170168792
    Abstract: A method includes obtaining, by a first processor, a first software architecture description file and obtaining, by the first processor, a platform independent model file. The method also includes obtaining, by the first processor, a platform architecture definition file and performing, by the first processor, a first source-to-source compilation in accordance with the first software architecture description file, the platform independent model file, and the platform architecture definition file, to produce generated interface code. Additionally, the method includes generating, by the first processor, run time code, in accordance with the generated interface code and running, by a second processor in real time, the run time code.
    Type: Application
    Filed: December 14, 2016
    Publication date: June 15, 2017
    Inventors: Debashis Bhattacharya, Alan Gatherer, Mark Brown, Lee Dobson McFearin, Alex Elisa Chandra, Ashish Rai Shrivastava
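
The compile-then-run flow in publication 20170168792 can be pictured with a toy source-to-source step. The sketch below invents three tiny input structures (sw_arch_desc, platform_independent_model, platform_arch_def) and emits interface code as a string before "running" it; none of these formats come from the application.

```python
def source_to_source_compile(sw_arch_desc, platform_independent_model, platform_arch_def):
    """Toy source-to-source step: emit interface code gluing model handlers to platform queues."""
    lines = []
    for port in sw_arch_desc["ports"]:
        queue = platform_arch_def["queues"][port]
        func = platform_independent_model["handlers"][port]
        lines.append(f"def {port}_interface(payload):")
        lines.append(f"    return {func}(payload)  # bound to hardware queue '{queue}'")
    return "\n".join(lines)

def generate_run_time_code(interface_code):
    """Wrap the generated interface code into something the target can execute."""
    return interface_code + "\n\nresult = rx_interface(21)\n"

# Hypothetical inputs for illustration only.
sw_arch_desc = {"ports": ["rx"]}
platform_independent_model = {"handlers": {"rx": "double"}}
platform_arch_def = {"queues": {"rx": "msgq0"}}

interface_code = source_to_source_compile(sw_arch_desc, platform_independent_model, platform_arch_def)
run_time_code = generate_run_time_code(interface_code)

# "Second processor" stand-in: execute the generated code in a fresh namespace.
namespace = {"double": lambda x: 2 * x}
exec(run_time_code, namespace)
print(namespace["result"])   # 42
```
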
  • Publication number: 20170163698
    Abstract: A data streaming unit (DSU) and a method for operating a DSU are disclosed. In an embodiment the DSU includes a memory interface configured to be connected to a storage unit, a compute engine interface configured to be connected to a compute engine (CE) and an address generator configured to manage address data representing address locations in the storage unit. The data streaming unit further includes a data organization unit configured to access data in the storage unit and to reorganize the data to be forwarded to the compute engine, wherein the memory interface is communicatively connected to the address generator and the data organization unit, wherein the address generator is communicatively connected to the data organization unit, and wherein the data organization unit is communicatively connected to the compute engine interface.
    Type: Application
    Filed: December 3, 2015
    Publication date: June 8, 2017
    Inventors: Ashish Rai Shrivastava, Alan Gatherer, Sushma Wokhlu
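
A behavioral sketch of the DSU described in publication 20170163698: an address generator produces a strided access pattern, and a data organization unit fetches through a memory-interface stand-in, reorganizes the data, and forwards it to a compute-engine callback. The strided pattern and the reversal used as the reorganization step are assumptions for illustration only.

```python
class AddressGenerator:
    """Produces the storage-unit addresses for a strided access pattern."""
    def __init__(self, base, stride, count):
        self.addresses = [base + i * stride for i in range(count)]

class DataOrganizationUnit:
    """Fetches data via the memory interface and reorganizes it for the compute engine."""
    def __init__(self, storage):
        self.storage = storage                      # memory interface stand-in

    def stream(self, address_generator, compute_engine):
        gathered = [self.storage[a] for a in address_generator.addresses]
        reorganized = list(reversed(gathered))      # placeholder reorganization step
        compute_engine(reorganized)                 # compute engine interface stand-in

storage = {addr: addr * addr for addr in range(64)}
dsu = DataOrganizationUnit(storage)
dsu.stream(AddressGenerator(base=2, stride=4, count=4),
           compute_engine=lambda block: print("CE received", block))
```
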
  • Publication number: 20170153824
    Abstract: A method, system, and architecture are provided for efficiently accessing data in a memory shared by multiple processor cores, reducing the probability of bank conflicts and decreasing latency. In an embodiment, a method for accessing data in a memory includes determining, by a scheduler, a read pattern for reading data from memory to serve requests in a plurality of bank queues, the memory comprising a plurality of memory banks and a plurality of coding banks, the coding banks storing a coded version of at least some of the data stored in the plurality of memory banks; reading a first data from a first memory bank; reading coded data from one of the coding banks; and determining a second data according to the coded data and the first data.
    Type: Application
    Filed: December 1, 2015
    Publication date: June 1, 2017
    Inventors: Hao Luan, Alan Gatherer, Sriram Vishwanath, Casen Hunger, Hardik Jain
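
The coding-bank idea in publication 20170153824 is easiest to see with XOR parity. The sketch below assumes the coding bank stores the XOR of two data banks, so a second value can be determined from one bank read plus one coding-bank read; the actual coding scheme in the application may differ.

```python
# Two memory banks plus one coding bank holding the XOR of their rows.
bank0 = [5, 9, 12, 7]
bank1 = [3, 8, 1, 6]
coding_bank = [a ^ b for a, b in zip(bank0, bank1)]

def serve_conflicting_reads(row):
    """Serve requests for bank0[row] and bank1[row] in one pass:
    one read hits bank0, the other is reconstructed from the coding bank."""
    first_data = bank0[row]                   # read from the first memory bank
    coded_data = coding_bank[row]             # read from the coding bank
    second_data = coded_data ^ first_data     # recover bank1[row] without touching bank1
    return first_data, second_data

assert serve_conflicting_reads(2) == (12, 1)
```
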
  • Publication number: 20170139740
    Abstract: An embodiment method includes receiving, by an intellectual property (IP) block within a computing system, a transaction request and determining, by the IP block, a context corresponding to the transaction request. The method further includes determining, by the IP block, a view of the computing system defined by the context and processing, by the IP block, the transaction request in accordance with the view of the computing system defined by the context.
    Type: Application
    Filed: November 12, 2015
    Publication date: May 18, 2017
    Applicant: FUTUREWEI TECHNOLOGIES, INC.
    Inventors: Lee Dobson McFearin, Alan Gatherer, Yan Bei
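
A toy model of the context-defined views in publication 20170139740: a hypothetical CONTEXT_VIEWS table maps each context to an address window, and the IP block processes a transaction only within that window. The table contents and field names are invented for illustration.

```python
# Hypothetical context table: each context exposes a different "view" (address window)
# of the computing system to the IP block.
CONTEXT_VIEWS = {
    "secure_os": {"base": 0x0000, "limit": 0x0FFF},
    "guest_vm":  {"base": 0x1000, "limit": 0x1FFF},
}

def determine_context(transaction):
    # A real IP block might derive this from a stream or master ID; here it is a field.
    return transaction["context_id"]

def process_transaction(transaction):
    context = determine_context(transaction)
    view = CONTEXT_VIEWS[context]
    address = view["base"] + transaction["offset"]
    if address > view["limit"]:
        raise PermissionError("access outside the view defined by this context")
    return address

print(hex(process_transaction({"context_id": "guest_vm", "offset": 0x40})))  # 0x1040
```
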
  • Publication number: 20170103076
    Abstract: A computation system-on-a-chip (CSoC) includes a first scalable distributed real-time Data Warehousing (sdrDW) engine and a network interface coupled to the first sdrDW engine, where the network interface is coupled to an interconnect, and where the CSoC is configured to transmit a task request over the interconnect to a first networked bulk storage controller (NBSC) requesting that a task be performed on a bulk storage medium.
    Type: Application
    Filed: September 13, 2016
    Publication date: April 13, 2017
    Inventors: Debashis Bhattacharya, Alan Gatherer, Alex Elisa Chandra, Mark Brown, Hao Luan, Ashish Rai Shrivastava
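
A message-passing sketch of the task offload in publication 20170103076, with a Python queue standing in for the interconnect and a dictionary standing in for the bulk storage medium; the task format and the STORE_TABLE operation are assumptions.

```python
import queue

interconnect = queue.Queue()      # stand-in for the interconnect between CSoC and NBSC

class NetworkedBulkStorageController:
    """NBSC stand-in: executes tasks against a bulk storage medium (here, a dict)."""
    def __init__(self):
        self.bulk_storage = {}

    def service(self):
        task = interconnect.get()
        if task["op"] == "STORE_TABLE":
            self.bulk_storage[task["name"]] = task["rows"]
            return f"stored {len(task['rows'])} rows as '{task['name']}'"

def sdrdw_submit_task(name, rows):
    """sdrDW engine side: package a task request and push it onto the interconnect."""
    interconnect.put({"op": "STORE_TABLE", "name": name, "rows": rows})

sdrdw_submit_task("events", [(1, "boot"), (2, "tx")])
print(NetworkedBulkStorageController().service())
```
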
  • Publication number: 20170103093
    Abstract: A method includes receiving, by a real-time Data Warehouse (rDW) from a first task, a first dataset and spreading the first dataset to produce a first plurality of objects, where the first plurality of objects includes a first object and a second object. The method also includes storing the first object in a first location in an rDW data repository and storing the second object in a second location in the rDW data repository.
    Type: Application
    Filed: May 31, 2016
    Publication date: April 13, 2017
    Inventors: Alex Elisa Chandra, Mark Brown, Debashis Bhattacharya, Alan Gatherer
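
A minimal sketch of the spread-and-store behavior in publication 20170103093: an incoming dataset is split into objects and each object lands at a distinct repository location. The fixed object size and the scatter stride are illustrative choices, not taken from the application.

```python
class RealTimeDataWarehouse:
    """Minimal rDW stand-in: spread an incoming dataset into fixed-size objects
    and scatter them across repository locations."""
    def __init__(self, object_size=4):
        self.object_size = object_size
        self.repository = {}                 # location -> object
        self.next_location = 0

    def ingest(self, dataset):
        locations = []
        for i in range(0, len(dataset), self.object_size):
            obj = dataset[i:i + self.object_size]       # "spread" step
            self.repository[self.next_location] = obj   # store at a distinct location
            locations.append(self.next_location)
            self.next_location += 7                     # arbitrary scatter stride
        return locations

rdw = RealTimeDataWarehouse()
print(rdw.ingest(list(range(10))))   # [0, 7, 14]
```
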
  • Patent number: 9612651
    Abstract: Function resources/memory resources and an associated resource controller configured to assign a first portion of the function resources/memory resources to at least one processing element in response to an access request from the processing element. The resource controller changes a power mode of the first portion of the function resources/memory resources as a function of the first portion assignment, and leaves an unassigned portion of the function resources/memory resources in a power down mode in a self-governing manner. The resource controller enables the processing element to access the first portion of the function resources/memory resources in response to the access request received from the processing element. The function resources/memory resources, resource controllers and one or more processing elements may comprise a system on a chip (SoC).
    Type: Grant
    Filed: January 21, 2015
    Date of Patent: April 4, 2017
    Assignee: FUTUREWEI TECHNOLOGIES, INC.
    Inventors: Hao Luan, Alan Gatherer
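
The self-governing power behavior in patent 9612651 (and its pre-grant publication 20160116971 later in this list, which carries the same abstract) can be sketched as a controller that powers up only the banks it assigns and leaves the rest off; the bank pool and first-fit assignment below are assumptions.

```python
class ResourceController:
    """Sketch of self-governing power management for a pool of memory banks."""
    def __init__(self, num_banks):
        self.power_mode = {b: "OFF" for b in range(num_banks)}   # all banks start powered down
        self.owner = {}

    def request_access(self, processing_element, banks_needed):
        assigned = [b for b, mode in self.power_mode.items() if mode == "OFF"][:banks_needed]
        for bank in assigned:
            self.power_mode[bank] = "ON"          # power up only the assigned portion
            self.owner[bank] = processing_element
        return assigned                            # unassigned banks remain powered down

controller = ResourceController(num_banks=8)
print(controller.request_access("PE0", banks_needed=2))   # [0, 1]
print(controller.power_mode)                               # banks 2..7 still "OFF"
```
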
  • Publication number: 20170052762
    Abstract: An apparatus is configured to perform a method that includes dividing a complex number into a real portion and an imaginary portion. The method also includes determining a region among a plurality of regions in a matrix based on a magnitude of the real portion and a magnitude of the imaginary portion, wherein each region of the plurality of regions includes three sub-regions. The method further includes determining a sub-region among the three sub-regions of the determined region based on the magnitude of the real portion and the magnitude of the imaginary portion. In addition, the method includes coding the real portion and the imaginary portion of the complex number using a common exponent, wherein the common exponent depends on the determined region and the coding depends on the determined sub-region.
    Type: Application
    Filed: August 20, 2015
    Publication date: February 23, 2017
    Inventor: Alan Gatherer
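
A hedged sketch of region/sub-region coding with a shared exponent, loosely modeled on publication 20170052762. The region is taken from the larger magnitude's binary exponent and the three sub-regions from the ratio of real to imaginary magnitude; these particular rules are invented for illustration and are not the patented mapping.

```python
import math

def encode_complex(z):
    """Illustrative region/sub-region coding of a complex number with a common exponent."""
    re, im = abs(z.real), abs(z.imag)
    region = max(0, math.floor(math.log2(max(re, im, 1e-30))))   # region from the larger magnitude
    if re > 2 * im:
        sub_region = 0           # real part dominates
    elif im > 2 * re:
        sub_region = 1           # imaginary part dominates
    else:
        sub_region = 2           # comparable magnitudes
    common_exponent = region
    scale = 2.0 ** common_exponent
    mantissas = (z.real / scale, z.imag / scale)   # both parts share one exponent
    return region, sub_region, common_exponent, mantissas

print(encode_complex(complex(6.0, 0.5)))   # (2, 0, 2, (1.5, 0.125))
```
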
  • Publication number: 20170031619
    Abstract: A method includes receiving a first request, from a first master core, to access data in one of a plurality of memory banks. It is determined whether an access to the data is stalled by virtue of a second request, from a second master core, to access the data in the one of the plurality of memory banks, the second request currently being serviced. In response to a determination that the access to the requested data is stalled, the first request is serviced by accessing data in one of a plurality of coding banks, each coding bank smaller in size than each memory bank.
    Type: Application
    Filed: July 28, 2015
    Publication date: February 2, 2017
    Inventors: Hao Luan, Alan Gatherer, Sriram Vishwanath, Casen Hunger, Hardik Jain
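
A sketch of the stall-bypass read in publication 20170031619, assuming XOR coding banks over pairs of memory banks: when the target bank is busy serving another master core, the value is rebuilt from the partner bank and the coding bank.

```python
# Memory banks plus coding banks holding the XOR of bank pairs (bank i with bank i+1).
memory_banks = {0: [4, 7], 1: [9, 2], 2: [5, 5], 3: [1, 8]}
coding_banks = {(0, 1): [memory_banks[0][r] ^ memory_banks[1][r] for r in range(2)],
                (2, 3): [memory_banks[2][r] ^ memory_banks[3][r] for r in range(2)]}
busy = {1}     # bank 1 is currently being serviced for a second master core

def read(bank, row):
    """Serve a read; if the target bank is stalled, rebuild the value from its coding bank."""
    if bank not in busy:
        return memory_banks[bank][row]
    pair = (bank - 1, bank) if bank % 2 else (bank, bank + 1)
    partner = pair[0] if pair[1] == bank else pair[1]
    return coding_banks[pair][row] ^ memory_banks[partner][row]

assert read(1, 0) == 9    # reconstructed from coding bank (0, 1) and bank 0, without waiting on bank 1
```
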
  • Publication number: 20170031829
    Abstract: Systems and techniques for advance cache allocation are described. A described technique includes selecting a job from a plurality of jobs; selecting a processor core from a plurality of processor cores to execute the selected job; receiving a message which describes future memory accesses that will be generated by the selected job; generating a memory burst request based on the message; performing the memory burst request to load data from a memory to at least a dedicated portion of a cache, the cache corresponding to the selected processor core; and starting the selected job on the selected processor core. The technique can include performing an action indicated by a send message to write one or more values from another dedicated portion of the cache to the memory.
    Type: Application
    Filed: July 28, 2015
    Publication date: February 2, 2017
    Inventors: Sushma Wokhlu, Lee McFearin, Alan Gatherer, Ashish Shrivastava, Peter Yifey Yan
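
An illustrative model of the advance cache allocation in publication 20170031829: the access-hint message drives a burst fill of a dedicated per-core cache portion before the job starts. The hint format, burst shape, and job callback are assumptions.

```python
class AdvanceCacheAllocator:
    """Sketch: prefetch a job's announced working set into a dedicated cache slice
    before the job starts on its chosen core."""
    def __init__(self, memory):
        self.memory = memory
        self.caches = {core: {} for core in range(4)}   # per-core cache, dedicated portion

    def launch(self, job, core, access_hint):
        # access_hint plays the role of the message describing future memory accesses.
        burst = range(access_hint["base"], access_hint["base"] + access_hint["length"])
        for addr in burst:                                 # memory burst request
            self.caches[core][addr] = self.memory[addr]    # fill the dedicated portion
        return job(self.caches[core])                      # start the job on the selected core

allocator = AdvanceCacheAllocator(memory=[v * 3 for v in range(256)])
total = allocator.launch(job=lambda cache: sum(cache.values()),
                         core=1, access_hint={"base": 16, "length": 4})
print(total)    # 3*(16+17+18+19) = 210
```
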
  • Publication number: 20170031689
    Abstract: A system and method for variable lane architecture includes memory blocks located in a memory bank, one or more computing nodes forming a vector instruction pipeline for executing a task, each of the computing nodes located in the memory bank, each of the computing nodes executing a portion of the task independently of other ones of the computing nodes, and a global program controller unit (GPCU) forming a scalar instruction pipeline for executing the task, the GPCU configured to schedule instructions for the task at one or more of the computing nodes, the GPCU further configured to dispatch an address for the memory blocks used by each of the computing nodes to the computing nodes.
    Type: Application
    Filed: July 26, 2016
    Publication date: February 2, 2017
    Inventors: Sushma Wokhlu, Alan Gatherer, Ashish Rai Shrivastava
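
A sketch of the scalar/vector split in publication 20170031689: a GPCU stand-in dispatches one instruction plus per-lane memory-block addresses, and each compute node processes its block independently. Lane count, block size, and the instruction-as-callable convention are illustrative.

```python
class GlobalProgramControllerUnit:
    """Sketch of the scalar side: dispatch the same instruction stream plus per-node
    memory-block addresses to a variable number of compute nodes (lanes)."""
    def __init__(self, compute_nodes):
        self.compute_nodes = compute_nodes

    def run_task(self, instruction, memory, block_size):
        results = {}
        for lane, node in enumerate(self.compute_nodes):
            base = lane * block_size                  # address dispatched to this node
            block = memory[base:base + block_size]
            results[node] = instruction(block)        # each node works independently
        return results

nodes = ["node0", "node1", "node2"]
gpcu = GlobalProgramControllerUnit(nodes)
print(gpcu.run_task(instruction=sum, memory=list(range(12)), block_size=4))
# {'node0': 6, 'node1': 22, 'node2': 38}
```
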
  • Publication number: 20170031606
    Abstract: Systems and techniques for dynamic coding of memory regions are described. A described technique includes monitoring accesses to a group of memory regions, each region including two or more portions of a group of data banks; detecting a high-access memory region based on whether accesses to a region of the group of memory regions exceeds a threshold; generating coding values of a coding region corresponding to the high-access memory region, the high-access memory region including data values distributed across the group of banks; and storing the coding values of the coding region in a coding bank.
    Type: Application
    Filed: July 28, 2015
    Publication date: February 2, 2017
    Inventors: Hao Luan, Alan Gatherer, Sriram Vishwanath, Casen Hunger, Hardik Jain
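
The monitor-then-code loop of publication 20170031606, sketched with access counters and XOR parity: once a region's access count crosses a threshold, a coded copy is generated and kept in a coding bank. The threshold and parity construction are assumptions.

```python
from collections import Counter

class DynamicRegionCoder:
    """Sketch: count accesses per memory region and, once a region crosses a threshold,
    materialize XOR parity for it in a coding bank."""
    def __init__(self, banks, threshold=3):
        self.banks = banks                    # region -> values spread across data banks
        self.threshold = threshold
        self.access_counts = Counter()
        self.coding_bank = {}

    def access(self, region):
        self.access_counts[region] += 1
        if self.access_counts[region] >= self.threshold and region not in self.coding_bank:
            parity = 0
            for value in self.banks[region]:  # data for the region is distributed across banks
                parity ^= value
            self.coding_bank[region] = parity # coded copy for the high-access region
        return self.banks[region]

coder = DynamicRegionCoder(banks={"regionA": [3, 5, 9], "regionB": [2, 2, 2]})
for _ in range(3):
    coder.access("regionA")
print(coder.coding_bank)    # {'regionA': 15}
```
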
  • Publication number: 20170031762
    Abstract: An intelligent code apparatus, method, and computer program are provided for use with memory. In operation, a subset of data stored in a first memory is identified. Such subset of the data stored in the first memory is processed, to generate a code. The code is then stored in a second memory, for use in reconstructing at least a portion of the data.
    Type: Application
    Filed: July 28, 2015
    Publication date: February 2, 2017
    Inventors: Hao Luan, Alan Gatherer, Sriram Vishwanath, Casen Hunger, Hardik Jain
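
A small reconstruction example in the spirit of publication 20170031762: a code (here, XOR) is generated over a subset of first-memory data, stored in a second memory, and later used to rebuild one element of the subset. The XOR code is an illustrative stand-in for whatever code the application actually uses.

```python
# First memory holds the data; a subset of it is coded and the code kept in a second memory.
first_memory = {"a": 11, "b": 6, "c": 12, "d": 2}
subset = ("a", "b", "c")                               # identified subset
second_memory = {"code(a,b,c)": first_memory["a"] ^ first_memory["b"] ^ first_memory["c"]}

def reconstruct(missing_key):
    """Rebuild one element of the coded subset from the code and the remaining elements."""
    value = second_memory["code(a,b,c)"]
    for key in subset:
        if key != missing_key:
            value ^= first_memory[key]
    return value

assert reconstruct("b") == first_memory["b"]   # "b" recovered from the code plus "a" and "c"
```
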
  • Publication number: 20170017412
    Abstract: A controller for a shared memory is disclosed. The controller comprises a transaction scanner configured to scan-in a plurality of transactions to access the shared memory and to divide the transactions into beat-level memory access commands. The controller also comprises a command super-arbiter comprising a plurality of command arbiters corresponding to a plurality of shared memory blocks in the shared memory. The command super-arbiter is configured to access a quality of service for each of the transactions, arbitrate the beat-level memory access commands associated with the transactions based on the quality of service for each of the plurality of transactions, and dispatch the beat-level memory access commands to the shared memory blocks based on results of arbitrating the beat-level memory access commands.
    Type: Application
    Filed: July 13, 2015
    Publication date: January 19, 2017
    Inventors: Hao Luan, Alan Gatherer, Bin Yang
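
A sketch of beat-level arbitration with QoS, loosely following publication 20170017412: transactions are split into beats, each beat is routed to a per-block arbiter, and each arbiter orders its beats by QoS. Beat size, block mapping, and the lower-is-more-urgent QoS convention are assumptions.

```python
import heapq

def divide_into_beats(transaction, beat_bytes=16):
    """Transaction scanner: split one transaction into beat-level access commands."""
    beats = []
    for offset in range(0, transaction["length"], beat_bytes):
        address = transaction["address"] + offset
        block = address // 1024                    # which shared memory block this beat targets
        beats.append((transaction["qos"], block, address))
    return beats

def arbitrate_and_dispatch(transactions):
    """Command super-arbiter: one priority queue per memory block, ordered by QoS
    (a lower QoS value is treated as more urgent in this sketch)."""
    arbiters = {}
    for txn in transactions:
        for qos, block, address in divide_into_beats(txn):
            heapq.heappush(arbiters.setdefault(block, []), (qos, address))
    return {block: [heapq.heappop(q)[1] for _ in range(len(q))]
            for block, q in arbiters.items()}

txns = [{"address": 0,    "length": 32, "qos": 2},
        {"address": 1024, "length": 16, "qos": 1},
        {"address": 16,   "length": 16, "qos": 0}]
print(arbitrate_and_dispatch(txns))
# {0: [16, 0, 16], 1: [1024]}
```
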
  • Publication number: 20170017394
    Abstract: A data warehouse includes a memory and a controller disposed on a substrate that is associated with a System on Chip (SoC). The controller is operatively coupled to the memory. The controller is configured to receive data from a first intellectual property (IP) block executing on the SoC; store the data in the memory on the substrate; and in response to a trigger condition, output at least a portion of the stored data to the SoC for use by a second IP block. An organization scheme for the stored data in the memory is abstracted with respect to the first and second IP blocks.
    Type: Application
    Filed: July 15, 2015
    Publication date: January 19, 2017
    Inventors: Yan Wang, Alan Gatherer
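
A toy version of the on-chip data warehouse in publication 20170017394: producer and consumer IP blocks see only store and drain operations, while the internal layout and the watermark trigger are hidden; both are invented details for illustration.

```python
class OnChipDataWarehouse:
    """Sketch: IP blocks push and receive opaque records; how the warehouse lays them
    out in its memory is hidden from both producer and consumer."""
    def __init__(self, watermark=3):
        self._memory = {}                 # internal organization, abstracted from IP blocks
        self._sequence = 0
        self.watermark = watermark        # trigger condition: enough records buffered

    def store(self, record):
        self._memory[self._sequence] = record
        self._sequence += 1
        if len(self._memory) >= self.watermark:
            return self._drain()          # trigger fires: hand data to the consumer IP block
        return []

    def _drain(self):
        return [self._memory.pop(k) for k in sorted(self._memory)]

warehouse = OnChipDataWarehouse()
for sample in ("s0", "s1", "s2"):
    out = warehouse.store(sample)        # producer IP block side
print(out)                                # ['s0', 's1', 's2'] delivered to the consumer IP block
```
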
  • Patent number: 9448617
    Abstract: System and method embodiments are provided for messaging-based System-on-a-chip (SoC) power gating. The embodiments enable fine granularity SoC power gating without introducing significant latency and substantially maximize SoC power reduction. In an embodiment, a method in a first SoC resource for messaging-based power gating includes receiving at the first SoC resource a wakeup notification message (WNM) from a second SoC resource, wherein the WNM comprises a time at which a result message from the second SoC resource is expected to arrive at the first SoC resource; determining with the first SoC resource a wake-up time according to the time at which the result message from the second SoC resource is expected to arrive at the first SoC resource; setting a wake-up time timer to expire at the wake-up time; and waking up the first SoC resource when the wake-up time timer expires, if the first SoC resource is asleep.
    Type: Grant
    Filed: March 11, 2014
    Date of Patent: September 20, 2016
    Assignee: Futurewei Technologies, Inc.
    Inventors: Mark Brown, Mehran Bagheri, Peter Yan, Alan Gatherer
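
A discrete-time sketch of the messaging-based wake-up in patent 9448617: the wakeup notification carries the expected arrival time of the result message, and the resource arms a timer so it is awake just before that time. The fixed wake-up latency is an assumed parameter.

```python
WAKE_UP_LATENCY = 3    # cycles the resource needs to come out of power gating (assumed)

class SocResource:
    def __init__(self):
        self.asleep = True
        self.wake_timer = None

    def receive_wakeup_notification(self, expected_result_arrival):
        """Schedule the wake-up so the resource is ready just before the result arrives."""
        self.wake_timer = expected_result_arrival - WAKE_UP_LATENCY

    def tick(self, now):
        if self.asleep and self.wake_timer is not None and now >= self.wake_timer:
            self.asleep = False            # timer expired: power the resource back up
            print(f"cycle {now}: resource awake, ready for the result message")

resource = SocResource()
resource.receive_wakeup_notification(expected_result_arrival=10)
for cycle in range(12):
    resource.tick(cycle)                   # wakes at cycle 7, before the result at cycle 10
```
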
  • Patent number: 9335934
    Abstract: Disclosed herein are a shared memory controller and a method of controlling a shared memory. An embodiment method of controlling a shared memory includes concurrently scanning-in a plurality of read/write commands for respective transactions. Each of the plurality of read/write commands includes respective addresses and respective priorities. Additionally, each of the respective transactions is divisible into at least one beat and at least one of the respective transactions is divisible into multiple beats. The method also includes dividing the plurality of read/write commands into respective beat-level read/write commands and concurrently arbitrating the respective beat-level read/write commands according to the respective addresses and the respective priorities. Concurrently arbitrating yields respective sequences of beat-level read/write commands corresponding to the respective addresses.
    Type: Grant
    Filed: April 29, 2014
    Date of Patent: May 10, 2016
    Assignee: Futurewei Technologies, Inc.
    Inventors: Hao Luan, Alan Gatherer, Yan Bei, Jun Ying
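
The scan-in, divide, and per-address arbitration of patent 9335934 can be sketched as below; the four-bank address mapping, beat size, and priority convention are illustrative assumptions rather than the patented arbitration.

```python
def scan_in(transactions, beat_bytes=8):
    """Divide each read/write command's transaction into beats tagged with an address bank
    and the command's priority (lower number = more urgent in this sketch)."""
    beats = []
    for txn in transactions:
        for i in range(txn["length"] // beat_bytes):
            beats.append({"bank": (txn["address"] // beat_bytes + i) % 4,
                          "priority": txn["priority"],
                          "txn": txn["id"], "beat": i})
    return beats

def arbitrate(beats):
    """Per-bank arbitration: each bank gets its own sequence of beat-level commands,
    ordered by priority, with beats of one transaction kept in order."""
    sequences = {bank: [] for bank in range(4)}
    for beat in sorted(beats, key=lambda b: (b["priority"], b["beat"])):
        sequences[beat["bank"]].append((beat["txn"], beat["beat"]))
    return sequences

txns = [{"id": "T0", "address": 0, "length": 16, "priority": 1},
        {"id": "T1", "address": 8, "length": 8,  "priority": 0}]
print(arbitrate(scan_in(txns)))
# {0: [('T0', 0)], 1: [('T1', 0), ('T0', 1)], 2: [], 3: []}
```
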
  • Publication number: 20160116971
    Abstract: Function resources/memory resources and an associated resource controller configured to assign a first portion of the function resources/memory resources to at least one processing element in response to an access request from the processing element. The resource controller changes a power mode of the first portion of the function resources/memory resources as a function of the first portion assignment, and leaves an unassigned portion of the function resources/memory resources in a power down mode in a self-governing manner. The resource controller enables the processing element to access the first portion of the function resources/memory resources in response to the access request received from the processing element. The function resources/memory resources, resource controllers and one or more processing elements may comprise a system on a chip (SoC).
    Type: Application
    Filed: January 21, 2015
    Publication date: April 28, 2016
    Inventors: Hao Luan, Alan Gatherer
  • Publication number: 20160103707
    Abstract: A method includes receiving, by a system on a chip (SoC) from a logically centralized controller, configuration information and reading, from a semantics aware storage module of the SoC, a data block in accordance with the configuration information. The method also includes performing scheduling to produce a schedule in accordance with the configuration information and writing the data block to an input data queue in accordance with the schedule to produce a stored data block. Additionally, the method includes writing a tag to an input tag queue to produce a stored tag, where the tag corresponds to the data block.
    Type: Application
    Filed: October 7, 2015
    Publication date: April 14, 2016
    Inventors: Debashis Bhattacharya, Alan Gatherer, Ashish Rai Shrivastava, Mark Brown, Zhenguo Gu, Qiang Wang, Alex Elisa Chandra
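
A minimal sketch of the configure/read/schedule/enqueue flow in publication 20160103707, with a dictionary standing in for the semantics aware storage and a round-robin slot standing in for the scheduler; the configuration fields and tag format are assumptions.

```python
from collections import deque

class SemanticsAwareSoC:
    """Sketch: configuration from a central controller selects which block to read,
    and a scheduling decision determines when it is queued, paired with a tag."""
    def __init__(self, storage):
        self.storage = storage                 # semantics aware storage stand-in
        self.input_data_queue = deque()
        self.input_tag_queue = deque()

    def configure_and_run(self, configuration):
        block = self.storage[configuration["block_name"]]           # read per configuration
        schedule_slot = len(self.input_data_queue) % 4               # toy round-robin schedule
        self.input_data_queue.append((schedule_slot, block))         # stored data block
        self.input_tag_queue.append((schedule_slot, configuration["tag"]))  # matching stored tag
        return schedule_slot

soc = SemanticsAwareSoC(storage={"chan0_iq": [1, 2, 3, 4]})
slot = soc.configure_and_run({"block_name": "chan0_iq", "tag": "UL-slot-7"})
print(slot, list(soc.input_data_queue), list(soc.input_tag_queue))
```
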