Patents by Inventor Alan Gatherer

Alan Gatherer has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20180300258
    Abstract: A method of operating a cache memory comprises receiving a first read or write command including at least a first address referring to first data and a first rank indicator associated with the first data, and in response to receiving the first read or write command, reading or writing the first data referenced by the first address, and storing the first rank indicator.
    Type: Application
    Filed: April 13, 2017
    Publication date: October 18, 2018
    Applicant: Futurewei Technologies, Inc.
    Inventors: Sushma Wokhlu, Alex Elisa Chandra, Alan Gatherer
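    Illustrative sketch (not part of the patent record): a minimal Python model of a cache that keeps a rank indicator next to each line; the class name, the eviction helper, and the idea of using the rank to pick a victim are invented here for illustration, since the abstract only states that the rank is stored.
      # Cache keyed by address; each line carries (data, rank indicator).
      class RankedCache:
          def __init__(self, capacity):
              self.capacity = capacity
              self.lines = {}  # address -> (data, rank)

          def access(self, address, data=None, rank=0):
              """Handle a read (data is None) or a write carrying a rank indicator."""
              if data is not None:                        # write: store data and its rank
                  if address not in self.lines and len(self.lines) >= self.capacity:
                      self._evict_lowest_rank()
                  self.lines[address] = (data, rank)
                  return data
              stored = self.lines.get(address)            # read: return cached data if present
              return stored[0] if stored else None

          def _evict_lowest_rank(self):
              # One plausible use of the stored rank: victimise the lowest-ranked line.
              victim = min(self.lines, key=lambda a: self.lines[a][1])
              del self.lines[victim]

      cache = RankedCache(capacity=2)
      cache.access(0x100, data="A", rank=3)
      cache.access(0x104, data="B", rank=1)
      cache.access(0x108, data="C", rank=2)   # evicts 0x104, the lowest-ranked line
      print(cache.access(0x100))              # -> A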
  • Publication number: 20180285290
    Abstract: A distributed and shared memory controller (DSMC) comprises at least one building block comprising a plurality of switches distributed into a plurality of stages; a plurality of master ports coupled to a first stage of the switches; and a plurality of bank controllers with associated memory banks coupled to a last stage of the switches; wherein each of the switches connects to lower stage switches via internal connections, each of the switches of the first stage connects to at least one of the master ports via master connections and each of the switches of the last stage connects to at least one of the bank controllers via memory connections; wherein each of the switches of the first stage connects to second stage switches of a neighboring building block via outward connections and each of the switches of a second stage connects to first stage switches of the neighboring building block via inward connections.
    Type: Application
    Filed: March 30, 2018
    Publication date: October 4, 2018
    Applicant: Futurewei Technologies, Inc.
    Inventors: Hao Luan, Alan Gatherer, Xi Chen, Fang Yu, Yichuan Yu, Bin Yang, Wei Chen
  • Patent number: 10042773
    Abstract: Systems and techniques for advance cache allocation are described. A described technique includes selecting a job from a plurality of jobs; selecting a processor core from a plurality of processor cores to execute the selected job; receiving a message which describes future memory accesses that will be generated by the selected job; generating a memory burst request based on the message; performing the memory burst request to load data from a memory to at least a dedicated portion of a cache, the cache corresponding to the selected processor core; and starting the selected job on the selected processor core. The technique can include performing an action indicated by a send message to write one or more values from another dedicated portion of the cache to the memory.
    Type: Grant
    Filed: July 28, 2015
    Date of Patent: August 7, 2018
    Assignee: FUTUREWEI TECHNOLOGIES, INC.
    Inventors: Sushma Wokhlu, Lee McFearin, Alan Gatherer, Ashish Shrivastava, Peter Yifey Yan
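    Illustrative sketch (not part of the patent record): a toy Python dispatcher that preloads the addresses listed in a job's access-description message into a dedicated slice of the chosen core's cache before starting the job; the memory contents, core-selection rule, and all names are invented for illustration.
      MEMORY = {addr: addr * 2 for addr in range(0, 64, 4)}   # stand-in main memory

      class Core:
          def __init__(self, core_id, dedicated_lines=8):
              self.core_id = core_id
              self.dedicated = {}                  # dedicated (preload) portion of the cache
              self.dedicated_lines = dedicated_lines

          def burst_load(self, addresses):
              # Model a single memory burst that fills the dedicated cache portion.
              for addr in addresses[: self.dedicated_lines]:
                  self.dedicated[addr] = MEMORY[addr]

          def run(self, job):
              # The job now hits in the dedicated portion instead of going to memory.
              return [self.dedicated.get(a, MEMORY[a]) for a in job["accesses"]]

      def dispatch(job, cores):
          core = min(cores, key=lambda c: len(c.dedicated))   # pick a lightly loaded core
          core.burst_load(job["accesses"])                    # prefetch the described accesses
          return core.run(job)                                # then start the job

      cores = [Core(0), Core(1)]
      job = {"name": "fir", "accesses": [0, 4, 8, 12]}        # "message" of future accesses
      print(dispatch(job, cores))                             # -> [0, 8, 16, 24]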
  • Patent number: 9983995
    Abstract: A cache and a method for operating a cache are disclosed. In an embodiment, the cache includes a cache controller, data cache and a delay write through cache (DWTC), wherein the data cache is separate and distinct from the DWTC, wherein cacheable write accesses are split into shareable cacheable write accesses and non-shareable cacheable write accesses, wherein the cacheable shareable write accesses are allocated only to the DWTC, and wherein the non-shareable cacheable write accesses are not allocated to the DWTC.
    Type: Grant
    Filed: April 18, 2016
    Date of Patent: May 29, 2018
    Assignee: Futurewei Technologies, Inc.
    Inventors: Sushma Wokhlu, Alan Gatherer, Ashish Rai Shrivastava
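    Illustrative sketch (not part of the patent record): a few lines of Python showing the write split described above, with shareable cacheable writes allocated only to a delay write-through cache (DWTC) structure and non-shareable writes kept in the ordinary data cache; the flush policy and names are invented.
      class SplitWriteCache:
          def __init__(self):
              self.data_cache = {}   # non-shareable cacheable writes
              self.dwtc = {}         # shareable cacheable writes, written through later

          def write(self, address, value, shareable):
              if shareable:
                  self.dwtc[address] = value        # allocate only in the DWTC
              else:
                  self.data_cache[address] = value  # never allocated to the DWTC

          def flush_dwtc(self, memory):
              # Delayed write-through: push the shareable writes out to memory.
              memory.update(self.dwtc)
              self.dwtc.clear()

      memory = {}
      cache = SplitWriteCache()
      cache.write(0x10, 1, shareable=True)
      cache.write(0x20, 2, shareable=False)
      cache.flush_dwtc(memory)
      print(memory)   # {16: 1} -- only the shareable write has been written through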
  • Publication number: 20180091260
    Abstract: A method includes determining an error vector magnitude for analog signals received by multiple antennas in an array of antennas of a base station, assigning quantization bits to a plurality of analog-to-digital converters (ADCs) of the base station such that some ADCs have different numbers of quantization bits allocated from a fixed total number of available quantization bits of the base station, and applying the analog signals to the ADCs with quantization bits assigned to reduce the error vector magnitude of the analog signals.
    Type: Application
    Filed: September 29, 2017
    Publication date: March 29, 2018
    Inventors: Alan Gatherer, Jinseok Choi
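    Illustrative sketch (not part of the patent record): one simple greedy heuristic in Python for splitting a fixed bit budget across per-antenna ADCs, giving each bit to the converter whose modelled quantization distortion drops the most; the noise model and the example signal powers are assumptions, not the allocation rule claimed in the application.
      def greedy_bit_allocation(signal_power, total_bits):
          bits = [0] * len(signal_power)

          def distortion(i, b):
              # Uniform-quantizer noise model: roughly power / 4**bits (about 6 dB per bit).
              return signal_power[i] / (4 ** b)

          for _ in range(total_bits):
              # Assign the next bit where it removes the most distortion.
              gains = [distortion(i, bits[i]) - distortion(i, bits[i] + 1)
                       for i in range(len(bits))]
              best = max(range(len(bits)), key=lambda i: gains[i])
              bits[best] += 1
          return bits

      powers = [1.0, 0.5, 0.1, 0.05]                         # per-antenna signal power
      print(greedy_bit_allocation(powers, total_bits=12))    # -> [4, 4, 2, 2]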
  • Patent number: 9921754
    Abstract: Systems and techniques for dynamic coding of memory regions are described. A described technique includes monitoring accesses to a group of memory regions, each region including two or more portions of a group of data banks; detecting a high-access memory region based on whether accesses to a region of the group of memory regions exceeds a threshold; generating coding values of a coding region corresponding to the high-access memory region, the high-access memory region including data values distributed across the group of banks; and storing the coding values of the coding region in a coding bank.
    Type: Grant
    Filed: July 28, 2015
    Date of Patent: March 20, 2018
    Assignee: Futurewei Technologies, Inc.
    Inventors: Hao Luan, Alan Gatherer, Sriram Vishwanath, Casen Hunger, Hardik Jain
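    Illustrative sketch (not part of the patent record): a short Python loop that counts accesses per memory region and, once a region crosses a threshold, generates coding values over that region's per-bank data and stores them in a coding bank; XOR parity stands in for whatever code is actually used, and the region/bank layout is invented.
      from collections import Counter

      THRESHOLD = 100
      access_count = Counter()

      banks = {                          # region -> data words spread across the banks
          "region_a": [0x11, 0x22, 0x33, 0x44],
          "region_b": [0x55, 0x66, 0x77, 0x88],
      }
      coding_bank = {}

      def record_access(region):
          access_count[region] += 1
          if access_count[region] > THRESHOLD and region not in coding_bank:
              # Region became "hot": generate its coding values and store them.
              parity = 0
              for word in banks[region]:
                  parity ^= word
              coding_bank[region] = parity

      for _ in range(101):
          record_access("region_a")
      print(coding_bank)                 # {'region_a': 68}, i.e. 0x11 ^ 0x22 ^ 0x33 ^ 0x44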
  • Publication number: 20170371570
    Abstract: It is possible to reduce the latency attributable to memory protection in shared memory systems by performing access protection at a central Data Ownership Manager (DOM), rather than at distributed memory management units in the central processing unit (CPU) elements (CEs) responsible for parallel thread processing. In particular, the DOM may monitor read requests communicated over a data plane between the CEs and a memory controller, and perform access protection verification in parallel with the memory controller's generation of the data response. The DOM may be separate and distinct from both the CEs and the memory controller, and therefore may generally be able to make the access determination without interfering with data plane processing/generation of the read requests and data responses exchanged between the memory controller and the CEs.
    Type: Application
    Filed: June 24, 2016
    Publication date: December 28, 2017
    Inventors: Sushma Wokhlu, Lee Dobson Mcfearin, Alan Gatherer, Hao Luan
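    Illustrative sketch (not part of the patent record): a small Python model of the point that the ownership check can overlap the memory fetch, running the two in parallel and releasing the data only if the check passes; the ownership table, thread-pool mechanics, and names are illustrative, not the DOM's actual design.
      from concurrent.futures import ThreadPoolExecutor

      MEMORY = {0x1000: "payload"}
      OWNERSHIP_TABLE = {0x1000: {"ce0"}}      # address -> CEs allowed to read it

      def memory_controller_fetch(address):
          return MEMORY.get(address)           # data-plane work

      def dom_check(ce_id, address):
          return ce_id in OWNERSHIP_TABLE.get(address, set())   # control-plane work

      def read_request(ce_id, address):
          with ThreadPoolExecutor(max_workers=2) as pool:
              data_future = pool.submit(memory_controller_fetch, address)
              allowed_future = pool.submit(dom_check, ce_id, address)
              data, allowed = data_future.result(), allowed_future.result()
          return data if allowed else None     # drop the response on a protection miss

      print(read_request("ce0", 0x1000))   # payload
      print(read_request("ce1", 0x1000))   # None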
  • Publication number: 20170300414
    Abstract: A cache and a method for operating a cache are disclosed. In an embodiment, the cache includes a cache controller, data cache and a delay write through cache (DWTC), wherein the data cache is separate and distinct from the DWTC, wherein cacheable write accesses are split into shareable cacheable write accesses and non-shareable cacheable write accesses, wherein the cacheable shareable write accesses are allocated only to the DWTC, and wherein the non-shareable cacheable write accesses are not allocated to the DWTC.
    Type: Application
    Filed: April 18, 2016
    Publication date: October 19, 2017
    Inventors: Sushma Wokhlu, Alan Gatherer, Ashish Rai Shrivastava
  • Publication number: 20170293586
    Abstract: Disclosed is a method for operating an interposer that includes assigning a binary port weight to a plurality of input ports of the interposer. The sum of all of the port weights is less than or equal to a number of traversals available to the interposer in a cycle. A traversal counter is set to zero at the beginning of each cycle. The output of the traversal counter is a binary number of m bits. A mask is generated when bit k of the traversal counter transitions from a zero to a one. The mask is generated having the m−k+1 bit of the mask equal to one and all other bits of the mask equal to zero. Data is transmitted from each port when both the binary port weight and the mask have a one in the same bit position.
    Type: Application
    Filed: April 12, 2016
    Publication date: October 12, 2017
    Inventors: Peter Yan, Alex Elisa Chandra, Lee Dobson McFearin, Fang Yu, Alan Gatherer
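    Illustrative sketch (not part of the patent record): the counter-and-mask schedule from the abstract in Python, with bit positions taken as 1-indexed from the least significant bit, a reading the abstract does not spell out; over a 16-traversal cycle each port ends up transmitting exactly its weight's worth of times, spread across the cycle.
      M = 4                                    # counter width in bits

      def transitioned_bit(old, new):
          # 1-indexed position of the counter bit that changed 0 -> 1, or 0 if none did.
          changed = new & ~old
          return changed.bit_length() if changed else 0

      def schedule(port_weights, traversals):
          counter = 0
          grants = {port: 0 for port in port_weights}
          for _ in range(traversals):
              new = (counter + 1) % (1 << M)
              k = transitioned_bit(counter, new)
              counter = new
              if k == 0:
                  continue
              mask = 1 << (M - k)              # bit m-k+1 of the mask is set
              for port, weight in port_weights.items():
                  if weight & mask:            # weight and mask share a one -> port transmits
                      grants[port] += 1
          return grants

      # Port weights sum to 15 <= 16 traversals available in the cycle.
      print(schedule({"p0": 0b1000, "p1": 0b0100, "p2": 0b0011}, traversals=16))
      # -> {'p0': 8, 'p1': 4, 'p2': 3}: each port transmits weight-many times per cycle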
  • Publication number: 20170293587
    Abstract: A described embodiment of the present invention includes a network having a first, second and third plurality of routers connected to a plurality of endpoints. At least one of the first plurality of routers includes a plurality of interposers having a number of queues. The at least one of the first plurality of routers has a demultiplexer for each interposer configured to receive multiplexed data from the interposer and provide demultiplexed data onto a plurality of second queues corresponding to the first queues of the number of queues. The at least one of the first plurality of routers also includes a number of multiplexers, each of the number of multiplexers having inputs configured to receive data from the number of queues.
    Type: Application
    Filed: April 12, 2016
    Publication date: October 12, 2017
    Inventors: Peter Yan, Alex Elisa Chandra, YwhPyng Harn, Xiaotao Chen, Alan Gatherer, Fang Yu, Xingfeng Chen, Zhuolei Wang, Yang Zhou
  • Publication number: 20170293512
    Abstract: Methods and apparatus for inter-process communication are provided. A circuit may have a plurality of clusters, and at least one cluster may have a computation element (CE), a memory operatively coupled with the CE, and an autonomic transport system (ATS) block operatively coupled with the CE and the memory. The ATS block may be configured to perform inter-process communication (IPC) for the at least one cluster. In one embodiment, the ATS block may transfer a message to a different cluster based on a request from the CE. In another embodiment, the ATS block may receive a message by allocating a buffer in the memory and write the message into the buffer. The ATS block may also be configured to manage synchronization and schedule tasks for the CE.
    Type: Application
    Filed: April 12, 2016
    Publication date: October 12, 2017
    Inventors: Peter Yan, Alan Gatherer, Alex Elisa Chandra, Lee Dobson Mcfearin, Mark Brown, Debashis Bhattacharya, Fang Yu, Xingfeng Chen, Yan Bei, Ke Ning, Chushun Huang, Tong Sun, Xiaotao Chen
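    Illustrative sketch (not part of the patent record): a minimal Python stand-in for the per-cluster transport block, where a send request hands a message to the destination cluster's block, which allocates a buffer in its own memory and writes the message into it; class and method names are invented, and synchronization and task scheduling are left out.
      class AutonomicTransport:
          def __init__(self, cluster_id, memory_size=16):
              self.cluster_id = cluster_id
              self.memory = [None] * memory_size   # cluster-local memory
              self.inbox = []                      # indices of buffers holding messages

          def send(self, dest, message):
              # Handle a CE request to move a message to a different cluster.
              dest.receive(message)

          def receive(self, message):
              # Allocate a free buffer in local memory and write the message into it.
              index = self.memory.index(None)
              self.memory[index] = message
              self.inbox.append(index)

          def deliver_next(self):
              # Hand the oldest pending message to the local CE and free its buffer.
              index = self.inbox.pop(0)
              message, self.memory[index] = self.memory[index], None
              return message

      a = AutonomicTransport("cluster0")
      b = AutonomicTransport("cluster1")
      a.send(b, {"task": "fft", "args": [1, 2, 3]})
      print(b.deliver_next())                      # {'task': 'fft', 'args': [1, 2, 3]}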
  • Patent number: 9760432
    Abstract: An intelligent code apparatus, method, and computer program are provided for use with memory. In operation, a subset of data stored in a first memory is identified. Such subset of the data stored in the first memory is processed, to generate a code. The code is then stored in a second memory, for use in reconstructing at least a portion of the data.
    Type: Grant
    Filed: July 28, 2015
    Date of Patent: September 12, 2017
    Assignee: Futurewei Technologies, Inc.
    Inventors: Hao Luan, Alan Gatherer, Sriram Vishwanath, Casen Hunger, Hardik Jain
  • Patent number: 9740513
    Abstract: A system includes a plurality of compute modules and a first processor configured to implement a virtualization layer, where the virtualization layer is configured to support real time jobs. The system also includes a hardware support layer coupled between the plurality of compute modules and the virtualization layer, where the hardware support layer is configured to provide an interface between the virtualization layer and the plurality of compute modules.
    Type: Grant
    Filed: June 4, 2015
    Date of Patent: August 22, 2017
    Assignee: FUTUREWEI TECHNOLOGIES, INC.
    Inventors: Alan Gatherer, Debashis Bhattacharya, Anthony C. K. Soong
  • Publication number: 20170206173
    Abstract: The present disclosure relates to a system and method of managing operation of a cache memory. The system and method assign each nested task a level, and each task within a nested level an instance. Using the assigned task levels and instances, the cache management module is able to determine which cache entries to evict from cache when space is needed, and which evicted cache entries to recover upon completion of preempting tasks.
    Type: Application
    Filed: January 15, 2016
    Publication date: July 20, 2017
    Inventors: Lee McFearin, Sushma Wokhlu, Alan Gatherer
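    Illustrative sketch (not part of the patent record): a Python cache model whose lines are tagged with the (level, instance) of the task that filled them; the policy shown, evicting lines owned by shallower (preempted) tasks first and restoring them once the preempting level finishes, is one plausible reading of the abstract rather than its exact rule.
      class NestedTaskCache:
          def __init__(self, capacity):
              self.capacity = capacity
              self.lines = {}        # address -> (level, instance, data)
              self.saved = []        # evicted lines kept for later recovery

          def insert(self, address, level, instance, data):
              if len(self.lines) >= self.capacity:
                  # Prefer a victim owned by a task at a shallower nesting level.
                  victim = min(self.lines, key=lambda a: self.lines[a][0])
                  self.saved.append((victim, self.lines.pop(victim)))
              self.lines[address] = (level, instance, data)

          def recover(self, finished_level):
              # When the preempting level completes, drop its lines and restore displaced ones.
              self.lines = {a: l for a, l in self.lines.items() if l[0] != finished_level}
              while self.saved and len(self.lines) < self.capacity:
                  address, line = self.saved.pop(0)
                  self.lines[address] = line

      cache = NestedTaskCache(capacity=2)
      cache.insert(0x0, level=1, instance=0, data="outer")
      cache.insert(0x4, level=2, instance=0, data="nested")
      cache.insert(0x8, level=2, instance=1, data="nested")   # evicts the level-1 line
      cache.recover(finished_level=2)
      print(cache.lines)   # the outer task's line is back once the nested tasks finish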
  • Publication number: 20170169034
    Abstract: A data warehouse engine (DWE) includes a central processing unit (CPU) core and a first data organization unit (DOU), where the first DOU is configured to aggregate read operations. The DWE also includes a first command queue coupled between the CPU core and the first DOU, where the first command queue is configured to convey commands from the CPU core to the first DOU.
    Type: Application
    Filed: November 30, 2016
    Publication date: June 15, 2017
    Inventors: Ashish Rai Shrivastava, Alex Elisa Chandra, Mark Brown, Debashis Bhattacharya, Alan Gatherer
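    Illustrative sketch (not part of the patent record): the CPU-core-to-DOU path in a few lines of Python, with a command queue carrying read commands and the DOU draining it and coalescing contiguous reads into wider accesses; the merge rule and all names are invented to make "aggregate read operations" concrete.
      from collections import deque

      command_queue = deque()            # conveys commands from the CPU core to the DOU

      def cpu_issue_read(address, length):
          command_queue.append(("read", address, length))

      def dou_aggregate():
          # Drain the queue and coalesce reads whose address ranges are contiguous.
          merged = []
          for op, addr, length in sorted(command_queue, key=lambda c: c[1]):
              if merged and merged[-1][0] + merged[-1][1] == addr:
                  prev_addr, prev_len = merged[-1]
                  merged[-1] = (prev_addr, prev_len + length)    # extend the previous read
              else:
                  merged.append((addr, length))
          command_queue.clear()
          return merged

      cpu_issue_read(0x100, 0x40)
      cpu_issue_read(0x140, 0x40)        # contiguous with the previous read
      cpu_issue_read(0x400, 0x20)
      print(dou_aggregate())             # [(256, 128), (1024, 32)]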
  • Publication number: 20170168792
    Abstract: A method includes obtaining, by a first processor, a first software architecture description file and obtaining, by the first processor, a platform independent model file. The method also includes obtaining, by the first processor, a platform architecture definition file and performing, by the first processor, a first source-to-source compilation in accordance with the first software architecture description file, the platform independent model file, and the platform architecture definition file, to produce generated interface code. Additionally, the method includes generating, by the first processor, run time code, in accordance with the generated interface code and running, by a second processor in real time, the run time code.
    Type: Application
    Filed: December 14, 2016
    Publication date: June 15, 2017
    Inventors: Debashis Bhattacharya, Alan Gatherer, Mark Brown, Lee Dobson McFearin, Alex Elisa Chandra, Ashish Rai Shrivastava
  • Publication number: 20170163698
    Abstract: A data streaming unit (DSU) and a method for operating a DSU are disclosed. In an embodiment the DSU includes a memory interface configured to be connected to a storage unit, a compute engine interface configured to be connected to a compute engine (CE) and an address generator configured to manage address data representing address locations in the storage unit. The data streaming unit further includes a data organization unit configured to access data in the storage unit and to reorganize the data to be forwarded to the compute engine, wherein the memory interface is communicatively connected to the address generator and the data organization unit, wherein the address generator is communicatively connected to the data organization unit, and wherein the data organization unit is communicatively connected to the compute engine interface.
    Type: Application
    Filed: December 3, 2015
    Publication date: June 8, 2017
    Inventors: Ashish Rai Shrivastava, Alan Gatherer, Sushma Wokhlu
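    Illustrative sketch (not part of the patent record): a Python version of the address-generator / data-organization split, where the generator walks the storage unit with a stride and the organization unit gathers those words and reorders them (here, a simple 2-D transpose) before they reach the compute engine; the stride and transpose are stand-ins for whatever reorganization the DSU actually performs.
      STORAGE = list(range(32))          # stand-in storage unit

      def address_generator(base, stride, count):
          # Produce the sequence of storage addresses the DSU will touch.
          return [base + i * stride for i in range(count)]

      def data_organization_unit(addresses, rows, cols):
          # Gather the addressed words and hand them to the CE in column-major order.
          gathered = [STORAGE[a] for a in addresses]
          return [gathered[r * cols + c] for c in range(cols) for r in range(rows)]

      addresses = address_generator(base=0, stride=2, count=8)
      print(data_organization_unit(addresses, rows=2, cols=4))
      # -> [0, 8, 2, 10, 4, 12, 6, 14]: strided reads, reordered before reaching the CE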
  • Publication number: 20170153824
    Abstract: A method, system, and architecture for efficiently accessing data in a memory shared by multiple processor cores, which reduce the probability of bank conflicts and decrease latency, are provided. In an embodiment, a method for accessing data in a memory includes determining, by a scheduler, a read pattern for reading data from memory to serve requests in a plurality of bank queues, the memory comprising a plurality of memory banks and a plurality of coding banks, the coding banks storing a coded version of at least some of the data stored in the plurality of memory banks; reading first data from a first memory bank; reading coded data from one of the coding banks; and determining second data according to the coded data and the first data.
    Type: Application
    Filed: December 1, 2015
    Publication date: June 1, 2017
    Inventors: Hao Luan, Alan Gatherer, Sriram Vishwanath, Casen Hunger, Hardik Jain
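    Illustrative sketch (not part of the patent record): two reads that collide on the same bank served in one pass, with the first value read normally and the second recovered by combining a coding-bank entry with it; XOR is used as the example code and the scheduler is reduced to a single conflicting pair.
      memory_banks = {0: {0x0: 7, 0x4: 9}}            # both requests hit bank 0
      coding_bank = {(0, 0x0, 0x4): 7 ^ 9}            # coded version of the two words

      def serve_conflicting_reads(addr_first, addr_second):
          first = memory_banks[0][addr_first]                 # read from the memory bank
          coded = coding_bank[(0, addr_first, addr_second)]   # read from the coding bank
          second = coded ^ first                              # determine the second data
          return first, second

      print(serve_conflicting_reads(0x0, 0x4))        # (7, 9) without a second bank access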
  • Publication number: 20170139740
    Abstract: An embodiment method includes receiving, by an intellectual property (IP) block within a computing system, a transaction request and determining, by the IP block, a context corresponding to the transaction request. The method further includes determining, by the IP block, a view of the computing system defined by the context and processing, by the IP block, the transaction request in accordance with the view of the computing system defined by the context.
    Type: Application
    Filed: November 12, 2015
    Publication date: May 18, 2017
    Applicant: FUTUREWEI TECHNOLOGIES, INC.
    Inventors: Lee Dobson McFearin, Alan Gatherer, Yan Bei
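    Illustrative sketch (not part of the patent record): a Python stand-in for context-dependent views, where the block derives a context from the transaction request, looks up the system view that context defines (here just a set of visible address windows plus an offset), and processes the request against that view; the view contents and context names are invented.
      VIEWS = {
          "secure":   {"windows": [(0x0000, 0x0FFF)], "offset": 0x0000},
          "guest_os": {"windows": [(0x1000, 0x1FFF)], "offset": 0x8000},
      }

      def determine_context(request):
          return request["context_id"]       # e.g. carried alongside the transaction

      def process_transaction(request):
          view = VIEWS[determine_context(request)]        # the view this context defines
          addr = request["address"]
          if not any(lo <= addr <= hi for lo, hi in view["windows"]):
              return "rejected: address not visible in this context"
          return "access to " + hex(addr + view["offset"])

      print(process_transaction({"context_id": "guest_os", "address": 0x1200}))  # 0x9200
      print(process_transaction({"context_id": "secure",   "address": 0x1200}))  # rejected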
  • Publication number: 20170103076
    Abstract: A computation system-on-a-chip (CSoC) includes a first scalable distributed real-time Data Warehousing (sdrDW) engine and a network interface coupled to the first sdrDW engine, where the network interface is coupled to an interconnect, and where the CSoC is configured to transmit a task request over the interconnect to a first networked bulk storage controller (NBSC) requesting that a task be performed on a bulk storage medium.
    Type: Application
    Filed: September 13, 2016
    Publication date: April 13, 2017
    Inventors: Debashis Bhattacharya, Alan Gatherer, Alex Elisa Chandra, Mark Brown, Hao Luan, Ashish Rai Shrivastava