Patents by Inventor Alan Gatherer

Alan Gatherer has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 10884756
    Abstract: A system and method for variable lane architecture includes memory blocks located in a memory bank, one or more computing nodes forming a vector instruction pipeline for executing a task, each of the computing nodes located in the memory bank, each of the computing nodes executing a portion of the task independently of other ones of the computing nodes, and a global program controller unit (GPCU) forming a scalar instruction pipeline for executing the task, the GPCU configured to schedule instructions for the task at one or more of the computing nodes, the GPCU further configured to dispatch an address for the memory blocks used by each of the computing nodes to the computing nodes.
    Type: Grant
    Filed: May 18, 2020
    Date of Patent: January 5, 2021
    Assignee: Futurewei Technologies, Inc.
    Inventors: Sushma Wokhlu, Alan Gatherer, Ashish Rai Shrivastava
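As a rough illustration of the split this abstract describes between a scalar front end and independent vector lanes, the short Python sketch below has a GPCU schedule one instruction stream across several computing nodes and hand each node the address of its own memory block. All class and method names (GPCU, ComputeNode, run_task) are invented for the example and are not taken from the patent.

```python
# Hypothetical sketch of the GPCU / compute-node split described above.

class ComputeNode:
    """One lane of the vector pipeline; works on its slice independently."""
    def __init__(self, node_id):
        self.node_id = node_id

    def execute(self, instructions, base_address):
        # Each node runs the same instruction stream against its own memory block.
        return [f"node{self.node_id}: {op} @ {hex(base_address)}" for op in instructions]

class GPCU:
    """Scalar front end: schedules instructions and dispatches block addresses."""
    def __init__(self, nodes, block_size):
        self.nodes = nodes
        self.block_size = block_size

    def run_task(self, instructions, task_base):
        results = []
        for i, node in enumerate(self.nodes):
            # Dispatch the address of the memory block this node will use.
            block_addr = task_base + i * self.block_size
            results.extend(node.execute(instructions, block_addr))
        return results

nodes = [ComputeNode(i) for i in range(4)]
gpcu = GPCU(nodes, block_size=0x1000)
for line in gpcu.run_task(["load", "mac", "store"], task_base=0x8000_0000):
    print(line)
```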
  • Patent number: 10817528
    Abstract: A data warehouse engine (DWE) includes a central processing unit (CPU) core and a first data organization unit (DOU), where the first DOU is configured to aggregate read operations. The DWE also includes a first command queue coupled between the CPU core and the first DOU, where the first command queue is configured to convey commands from the CPU core to the first DOU.
    Type: Grant
    Filed: November 30, 2016
    Date of Patent: October 27, 2020
    Assignee: Futurewei Technologies, Inc.
    Inventors: Ashish Rai Shrivastava, Alex Elisa Chandra, Mark Brown, Debashis Bhattacharya, Alan Gatherer
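A hedged sketch of the read-aggregation path in the abstract above: a CPU core pushes read commands into a command queue, and a data organization unit drains the queue and coalesces touching address ranges. The coalescing rule and every name here are assumptions made for illustration, not details from the patent.

```python
# Illustrative CPU-core -> DOU command queue with simple read coalescing.
from collections import deque

command_queue = deque()           # CPU core -> DOU

def cpu_issue_read(addr, length):
    command_queue.append(("read", addr, length))

def dou_aggregate():
    """Drain the queue and merge reads whose ranges touch or overlap."""
    reads = sorted((a, l) for op, a, l in command_queue if op == "read")
    command_queue.clear()
    merged = []
    for addr, length in reads:
        if merged and addr <= merged[-1][0] + merged[-1][1]:
            prev_addr, prev_len = merged[-1]
            merged[-1] = (prev_addr, max(prev_len, addr + length - prev_addr))
        else:
            merged.append((addr, length))
    return merged

cpu_issue_read(0x100, 16)
cpu_issue_read(0x110, 16)   # contiguous with the first read
cpu_issue_read(0x400, 8)
print(dou_aggregate())      # [(256, 32), (1024, 8)]
```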
  • Patent number: 10783160
    Abstract: A computation system-on-a-chip (CSoC) includes a first scalable distributed real-time Data Warehousing (sdrDW) engine and a network interface coupled to the first sdrDW engine, where the network interface is coupled to an interconnect, and where the CSoC is configured to transmit a task request over the interconnect to a first networked bulk storage controller (NBSC) requesting that a task be performed on a bulk storage medium.
    Type: Grant
    Filed: September 13, 2016
    Date of Patent: September 22, 2020
    Assignee: Futurewei Technologies, Inc.
    Inventors: Debashis Bhattacharya, Alan Gatherer, Alex Elisa Chandra, Mark Brown, Hao Luan, Ashish Rai Shrivastava
  • Patent number: 10769080
    Abstract: A distributed and shared memory controller (DSMC) comprises at least one building block comprising a plurality of switches distributed into a plurality of stages; a plurality of master ports coupled to a first stage of the switches; and a plurality of bank controllers with associated memory banks coupled to a last stage of the switches; wherein each of the switches connects to lower stage switches via internal connections, each of the switches of the first stage connects to at least one of the master ports via master connections, and each of the switches of the last stage connects to at least one of the bank controllers via memory connections; wherein each of the switches of the first stage connects to second stage switches of a neighboring building block via outward connections and each of the switches of a second stage connects to first stage switches of the neighboring building block via inward connections.
    Type: Grant
    Filed: March 30, 2018
    Date of Patent: September 8, 2020
    Assignee: Futurewei Technologies, Inc.
    Inventors: Hao Luan, Alan Gatherer, Xi Chen, Fang Yu, Yichuan Yu, Bin Yang, Wei Chen
  • Publication number: 20200278869
    Abstract: A system and method for variable lane architecture includes memory blocks located in a memory bank, one or more computing nodes forming a vector instruction pipeline for executing a task, each of the computing nodes located in the memory bank, each of the computing nodes executing a portion of the task independently of other ones of the computing nodes, and a global program controller unit (GPCU) forming a scalar instruction pipeline for executing the task, the GPCU configured to schedule instructions for the task at one or more of the computing nodes, the GPCU further configured to dispatch an address for the memory blocks used by each of the computing nodes to the computing nodes.
    Type: Application
    Filed: May 18, 2020
    Publication date: September 3, 2020
    Inventors: Sushma Wokhlu, Alan Gatherer, Ashish Rai Shrivastava
  • Patent number: 10691463
    Abstract: A system and method for variable lane architecture includes memory blocks located in a memory bank, one or more computing nodes forming a vector instruction pipeline for executing a task, each of the computing nodes located in the memory bank, each of the computing nodes executing a portion of the task independently of other ones of the computing nodes, and a global program controller unit (GPCU) forming a scalar instruction pipeline for executing the task, the GPCU configured to schedule instructions for the task at one or more of the computing nodes, the GPCU further configured to dispatch an address for the memory blocks used by each of the computing nodes to the computing nodes.
    Type: Grant
    Filed: July 26, 2016
    Date of Patent: June 23, 2020
    Assignee: Futurewei Technologies, Inc.
    Inventors: Sushma Wokhlu, Alan Gatherer, Ashish Rai Shrivastava
  • Publication number: 20200050376
    Abstract: It is possible to reduce the latency attributable to memory protection in shared memory systems by performing access protection at a central Data Ownership Manager (DOM), rather than at distributed memory management units in the central processing unit (CPU) elements (CEs) responsible for parallel thread processing. In particular, the DOM may monitor read requests communicated over a data plane between the CEs and a memory controller, and perform access protection verification in parallel with the memory controller's generation of the data response. The DOM may be separate and distinct from both the CEs and the memory controller, and therefore may generally be able to make the access determination without interfering with data plane processing/generation of the read requests and data responses exchanged between the memory controller and the CEs.
    Type: Application
    Filed: October 21, 2019
    Publication date: February 13, 2020
    Inventors: Sushma Wokhlu, Lee Dobson McFearin, Alan Gatherer, Hao Luan
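To make the "check in parallel with the data fetch" idea above concrete, the sketch below launches the memory read and the ownership check together and only returns the data if the check passes. The DataOwnershipManager/MemoryController classes and the owner-table lookup are illustrative assumptions, not the patented design.

```python
# Illustrative model: ownership checking runs alongside the data lookup,
# and the response is gated on the result.
from concurrent.futures import ThreadPoolExecutor

class DataOwnershipManager:
    def __init__(self, owner_table):
        self.owner_table = owner_table  # address -> CE id allowed to read it

    def allowed(self, ce_id, address):
        return self.owner_table.get(address) == ce_id

class MemoryController:
    def __init__(self, memory):
        self.memory = memory

    def read(self, address):
        return self.memory.get(address, 0)

def serve_read(ce_id, address, dom, mc):
    # Kick off the data fetch and the protection check in parallel.
    with ThreadPoolExecutor(max_workers=2) as pool:
        data_f = pool.submit(mc.read, address)
        perm_f = pool.submit(dom.allowed, ce_id, address)
        data, permitted = data_f.result(), perm_f.result()
    return data if permitted else None  # deny the response on a violation

dom = DataOwnershipManager({0x100: "CE0"})
mc = MemoryController({0x100: 42})
print(serve_read("CE0", 0x100, dom, mc))  # 42
print(serve_read("CE1", 0x100, dom, mc))  # None (access denied)
```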
  • Patent number: 10554335
    Abstract: A method includes determining an error vector magnitude for analog signals received by multiple antennas in an array of antennas of a base station, assigning quantization bits to a plurality of analog-to-digital converters (ADCs) of the base station such that some ADCs have different numbers of quantization bits allocated from a fixed total number of available quantization bits of the base station, and applying the analog signals to the ADCs with quantization bits assigned to reduce the error vector magnitude of the analog signals.
    Type: Grant
    Filed: September 29, 2017
    Date of Patent: February 4, 2020
    Assignee: Futurewei Technologies, Inc.
    Inventors: Alan Gatherer, Jinseok Choi
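The following sketch shows one plausible way to spread a fixed budget of quantization bits unevenly across ADCs, greedily giving each extra bit to the converter currently contributing the most quantization noise. The noise model (power * 2**(-2*bits)) and the greedy rule are assumptions for illustration; the abstract does not commit to this particular algorithm.

```python
# Hedged sketch: allocate a fixed total of quantization bits across ADCs so
# that the overall error contribution drops.

def allocate_bits(signal_powers, total_bits, min_bits=1):
    n = len(signal_powers)
    bits = [min_bits] * n
    remaining = total_bits - min_bits * n

    def noise(i):
        # Assumed quantization-noise contribution of ADC i.
        return signal_powers[i] * 2.0 ** (-2 * bits[i])

    for _ in range(remaining):
        # Give the next bit to the ADC currently contributing the most error.
        worst = max(range(n), key=noise)
        bits[worst] += 1
    return bits

powers = [1.0, 0.25, 4.0, 0.5]               # per-antenna received signal power
print(allocate_bits(powers, total_bits=16))  # stronger signals end up with more bits
```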
  • Patent number: 10496622
    Abstract: A method includes receiving, by a real-time Data Warehouse (rDW) from a first task, a first dataset and spreading the first dataset to produce a first plurality of objects, where the first plurality of objects includes a first object and a second object. The method also includes storing the first object in a first location in an rDW data repository and storing the second object in a second location in the rDW data repository.
    Type: Grant
    Filed: May 31, 2016
    Date of Patent: December 3, 2019
    Assignee: Futurewei Technologies, Inc.
    Inventors: Alex Elisa Chandra, Mark Brown, Debashis Bhattacharya, Alan Gatherer
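A minimal sketch of "spreading" one incoming dataset into several objects and placing them at different locations in a repository, as the abstract above describes. The fixed-size chunking and round-robin placement are assumptions chosen only to make the example concrete.

```python
# Illustrative spreading of a dataset into objects stored at distinct locations.

def spread(dataset, chunk_size=4):
    """Split one incoming dataset into a list of smaller objects."""
    return [dataset[i:i + chunk_size] for i in range(0, len(dataset), chunk_size)]

class RdwRepository:
    def __init__(self, num_locations):
        self.locations = [dict() for _ in range(num_locations)]

    def store(self, task_id, objects):
        for i, obj in enumerate(objects):
            # Place successive objects at different locations (round robin here).
            self.locations[i % len(self.locations)][(task_id, i)] = obj

repo = RdwRepository(num_locations=2)
repo.store("task1", spread(b"ABCDEFGHIJ"))
print(repo.locations[0], repo.locations[1])
```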
  • Patent number: 10452287
    Abstract: It is possible to reduce the latency attributable to memory protection in shared memory systems by performing access protection at a central Data Ownership Manager (DOM), rather than at distributed memory management units in the central processing unit (CPU) elements (CEs) responsible for parallel thread processing. In particular, the DOM may monitor read requests communicated over a data plane between the CEs and a memory controller, and perform access protection verification in parallel with the memory controller's generation of the data response. The DOM may be separate and distinct from both the CEs and the memory controller, and therefore may generally be able to make the access determination without interfering with data plane processing/generation of the read requests and data responses exchanged between the memory controller and the CEs.
    Type: Grant
    Filed: June 24, 2016
    Date of Patent: October 22, 2019
    Assignee: Futurewei Technologies, Inc.
    Inventors: Sushma Wokhlu, Lee Dobson McFearin, Alan Gatherer, Hao Luan
  • Patent number: 10437480
    Abstract: A method, system, and architecture for efficiently accessing data in a memory shared by multiple processor cores that reduces the probability of bank conflicts and decreases latency is provided. In an embodiment, a method for accessing data in a memory includes determining, by a scheduler, a read pattern for reading data from memory to serve requests in a plurality of bank queues, the memory comprising a plurality of memory banks and a plurality of coding banks, the coding banks storing a coded version of at least some of the data stored in the plurality of memory banks; reading first data from a first memory bank; reading coded data from one of the coding banks; and determining second data according to the coded data and the first data.
    Type: Grant
    Filed: December 1, 2015
    Date of Patent: October 8, 2019
    Assignee: Futurewei Technologies, Inc.
    Inventors: Hao Luan, Alan Gatherer, Sriram Vishwanath, Casen Hunger, Hardik Jain
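The read pattern described above can be illustrated with a simple XOR parity code: if a coding bank holds the XOR of two memory banks, data held in a busy bank can be reconstructed from the other bank plus the coding bank. The XOR scheme is only one possible code and is an assumption of this sketch.

```python
# Minimal sketch, assuming the coding bank holds an XOR parity of two memory banks.

bank_a = {0: 0b1010}                      # data at row 0 of bank A
bank_b = {0: 0b0110}                      # data at row 0 of bank B (busy, say)
coding_ab = {0: bank_a[0] ^ bank_b[0]}    # parity bank built ahead of time

def read_b_while_busy(row):
    # Serve the request for bank B without touching it:
    # XOR the parity with the bank-A value to reconstruct B's data.
    first_data = bank_a[row]
    coded = coding_ab[row]
    return coded ^ first_data

assert read_b_while_busy(0) == bank_b[0]
print(bin(read_b_while_busy(0)))          # 0b110
```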
  • Patent number: 10419501
    Abstract: A data streaming unit (DSU) and a method for operating a DSU are disclosed. In an embodiment the DSU includes a memory interface configured to be connected to a storage unit, a compute engine interface configured to be connected to a compute engine (CE) and an address generator configured to manage address data representing address locations in the storage unit. The data streaming unit further includes a data organization unit configured to access data in the storage unit and to reorganize the data to be forwarded to the compute engine, wherein the memory interface is communicatively connected to the address generator and the data organization unit, wherein the address generator is communicatively connected to the data organization unit, and wherein the data organization unit is communicatively connected to the compute engine interface.
    Type: Grant
    Filed: December 3, 2015
    Date of Patent: September 17, 2019
    Assignee: Futurewei Technologies, Inc.
    Inventors: Ashish Rai Shrivastava, Alan Gatherer, Sushma Wokhlu
  • Patent number: 10387355
    Abstract: Disclosed is a method for operating an interposer that includes assigning a binary port weight to a plurality of input ports of the interposer. The sum of all of the port weights is less than or equal to a number of traversals available to the interposer in a cycle. A traversal counter is set to zero at the beginning of each cycle. The output of the traversal counter is a binary number of m bits. A mask is generated when the kth bit of the traversal counter transitions from a zero to a one. The mask is generated having the m−k+1 bit of the mask equal to one and all other bits of the mask equal to zero. Data is transmitted from each port when both the binary port weight and the mask have a one in the same bit position.
    Type: Grant
    Filed: April 12, 2016
    Date of Patent: August 20, 2019
    Assignee: Futurewei Technologies, Inc.
    Inventors: Peter Yan, Alex Elisa Chandra, Lee Dobson McFearin, Fang Yu, Alan Gatherer
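A small model of the mask-based scheduling in this abstract: each 0-to-1 transition of a counter bit produces a one-hot mask, and a port transmits whenever its binary weight shares a set bit with that mask, which gives each port a number of grants per cycle equal to its weight. The bit indexing (LSB is bit 1) and the dictionary of port weights are assumptions of the sketch.

```python
# Illustrative weighted scheduling via counter-bit transitions and a one-hot mask.

M = 4  # counter width in bits; up to 2**M - 1 traversals per cycle

def mask_for_transition(prev, curr, m=M):
    """Return the mask produced when a counter bit flips 0 -> 1."""
    rising = curr & ~prev          # bits that transitioned from 0 to 1
    if rising == 0:
        return 0
    k = rising.bit_length()        # position (1-indexed) of the highest rising bit
    return 1 << (m - k)            # set bit m - k + 1 (1-indexed) of the mask

port_weights = {"port0": 0b1000, "port1": 0b0100, "port2": 0b0011}

counter = 0
for _ in range(2 ** M - 1):        # one scheduling cycle
    prev, counter = counter, counter + 1
    mask = mask_for_transition(prev, counter)
    granted = [p for p, w in port_weights.items() if w & mask]
    if granted:
        print(f"counter={counter:04b} mask={mask:04b} -> transmit {granted}")
```

Over the 15 traversals of the cycle, port0 (weight 8) is granted 8 times, port1 (weight 4) 4 times, and port2 (weight 3) 3 times, matching the assigned weights.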
  • Patent number: 10366013
    Abstract: The present disclosure relates to a system and method of managing operation of a cache memory. The system and method assign each nested task a level, and each task within a nested level an instance. Using the assigned task levels and instances, the cache management module is able to determine which cache entries to evict from cache when space is needed, and which evicted cache entries to recover upon completion of preempting tasks.
    Type: Grant
    Filed: January 15, 2016
    Date of Patent: July 30, 2019
    Assignee: Futurewei Technologies, Inc.
    Inventors: Lee McFearin, Sushma Wokhlu, Alan Gatherer
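A hedged sketch of tagging cache entries with the (level, instance) of their owning task and using that tag both to pick eviction victims and to recover entries once a preempting task completes. The victim-selection rule used here (evict entries that do not belong to the currently running task) is an assumption; the abstract does not spell out the policy.

```python
# Illustrative cache whose entries are tagged with a (level, instance) owner.

class Cache:
    def __init__(self, capacity):
        self.capacity = capacity
        self.entries = {}      # key -> (value, (level, instance) of owning task)
        self.evicted = {}      # evicted entries saved per owning task

    def put(self, key, value, owner):
        if len(self.entries) >= self.capacity:
            self._evict_one(current_owner=owner)
        self.entries[key] = (value, owner)

    def _evict_one(self, current_owner):
        # Prefer a victim owned by a task other than the one running now.
        victims = [k for k, (_, own) in self.entries.items() if own != current_owner]
        victim = victims[0] if victims else next(iter(self.entries))
        value, owner = self.entries.pop(victim)
        self.evicted.setdefault(owner, {})[victim] = value

    def retire_task(self, owner):
        # Preempting task finished: drop its working set ...
        self.entries = {k: v for k, v in self.entries.items() if v[1] != owner}

    def recover(self, owner):
        # ... and bring back the evicted entries of the preempted task.
        for key, value in self.evicted.pop(owner, {}).items():
            self.entries[key] = (value, owner)

cache = Cache(capacity=2)
cache.put("a", 1, owner=(0, 0))   # outer task: level 0, instance 0
cache.put("b", 2, owner=(0, 0))
cache.put("c", 3, owner=(1, 0))   # nested task preempts and evicts "a"
cache.retire_task((1, 0))         # nested task completes
cache.recover((0, 0))             # recover the outer task's evicted entry
print(sorted(cache.entries))      # ['a', 'b']
```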
  • Patent number: 10353747
    Abstract: A controller for a shared memory is disclosed. The controller comprises a transaction scanner configured to scan-in a plurality of transactions to access the shared memory and to divide the transactions into beat-level memory access commands. The controller also comprises a command super-arbiter comprising a plurality of command arbiters corresponding to a plurality of shared memory blocks in the shared memory. The command super-arbiter is configured to access a quality of service for each of the transactions, arbitrate the beat-level memory access commands associated with the transactions based on the quality of service for each of the plurality of transactions, and dispatch the beat-level memory access commands to the shared memory blocks based on results of arbitrating the beat-level memory access commands.
    Type: Grant
    Filed: July 13, 2015
    Date of Patent: July 16, 2019
    Assignee: Futurewei Technologies, Inc.
    Inventors: Hao Luan, Alan Gatherer, Bin Yang
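As a rough illustration of the controller above, the sketch splits each transaction into beat-level commands, routes them to a per-bank arbiter, and lets each arbiter dispatch pending beats in QoS order. Treating QoS as a small integer priority and the beat-to-bank mapping used here are assumptions for the example.

```python
# Illustrative beat-level splitting and per-bank QoS arbitration.
import heapq
from collections import namedtuple

Beat = namedtuple("Beat", "qos seq txn_id bank addr")

def split_into_beats(txn_id, base, length, qos, beat_bytes=8, banks=4):
    """Scan a transaction in and emit one beat-level command per beat."""
    for i, off in enumerate(range(0, length, beat_bytes)):
        addr = base + off
        yield Beat(qos=qos, seq=i, txn_id=txn_id,
                   bank=(addr // beat_bytes) % banks, addr=addr)

# One command arbiter per shared memory block: a priority queue keyed on QoS.
arbiters = {b: [] for b in range(4)}
for beat in split_into_beats("txnA", base=0x00, length=32, qos=1):
    heapq.heappush(arbiters[beat.bank], beat)
for beat in split_into_beats("txnB", base=0x08, length=16, qos=0):  # higher priority
    heapq.heappush(arbiters[beat.bank], beat)

for bank, queue in arbiters.items():
    while queue:
        beat = heapq.heappop(queue)   # lowest QoS value wins this bank first
        print(f"bank {bank}: dispatch {beat.txn_id} beat {beat.seq} @ {hex(beat.addr)}")
```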
  • Patent number: 10296398
    Abstract: A controller for a shared memory is disclosed. The controller comprises a transaction scanner configured to scan-in a plurality of transactions to access the shared memory and to divide the transactions into beat-level memory access commands. The controller also comprises a command super-arbiter comprising a plurality of command arbiters corresponding to a plurality of shared memory blocks in the shared memory. The command super-arbiter is configured to access a quality of service for each of the transactions, arbitrate the beat-level memory access commands associated with the transactions based on the quality of service for each of the plurality of transactions, and dispatch the beat-level memory access commands to the shared memory blocks based on results of arbitrating the beat-level memory access commands.
    Type: Grant
    Filed: July 13, 2015
    Date of Patent: May 21, 2019
    Assignee: Futurewei Technologies, Inc.
    Inventors: Hao Luan, Alan Gatherer, Bin Yang
  • Patent number: 10289598
    Abstract: A described embodiment of the present invention includes a network having a first, second and third plurality of routers connected to a plurality of endpoints. At least one of the first plurality of routers includes a plurality of interposers having a number of queues. The at least one of the first plurality of routers has a demultiplexer for each interposer configured to receive multiplexed data from the interposer and provide demultiplexed data onto a plurality of second queues corresponding to the first queues of the number of queues. The at least one of the first plurality of routers also includes a number of multiplexers, each of the multiplexers having inputs configured to receive data from the number of queues.
    Type: Grant
    Filed: April 12, 2016
    Date of Patent: May 14, 2019
    Assignee: Futurewei Technologies, Inc.
    Inventors: Peter Yan, Alex Elisa Chandra, YwhPyng Harn, Xiaotao Chen, Alan Gatherer, Fang Yu, Xingfeng Chen, Zhuolei Wang, Yang Zhou
  • Patent number: 10185606
    Abstract: Methods and apparatus for inter-process communication are provided. A circuit may have a plurality of clusters, and at least one cluster may have a computation element (CE), a memory operatively coupled with the CE, and an autonomic transport system (ATS) block operatively coupled with the CE and the memory. The ATS block may be configured to perform inter-process communication (IPC) for the at least one cluster. In one embodiment, the ATS block may transfer a message to a different cluster based on a request from the CE. In another embodiment, the ATS block may receive a message by allocating a buffer in the memory and writing the message into the buffer. The ATS block may also be configured to manage synchronization and schedule tasks for the CE.
    Type: Grant
    Filed: April 12, 2016
    Date of Patent: January 22, 2019
    Assignee: Futurewei Technologies, Inc.
    Inventors: Peter Yan, Alan Gatherer, Alex Elisa Chandra, Lee Dobson McFearin, Mark Brown, Debashis Bhattacharya, Fang Yu, Xingfeng Chen, Yan Bei, Ke Ning, Chushun Huang, Tong Sun, Xiaotao Chen
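A hypothetical sketch of the IPC path in this abstract: on a send request from the local CE, one cluster's ATS hands the message to the destination cluster's ATS, which allocates a buffer in local memory and writes the message into it. All names and the in-memory representation are invented for illustration.

```python
# Illustrative per-cluster ATS send/receive path.

class Cluster:
    def __init__(self, name):
        self.name = name
        self.memory = {}        # local memory: buffer id -> message bytes
        self.next_buf = 0
        self.inbox = []         # buffer ids the CE has not consumed yet

    # --- receive side of the ATS block ---
    def ats_receive(self, message):
        buf_id = self.next_buf
        self.next_buf += 1
        self.memory[buf_id] = message   # allocate a buffer and write the message
        self.inbox.append(buf_id)       # let the CE find it later

    # --- send side of the ATS block, on request from the local CE ---
    def ats_send(self, dest_cluster, message):
        dest_cluster.ats_receive(message)

a, b = Cluster("A"), Cluster("B")
a.ats_send(b, b"task results")
print(b.inbox, b.memory[b.inbox[0]])
```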
  • Patent number: 10180803
    Abstract: A method includes receiving a first request, from a first master core, to access data in one of a plurality of memory banks. It is determined whether an access to the data is stalled by virtue of a second request, from a second master core, to access the data in the one of the plurality of memory banks, the second request currently being serviced. In response to a determination that the access to the requested data is stalled, the first request is serviced by accessing data in one of a plurality of coding banks, each coding bank smaller in size than each memory bank.
    Type: Grant
    Filed: July 28, 2015
    Date of Patent: January 15, 2019
    Assignee: Futurewei Technologies, Inc.
    Inventors: Hao Luan, Alan Gatherer, Sriram Vishwanath, Casen Hunger, Hardik Jain
  • Publication number: 20180321939
    Abstract: Technology for providing data to a processing unit is disclosed. A computer processor may be divided into a master processing unit and consumer processing units. The master processing unit at least partially decodes a machine instruction and determines whether data is needed to execute the machine instruction. The master processing unit sends a request to memory for the data. The request may indicate that the data is to be sent from the memory to a consumer processing unit. The data sent by the memory in response to the request may be stored in local read storage that is close to the consumer processing unit for fast access. The master processing unit may also provide the machine instruction to the consumer processing unit. The consumer processing unit may access the data from the local read storage and execute the machine instruction based on the accessed data.
    Type: Application
    Filed: May 4, 2017
    Publication date: November 8, 2018
    Applicant: Futurewei Technologies, Inc.
    Inventors: Alan Gatherer, Sushma Wokhlu, Peter Yan, Ywhpyng Harn, Ashish Rai Shrivastava, Tong Sun, Lee Dobson McFearin
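To close, here is a minimal model of the master/consumer split described in this last entry: the master partially decodes an instruction, asks memory to deliver any needed operand directly into the consumer's local read storage, and then forwards the instruction to the consumer, which executes it from that local copy. The instruction format and all class names are assumptions of the sketch.

```python
# Illustrative master/consumer processing-unit split with local read storage.

class Memory:
    def __init__(self, contents):
        self.contents = contents

    def request(self, address, consumer):
        # Deliver the data directly into the consumer's local read storage.
        consumer.local_read_storage[address] = self.contents[address]

class ConsumerUnit:
    def __init__(self):
        self.local_read_storage = {}   # small, close-by buffer for operands

    def execute(self, instr):
        op, address = instr
        value = self.local_read_storage[address]   # fast local access
        return value * 2 if op == "double" else value

class MasterUnit:
    def __init__(self, memory, consumer):
        self.memory, self.consumer = memory, consumer

    def issue(self, instr):
        op, address = instr            # partial decode: does it need memory data?
        if address is not None:
            self.memory.request(address, self.consumer)  # data goes to the consumer
        return self.consumer.execute(instr)              # forward the instruction

mem = Memory({0x10: 21})
consumer = ConsumerUnit()
master = MasterUnit(mem, consumer)
print(master.issue(("double", 0x10)))   # 42
```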