Patents by Inventor Alex Elisa Chandra

Alex Elisa Chandra has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 10817528
    Abstract: A data warehouse engine (DWE) includes a central processing unit (CPU) core and a first data organization unit (DOU), where the first DOU is configured to aggregate read operations. The DWE also includes a first command queue coupled between the CPU core and the first DOU, where the first command queue is configured to convey commands from the CPU core to the first DOU. (A minimal command-queue sketch appears after this listing.)
    Type: Grant
    Filed: November 30, 2016
    Date of Patent: October 27, 2020
    Assignee: Futurewei Technologies, Inc.
    Inventors: Ashish Rai Shrivastava, Alex Elisa Chandra, Mark Brown, Debashis Bhattacharya, Alan Gatherer
  • Patent number: 10783160
    Abstract: A computation system-on-a-chip (CSoC) includes a first scalable distributed real-time Data Warehousing (sdrDW) engine and a network interface coupled to the first sdrDW engine, where the network interface is coupled to an interconnect, and where the CSoC is configured to transmit a task request over the interconnect to a first networked bulk storage controller (NBSC) requesting that a task be performed on a bulk storage medium.
    Type: Grant
    Filed: September 13, 2016
    Date of Patent: September 22, 2020
    Assignee: Futurewei Technologies, Inc.
    Inventors: Debashis Bhattacharya, Alan Gatherer, Alex Elisa Chandra, Mark Brown, Hao Luan, Ashish Rai Shrivastava
  • Patent number: 10496622
    Abstract: A method includes receiving, by a real-time Data Warehouse (rDW) from a first task, a first dataset and spreading the first dataset to produce a first plurality of objects, where the first plurality of objects includes a first object and a second object. The method also includes storing the first object in a first location in an rDW data repository and storing the second object in a second location in the rDW data repository.
    Type: Grant
    Filed: May 31, 2016
    Date of Patent: December 3, 2019
    Assignee: Futurewei Technologies, Inc.
    Inventors: Alex Elisa Chandra, Mark Brown, Debashis Bhattacharya, Alan Gatherer
  • Patent number: 10387355
    Abstract: Disclosed is a method for operating an interposer that includes assigning a binary port weight to a plurality of input ports of the interposer. The sum of all of the port weights is less than or equal to the number of traversals available to the interposer in a cycle. A traversal counter is set to zero at the beginning of each cycle. The output of the traversal counter is a binary number of m bits. A mask is generated when bit k of the traversal counter transitions from a zero to a one. The mask is generated having the (m-k+1)th bit of the mask equal to one and all other bits of the mask equal to zero. Data is transmitted from each port when both the binary port weight and the mask have a one in the same bit position. (An illustrative simulation of this arbitration appears after this listing.)
    Type: Grant
    Filed: April 12, 2016
    Date of Patent: August 20, 2019
    Assignee: Futurewei Technologies, Inc.
    Inventors: Peter Yan, Alex Elisa Chandra, Lee Dobson McFearin, Fang Yu, Alan Gatherer
  • Patent number: 10289598
    Abstract: A described embodiment of the present invention includes a network having a first, second and third plurality of routers connected to a plurality of endpoints. At least one of the first plurality of routers includes a plurality of interposers having a number of queues. The at least one of the first plurality of routers has a demultiplexer for each interposer configured to receive multiplexed data from the interposer and provide demultiplexed data onto a plurality of second queues corresponding to the first queues of the number of queues. The at least one of the first plurality of routers also includes a number of multiplexers, each of the multiplexers having inputs configured to receive data from the number of queues.
    Type: Grant
    Filed: April 12, 2016
    Date of Patent: May 14, 2019
    Assignee: Futurewei Technologies, Inc.
    Inventors: Peter Yan, Alex Elisa Chandra, YwhPyng Harn, Xiaotao Chen, Alan Gatherer, Fang Yu, Xingfeng Chen, Zhuolei Wang, Yang Zhou
  • Patent number: 10185606
    Abstract: Methods and apparatus for inter-process communication are provided. A circuit may have a plurality of clusters, and at least one cluster may have a computation element (CE), a memory operatively coupled with the CE, and an autonomic transport system (ATS) block operatively coupled with the CE and the memory. The ATS block may be configured to perform inter-process communication (IPC) for the at least one cluster. In one embodiment, the ATS block may transfer a message to a different cluster based on a request from the CE. In another embodiment, the ATS block may receive a message by allocating a buffer in the memory and writing the message into the buffer. The ATS block may also be configured to manage synchronization and schedule tasks for the CE. (A minimal send/receive sketch appears after this listing.)
    Type: Grant
    Filed: April 12, 2016
    Date of Patent: January 22, 2019
    Assignee: Futurewei Technologies, Inc.
    Inventors: Peter Yan, Alan Gatherer, Alex Elisa Chandra, Lee Dobson Mcfearin, Mark Brown, Debashis Bhattacharya, Fang Yu, Xingfeng Chen, Yan Bei, Ke Ning, Chushun Huang, Tong Sun, Xiaotao Chen
  • Publication number: 20180300258
    Abstract: A method of operating a cache memory comprises receiving a first read or write command including at least a first address referring to first data and a first rank indicator associated with the first data, and in response to receiving the first read or write command, reading or writing the first data referenced by the first address, and storing the first rank indicator. (A minimal rank-aware cache sketch appears after this listing.)
    Type: Application
    Filed: April 13, 2017
    Publication date: October 18, 2018
    Applicant: Futurewei Technologies, Inc.
    Inventors: Sushma Wokhlu, Alex Elisa Chandra, Alan Gatherer
  • Publication number: 20170293587
    Abstract: A described embodiment of the present invention includes a network having a first, second and third plurality of routers connected to a plurality of endpoints. At least one of the first plurality of routers includes a plurality of interposers having a number of queues. The at least one of the first plurality of routers has a demultiplexer for each interposer configured to receive multiplexed data from the interposer and provide demultiplexed data onto a plurality of second queues corresponding to the first queues of the number of queues. The at least one of the first plurality of routers also includes a number of multiplexers, each of the multiplexers having inputs configured to receive data from the number of queues.
    Type: Application
    Filed: April 12, 2016
    Publication date: October 12, 2017
    Inventors: Peter Yan, Alex Elisa Chandra, YwhPyng Harn, Xiaotao Chen, Alan Gatherer, Fang Yu, Xingfeng Chen, Zhuolei Wang, Yang Zhou
  • Publication number: 20170293586
    Abstract: Disclosed is a method for operating an interposer that includes assigning a binary port weight to a plurality of input ports of the interposer. The sum of all of the port weights is less than or equal to the number of traversals available to the interposer in a cycle. A traversal counter is set to zero at the beginning of each cycle. The output of the traversal counter is a binary number of m bits. A mask is generated when bit k of the traversal counter transitions from a zero to a one. The mask is generated having the (m-k+1)th bit of the mask equal to one and all other bits of the mask equal to zero. Data is transmitted from each port when both the binary port weight and the mask have a one in the same bit position.
    Type: Application
    Filed: April 12, 2016
    Publication date: October 12, 2017
    Inventors: Peter Yan, Alex Elisa Chandra, Lee Dobson McFearin, Fang Yu, Alan Gatherer
  • Publication number: 20170293512
    Abstract: Methods and apparatus for inter-process communication are provided. A circuit may have a plurality of clusters, and at least one cluster may have a computation element (CE), a memory operatively coupled with the CE, and an autonomic transport system (ATS) block operatively coupled with the CE and the memory. The ATS block may be configured to perform inter-process communication (IPC) for the at least one cluster. In one embodiment, the ATS block may transfer a message to a different cluster based on a request from the CE. In another embodiment, the ATS block may receive a message by allocating a buffer in the memory and writing the message into the buffer. The ATS block may also be configured to manage synchronization and schedule tasks for the CE.
    Type: Application
    Filed: April 12, 2016
    Publication date: October 12, 2017
    Inventors: Peter Yan, Alan Gatherer, Alex Elisa Chandra, Lee Dobson Mcfearin, Mark Brown, Debashis Bhattacharya, Fang Yu, Xingfeng Chen, Yan Bei, Ke Ning, Chushun Huang, Tong Sun, Xiaotao Chen
  • Publication number: 20170169034
    Abstract: A data warehouse engine (DWE) includes a central processing unit (CPU) core and a first data organization unit (DOU), where the first DOU is configured to aggregate read operations. The DWE also includes a first command queue coupled between the CPU core and the first DOU, where the first command queue is configured to convey commands from the CPU core to the first DOU.
    Type: Application
    Filed: November 30, 2016
    Publication date: June 15, 2017
    Inventors: Ashish Rai Shrivastava, Alex Elisa Chandra, Mark Brown, Debashis Bhattacharya, Alan Gatherer
  • Publication number: 20170168792
    Abstract: A method includes obtaining, by a first processor, a first software architecture description file and obtaining, by the first processor, a platform independent model file. The method also includes obtaining, by the first processor, a platform architecture definition file and performing, by the first processor, a first source-to-source compilation in accordance with the first software architecture description file, the platform independent model file, and the platform architecture definition file, to produce generated interface code. Additionally, the method includes generating, by the first processor, run time code, in accordance with the generated interface code and running, by a second processor in real time, the run time code.
    Type: Application
    Filed: December 14, 2016
    Publication date: June 15, 2017
    Inventors: Debashis Bhattacharya, Alan Gatherer, Mark Brown, Lee Dobson McFearin, Alex Elisa Chandra, Ashish Rai Shrivastava
  • Publication number: 20170103076
    Abstract: A computation system-on-a-chip (CSoC) includes a first scalable distributed real-time Data Warehousing (sdrDW) engine and a network interface coupled to the first sdrDW engine, where the network interface is coupled to an interconnect, and where the CSoC is configured to transmit a task request over the interconnect to a first networked bulk storage controller (NBSC) requesting that a task be performed on a bulk storage medium.
    Type: Application
    Filed: September 13, 2016
    Publication date: April 13, 2017
    Inventors: Debashis Bhattacharya, Alan Gatherer, Alex Elisa Chandra, Mark Brown, Hao Luan, Ashish Rai Shrivastava
  • Publication number: 20170103093
    Abstract: A method includes receiving, by a real-time Data Warehouse (rDW) from a first task, a first dataset and spreading the first dataset to produce a first plurality of objects, where the first plurality of objects includes a first object and a second object. The method also includes storing the first object in a first location in an rDW data repository and storing the second object in a second location in the rDW data repository.
    Type: Application
    Filed: May 31, 2016
    Publication date: April 13, 2017
    Inventors: Alex Elisa Chandra, Mark Brown, Debashis Bhattacharya, Alan Gatherer
  • Publication number: 20160103707
    Abstract: A method includes receiving, by a system on a chip (SoC) from a logically centralized controller, configuration information and reading, from a semantics-aware storage module of the SoC, a data block in accordance with the configuration information. The method also includes performing scheduling to produce a schedule in accordance with the configuration information and writing the data block to an input data queue in accordance with the schedule to produce a stored data block. Additionally, the method includes writing a tag to an input tag queue to produce a stored tag, where the tag corresponds to the data block.
    Type: Application
    Filed: October 7, 2015
    Publication date: April 14, 2016
    Inventors: Debashis Bhattacharya, Alan Gatherer, Ashish Rai Shrivastava, Mark Brown, Zhenguo Gu, Qiang Wang, Alex Elisa Chandra
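
Illustrative sketches

The command-queue arrangement in patent 10817528 above (a CPU core conveying commands to a data organization unit through a dedicated queue) can be pictured as a small ring buffer. The C sketch below is only an illustration: the queue depth, the command fields, and the function names are assumptions, not details taken from the patent.

```c
/*
 * Minimal sketch of the command queue between the CPU core and the data
 * organization unit (DOU) of US 10817528 above.  The queue depth, command
 * fields, and function names are illustrative assumptions, not details
 * taken from the patent.
 */
#include <stdbool.h>
#include <stdint.h>

#define CMDQ_DEPTH 16

struct dou_cmd {
    uint64_t src_addr;      /* start of the region the DOU should gather */
    uint32_t num_elems;     /* number of elements to aggregate on read   */
    uint32_t elem_stride;   /* byte stride between consecutive elements  */
};

struct cmd_queue {
    struct dou_cmd slots[CMDQ_DEPTH];
    unsigned head;          /* next slot the DOU consumes                */
    unsigned tail;          /* next slot the CPU core fills              */
};

/* CPU-core side: convey a command to the DOU through the queue. */
static bool cmdq_push(struct cmd_queue *q, const struct dou_cmd *cmd)
{
    if ((q->tail + 1) % CMDQ_DEPTH == q->head)
        return false;                       /* queue full */
    q->slots[q->tail] = *cmd;
    q->tail = (q->tail + 1) % CMDQ_DEPTH;
    return true;
}

/* DOU side: pop the next command before performing the aggregated read. */
static bool cmdq_pop(struct cmd_queue *q, struct dou_cmd *out)
{
    if (q->head == q->tail)
        return false;                       /* queue empty */
    *out = q->slots[q->head];
    q->head = (q->head + 1) % CMDQ_DEPTH;
    return true;
}

int main(void)
{
    struct cmd_queue q = { 0 };
    struct dou_cmd cmd = { .src_addr = 0x80000000u, .num_elems = 64, .elem_stride = 16 };
    struct dou_cmd next;

    cmdq_push(&q, &cmd);            /* CPU core issues an aggregated-read command */
    while (cmdq_pop(&q, &next))     /* DOU drains the queue and services each one */
        ;
    return 0;
}
```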
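
The arbitration method of patent 10387355 above is concrete enough to simulate: every counter bit that flips from zero to one yields a mask, and any port whose binary weight shares a set bit with that mask transmits. In the sketch below, the 4-bit counter width, the three example weights, and the one-transmission-per-matching-mask simplification are assumptions made for illustration only.

```c
/*
 * Minimal simulation of the binary-port-weight arbitration of US 10387355
 * above.  The 4-bit counter width, the three example weights, and the
 * one-transmission-per-matching-mask simplification are assumptions made
 * for illustration only.
 */
#include <stdint.h>
#include <stdio.h>

#define M_BITS    4             /* width m of the traversal counter       */
#define NUM_PORTS 3

/* Binary port weights; their sum (15) does not exceed the 2^m = 16
 * traversals available to the interposer in one cycle. */
static const uint16_t port_weight[NUM_PORTS] = { 0x8, 0x4, 0x3 };

int main(void)
{
    uint16_t counter = 0;       /* traversal counter, reset each cycle     */

    for (unsigned t = 0; t < (1u << M_BITS); t++) {
        uint16_t next = counter + 1;
        uint16_t rising = next & (uint16_t)~counter;  /* bits going 0 -> 1 */
        counter = next;

        for (unsigned k = 1; k <= M_BITS; k++) {
            if (!(rising & (1u << (k - 1))))
                continue;
            /* Mask with only bit m-k+1 set, as in the abstract. */
            uint16_t mask = (uint16_t)(1u << (M_BITS - k));
            for (unsigned p = 0; p < NUM_PORTS; p++) {
                /* A port transmits when its weight and the mask share a
                 * one in the same bit position. */
                if (port_weight[p] & mask)
                    printf("traversal %2u: port %u transmits\n", t, p);
            }
        }
    }
    return 0;
}
```

Over the 16 traversals of one cycle the three ports transmit 8, 4, and 3 times, in proportion to their binary weights.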
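
For patent 10185606 above, the two ATS behaviors named in the abstract (transferring a message to another cluster on a request from the computation element, and receiving a message by allocating a buffer in memory and writing the message into it) can be sketched as a pair of functions. The structure layouts and the use of malloc to stand in for cluster-memory allocation are assumptions; synchronization and task scheduling are omitted.

```c
/*
 * Minimal sketch of the autonomic transport system (ATS) message path of
 * US 10185606 above.  The structure layouts and the use of malloc to stand
 * in for buffer allocation in cluster memory are illustrative assumptions;
 * synchronization and task scheduling are omitted.
 */
#include <stdlib.h>
#include <string.h>

struct ats_msg {
    size_t        len;
    unsigned char payload[64];
};

struct cluster {
    int             id;
    struct ats_msg *inbox;      /* buffer allocated by the receiving ATS */
};

/* Receive side: the ATS allocates a buffer in the cluster's memory and
 * writes the incoming message into it. */
static int ats_receive(struct cluster *dst, const struct ats_msg *msg)
{
    struct ats_msg *buf = malloc(sizeof(*buf));
    if (buf == NULL)
        return -1;
    memcpy(buf, msg, sizeof(*buf));
    dst->inbox = buf;
    return 0;
}

/* Send side: on a request from the computation element (CE), the ATS
 * transfers the message to a different cluster.  The interconnect hop is
 * collapsed into a direct call in this sketch. */
static int ats_send(struct cluster *dst, const struct ats_msg *msg)
{
    return ats_receive(dst, msg);
}

int main(void)
{
    struct cluster remote = { .id = 1, .inbox = NULL };
    struct ats_msg msg = { .len = 5, .payload = "hello" };

    ats_send(&remote, &msg);    /* the local CE requests a transfer */
    free(remote.inbox);
    return 0;
}
```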
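
Publication 20180300258 above describes a read or write command that carries a rank indicator along with the address, with the rank stored whichever way the data moves. A minimal direct-mapped sketch under assumed field names follows; the replacement policy that would presumably consume the stored rank is left out.

```c
/*
 * Minimal sketch of the rank-aware cache access of publication 20180300258
 * above.  The direct-mapped organization and field names are illustrative
 * assumptions; the policy that would later consume the stored rank is
 * omitted.
 */
#include <stdbool.h>
#include <stdint.h>

#define NUM_LINES 256

struct cache_line {
    bool     valid;
    uint32_t tag;
    uint32_t data;
    uint8_t  rank;      /* rank indicator stored alongside the cached data */
};

static struct cache_line cache[NUM_LINES];

/* Write command: store the data and the rank indicator it arrived with. */
static void cache_write(uint32_t addr, uint32_t data, uint8_t rank)
{
    struct cache_line *line = &cache[addr % NUM_LINES];
    line->valid = true;
    line->tag   = addr / NUM_LINES;
    line->data  = data;
    line->rank  = rank;
}

/* Read command: return the data and store the rank supplied with the read. */
static bool cache_read(uint32_t addr, uint8_t rank, uint32_t *out)
{
    struct cache_line *line = &cache[addr % NUM_LINES];
    if (!line->valid || line->tag != addr / NUM_LINES)
        return false;           /* miss; the fill path is not modeled here */
    line->rank = rank;
    *out = line->data;
    return true;
}

int main(void)
{
    uint32_t value = 0;
    cache_write(0x1000, 0xabcd, 3);         /* write command carrying rank 3 */
    (void)cache_read(0x1000, 7, &value);    /* read command carrying rank 7  */
    return 0;
}
```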