Patents Assigned to BIGSTREAM SOLUTIONS, INC.
  • Patent number: 11321606
    Abstract: Methods, systems, apparatus, and circuits for dynamically optimizing the circuit for forward and backward propagation phases of training for neural networks, given a fixed resource budget. The circuits comprise: (1) a specialized circuit that can operate on a plurality of multi-dimensional inputs and weights for the forward propagation phase of neural networks; and (2) a specialized circuit that can operate on either gradients and inputs, or gradients and weights for the backward propagation phase of neural networks.
    Type: Grant
    Filed: January 15, 2020
    Date of Patent: May 3, 2022
    Assignee: BIGSTREAM SOLUTIONS, INC.
    Inventors: Hardik Sharma, Jongse Park
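The two circuit roles the abstract names map onto the two matrix products of a dense layer's training step. A minimal sketch in NumPy, purely illustrative of the computation patterns (the patent covers specialized hardware circuits, not this software):

```python
import numpy as np

def forward(x, w):
    # Forward propagation circuit's pattern: inputs x weights.
    return x @ w

def backward(x, w, grad_out):
    # Backward propagation circuit's two patterns:
    # gradients x inputs produce the weight gradients,
    # gradients x weights produce the input gradients.
    grad_w = x.T @ grad_out
    grad_x = grad_out @ w.T
    return grad_x, grad_w

rng = np.random.default_rng(0)
x = rng.standard_normal((4, 3))   # batch of 4 inputs, 3 features
w = rng.standard_normal((3, 2))   # 3-in, 2-out weight matrix
y = forward(x, w)
grad_x, grad_w = backward(x, w, np.ones_like(y))
```

Under a fixed resource budget, the optimization the abstract describes amounts to deciding how much of the hardware to devote to each of these product shapes.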
  • Patent number: 11194625
    Abstract: For one embodiment of the present invention, methods and systems for accelerating data operations with efficient memory management in native code and native dynamic class loading mechanisms are disclosed. In one embodiment, a data processing system comprises memory and a processing unit coupled to the memory. The processing unit is configured to receive input data, to execute a domain specific language (DSL) for a DSL operation with a native implementation, to translate a user defined function (UDF) into the native implementation by translating user defined managed software code into native software code, to execute the native software code in the native implementation, and to utilize a native memory management mechanism for the memory to manage object instances in the native implementation.
    Type: Grant
    Filed: December 3, 2019
    Date of Patent: December 7, 2021
    Assignee: BIGSTREAM SOLUTIONS, INC.
    Inventors: Weiwei Chen, Behnam Robatmili, Maysam Lavasani, John David Davis
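The pipeline the abstract describes (translate a managed UDF into a native implementation, then manage its objects natively) can be sketched at toy scale. All names here are illustrative stand-ins, not the patent's actual mechanism: a dictionary lookup plays the role of code translation, and a flat buffer list plays the role of native memory management:

```python
import numpy as np

def translate_udf(udf_name):
    # Stand-in for translating managed user code into native code:
    # look up a precompiled vectorized kernel with the same semantics.
    native_kernels = {"add_one": lambda col: col + 1}
    return native_kernels[udf_name]

class NativeArena:
    # Toy native memory manager: object instances live in flat buffers
    # released in bulk, instead of per-object managed-heap allocations.
    def __init__(self):
        self.buffers = []

    def allocate(self, shape):
        buf = np.empty(shape)
        self.buffers.append(buf)
        return buf

    def release_all(self):
        self.buffers.clear()

arena = NativeArena()
data = np.arange(5, dtype=np.float64)
out = arena.allocate(data.shape)
kernel = translate_udf("add_one")   # UDF "translated" to a native kernel
out[:] = kernel(data)
```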
  • Publication number: 20210042280
    Abstract: Methods and systems are disclosed for a hardware acceleration pipeline with filtering engine for column-oriented database management systems with arbitrary scheduling functionality. In one example, a hardware accelerator for data stored in columnar storage format comprises memory to store data and a controller coupled to the memory. The controller is configured to process at least a subset of a page of columnar format in an execution unit with any arbitrary scheduling across columns of the columnar storage format.
    Type: Application
    Filed: August 8, 2020
    Publication date: February 11, 2021
    Applicant: BigStream Solutions, Inc.
    Inventors: Hardik Sharma, Michael Brzozowski, Balavinayagam Samynathan
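The "arbitrary scheduling across columns" property follows from the algebra of filtering: each column's predicate yields a selection bitmap, and bitmaps combine with a commutative AND, so columns can be evaluated in any order. A small sketch with illustrative column names and predicates:

```python
import numpy as np

# One page of columnar data: each column is a contiguous array.
page = {
    "price":    np.array([10, 55, 30, 80, 5]),
    "quantity": np.array([ 1,  9,  4,  2, 7]),
}

predicates = {
    "price":    lambda col: col > 20,
    "quantity": lambda col: col < 5,
}

def filter_page(page, predicates, schedule):
    # Evaluate per-column predicates in the given order and AND the
    # bitmaps; because AND is commutative, any schedule is valid.
    mask = np.ones(len(next(iter(page.values()))), dtype=bool)
    for name in schedule:
        mask &= predicates[name](page[name])
    return mask

a = filter_page(page, predicates, ["price", "quantity"])
b = filter_page(page, predicates, ["quantity", "price"])
```

Both schedules select the same rows, which is what lets a hardware scheduler order column work by data availability rather than query order.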
  • Publication number: 20200311264
    Abstract: A data processing system is disclosed that includes an Input/output (I/O) interface to receive incoming data and an in-line accelerator coupled to the I/O interface. The in-line accelerator is configured to receive the incoming data from the I/O interface and to automatically remove all timing channels that potentially form through any shared resources. A generic technique of the present design avoids timing channels between different types of resources. A compiler is enabled to automatically apply this generic pattern to secure shared resources.
    Type: Application
    Filed: June 15, 2020
    Publication date: October 1, 2020
    Applicant: BigStream Solutions, Inc.
    Inventors: Maysam LAVASANI, Balavinayagam Samynathan
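One way to remove a timing channel through a shared resource is to make each requester's service time independent of the others' activity. The model below is an illustrative fixed round-robin slot scheme, not the patent's specific technique: every requester owns a fixed slot per round, so its latency is the same whether the resource is contended or idle:

```python
def service_times(requests, num_slots=4):
    # requests: dict mapping requester id -> list of arrival cycles.
    # Each requester owns one slot per round; a request is served at the
    # first owned slot at or after its arrival, independent of other traffic.
    served = {}
    for rid, arrivals in requests.items():
        served[rid] = []
        for t in arrivals:
            wait = (rid - t) % num_slots   # cycles until rid's next slot
            served[rid].append(t + wait)
    return served

# Requester 0's service times are identical whether requester 1 is idle or busy,
# so requester 1 learns nothing from timing.
quiet = service_times({0: [0, 5, 9]})
busy  = service_times({0: [0, 5, 9], 1: [0, 1, 2, 3]})
```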
  • Publication number: 20200301898
    Abstract: Methods and systems are disclosed for accelerating Big Data operations by utilizing subgraph templates for a hardware accelerator of a computational storage device. In one example, a computer-implemented method comprises performing a query with a dataflow compiler; performing a stage acceleration analyzer function including executing a matching algorithm to determine similarities between sub-graphs of an application program and unique templates from an available library of templates; and selecting at least one template that at least partially matches the sub-graphs, with the at least one template being associated with a linear set of operators to be executed sequentially within a stage of the Big Data operations.
    Type: Application
    Filed: June 10, 2020
    Publication date: September 24, 2020
    Applicant: BigStream Solutions, Inc.
    Inventors: Balavinayagam Samynathan, Keith Chapman, Mehdi Nik, Behnam Robatmili, Shahrzad Mirkhani, Maysam Lavasani, John David Davis, Danesh Tavana, Weiwei Chen
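Since the templates are associated with linear operator chains, the matching step can be pictured as comparing a sub-graph's operator sequence against each library chain and keeping the best partial match. The operator names and the prefix-overlap score below are illustrative, not the patent's matching algorithm:

```python
# Toy template library: each template is a linear chain of operators.
TEMPLATE_LIBRARY = {
    "scan_filter_project": ["scan", "filter", "project"],
    "scan_aggregate":      ["scan", "aggregate"],
}

def overlap(a, b):
    # Count the leading operators the two chains share.
    n = 0
    for x, y in zip(a, b):
        if x != y:
            break
        n += 1
    return n

def select_template(subgraph_ops):
    # Pick the library template with the largest overlap; a partial
    # match still lets part of the stage run on the accelerator.
    best = max(TEMPLATE_LIBRARY,
               key=lambda name: overlap(subgraph_ops, TEMPLATE_LIBRARY[name]))
    return best, overlap(subgraph_ops, TEMPLATE_LIBRARY[best])

name, score = select_template(["scan", "filter", "limit"])
```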
  • Publication number: 20200226444
    Abstract: For one embodiment, a hardware accelerator with a heterogeneous architecture for training quantized neural networks is described. In one example, a hardware accelerator for training quantized data comprises memory to store data, a plurality of compute units to perform computations of a data type for an inference phase of training quantized data of a neural network, and a plurality of heterogeneous-precision compute units to perform computations of mixed precision data types for a backward propagation phase of training quantized data of the neural network.
    Type: Application
    Filed: January 15, 2020
    Publication date: July 16, 2020
    Applicant: Bigstream Solutions, Inc.
    Inventors: Hardik Sharma, Jongse Park
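The split the abstract describes, one data type for the inference (forward) phase and mixed, higher precision for the backward phase, can be sketched numerically. The int8 quantization scheme below is an illustrative assumption, not the patent's format:

```python
import numpy as np

def quantize(x, scale=16):
    # Illustrative symmetric quantization to int8.
    return np.clip(np.round(x * scale), -128, 127).astype(np.int8), scale

def forward_int8(x, w):
    # Inference-phase compute units: integer multiply-accumulate,
    # rescaled back to a real value at the end.
    xq, sx = quantize(x)
    wq, sw = quantize(w)
    return (xq.astype(np.int32) @ wq.astype(np.int32)) / (sx * sw)

def backward_fp64(x, w, grad_out):
    # Backward-phase compute units: kept in higher precision, since
    # gradients tolerate quantization poorly.
    return grad_out @ w.T, x.T @ grad_out

x = np.array([[0.5, -0.25]])
w = np.array([[1.0], [2.0]])
y = forward_int8(x, w)
gx, gw = backward_fp64(x, w, np.array([[1.0]]))
```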
  • Publication number: 20200225996
    Abstract: Methods, systems, apparatus, and circuits for dynamically optimizing the circuit for forward and backward propagation phases of training for neural networks, given a fixed resource budget. The circuits comprise: (1) a specialized circuit that can operate on a plurality of multi-dimensional inputs and weights for the forward propagation phase of neural networks; and (2) a specialized circuit that can operate on either gradients and inputs, or gradients and weights for the backward propagation phase of neural networks.
    Type: Application
    Filed: January 15, 2020
    Publication date: July 16, 2020
    Applicant: Bigstream Solutions, Inc.
    Inventors: Hardik Sharma, Jongse Park
  • Publication number: 20200226473
    Abstract: For one embodiment, a hardware accelerator with a heterogeneous-precision architecture for training quantized neural networks is described. In one example, a hardware accelerator for training quantized neural networks comprises a multilevel memory to store data and a software-controllable mixed precision array coupled to the memory. The mixed precision array includes an input buffer, detect logic to detect zero value operands, and a plurality of heterogeneous-precision compute units to perform computations of mixed precision data types for the forward and backward propagation phases of training quantized neural networks.
    Type: Application
    Filed: January 15, 2020
    Publication date: July 16, 2020
    Applicant: Bigstream Solutions, Inc.
    Inventors: Hardik Sharma, Jongse Park
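The detect logic's purpose is easy to show in software: quantized activations tend to contain many zeros, and any multiply with a zero operand contributes nothing, so it never needs to be issued to a compute unit. A minimal illustrative sketch, not the hardware's actual dataflow:

```python
import numpy as np

def dot_with_zero_skip(a, b):
    # Detect logic: check operands for zero before issuing the multiply;
    # `issued` counts the multiplies that actually reach a compute unit.
    issued = 0
    acc = 0.0
    for x, y in zip(a, b):
        if x == 0 or y == 0:
            continue   # zero-value operand detected: skip the multiply
        acc += x * y
        issued += 1
    return acc, issued

# Zeros here stand in for the sparsity typical of quantized activations.
a = np.array([0.0, 2.0, 0.0, 3.0])
b = np.array([4.0, 5.0, 6.0, 0.0])
result, issued = dot_with_zero_skip(a, b)
```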
  • Publication number: 20200183749
    Abstract: For one embodiment of the present invention, methods and systems for accelerating data operations with efficient memory management in native code and native dynamic class loading mechanisms are disclosed. In one embodiment, a data processing system comprises memory and a processing unit coupled to the memory. The processing unit is configured to receive input data, to execute a domain specific language (DSL) for a DSL operation with a native implementation, to translate a user defined function (UDF) into the native implementation by translating user defined managed software code into native software code, to execute the native software code in the native implementation, and to utilize a native memory management mechanism for the memory to manage object instances in the native implementation.
    Type: Application
    Filed: December 3, 2019
    Publication date: June 11, 2020
    Applicant: BigStream Solutions, Inc.
    Inventors: Weiwei Chen, Behnam Robatmili, Maysam Lavasani, John David Davis
  • Publication number: 20200081841
    Abstract: Methods and systems are disclosed for a cache architecture for accelerating operations of a column-oriented database management system. In one example, a hardware accelerator for data stored in columnar storage format comprises at least one decoder to generate decoded data and a cache controller coupled to the at least one decoder. The cache controller comprises a store unit to store data in columnar format, cache admission policy hardware for admitting data into the store unit including a next address while a current address is being processed, and a prefetch unit for prefetching data from memory when a cache miss occurs.
    Type: Application
    Filed: September 6, 2019
    Publication date: March 12, 2020
    Applicant: Bigstream Solutions, Inc.
    Inventors: Balavinayagam Samynathan, John David Davis, Peter Robert Matheu, Christopher Ryan Both, Maysam Lavasani
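Columnar scans are highly sequential, so admitting the next address while the current one is processed turns most of a scan into cache hits. A toy approximation of that behavior (sizes, policy, and the fetch-plus-prefetch-on-miss combination are illustrative, not the patented design):

```python
class PrefetchingCache:
    def __init__(self, backing):
        self.backing = backing   # stand-in for memory holding column data
        self.store = {}          # address -> cached value
        self.misses = 0

    def read(self, addr):
        if addr not in self.store:
            self.misses += 1
            self.store[addr] = self.backing[addr]   # fetch on miss
            nxt = addr + 1
            if nxt in self.backing:
                # Admit the next address while the current one is processed.
                self.store[nxt] = self.backing[nxt]
        return self.store[addr]

memory = {i: i * 10 for i in range(8)}
cache = PrefetchingCache(memory)
values = [cache.read(a) for a in range(4)]   # sequential column scan
```

On this sequential scan only every other access misses; the prefetched neighbor covers the rest.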
  • Publication number: 20190392002
    Abstract: Methods and systems are disclosed for accelerating big data operations by utilizing subgraph templates. In one example, a data processing system comprises a hardware processor and a hardware accelerator coupled to the hardware processor. The hardware accelerator is configured with a compiler of an accelerator functionality to generate an execution plan, to generate computations for nodes including subgraphs in a distributed system for an application program based on the execution plan, and to execute a matching algorithm to determine similarities between the subgraphs and unique templates from an available library of templates.
    Type: Application
    Filed: June 25, 2019
    Publication date: December 26, 2019
    Applicant: BigStream Solutions, Inc.
    Inventors: Maysam Lavasani, John David Davis, Danesh Tavana, Weiwei Chen, Balavinayagam Samynathan, Behnam Robatmili
  • Patent number: 10089259
    Abstract: A data processing system is disclosed that includes an in-line accelerator and a processor. The system can be extended to a multi-level acceleration system comprising an in-line accelerator, additional accelerators, and a processor. The in-line accelerator receives the incoming data elements and begins processing. Upon premature termination of the execution, the in-line accelerator transfers the execution to a processor (or to the next-level accelerator in a multi-level acceleration system). Transfer of execution includes the transfer of data and control. The processor (or the next accelerator) either continues the execution of the in-line accelerator from the bailout point or restarts the execution. The necessary data must be transferred to the processor (or the next accelerator) to complete the execution.
    Type: Grant
    Filed: October 16, 2015
    Date of Patent: October 2, 2018
    Assignee: Bigstream Solutions, Inc.
    Inventor: Maysam Lavasani
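The bailout pattern the abstract describes can be sketched as an exception-style handoff: the accelerator handles the common case and, on premature termination, transfers both control and the state accumulated so far to the processor, which continues from that point. Packet format, stages, and state contents below are illustrative assumptions:

```python
class Bailout(Exception):
    # Carries the bailout point and the partial state to transfer.
    def __init__(self, stage, state):
        self.stage, self.state = stage, state

def accelerator(packet):
    # Fast path: only handles short packets; otherwise bail out,
    # handing over the header already parsed.
    header = packet[:2]
    if len(packet) > 4:
        raise Bailout(stage="payload", state={"header": header})
    return ("accel", header, packet[2:])

def processor(packet, bailout):
    # Continue from the bailout point using the transferred state
    # rather than re-parsing the header.
    return ("cpu", bailout.state["header"], packet[2:])

def run(packet):
    try:
        return accelerator(packet)      # in-line accelerator first
    except Bailout as b:
        return processor(packet, b)     # transfer of data and control

fast = run([1, 2, 3])
slow = run([1, 2, 3, 4, 5, 6])
```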
  • Patent number: 9953003
    Abstract: A data processing system is disclosed that includes machines having an in-line accelerator and a general-purpose instruction-based processor. In one example, a machine comprises storage to store data and an Input/output (I/O) processing unit coupled to the storage. The I/O processing unit includes an in-line accelerator that is configured for in-line stream processing of distributed multi-stage dataflow-based computations. For a first stage of operations, the in-line accelerator is configured to read data from the storage, to perform computations on the data, and to shuffle a result of the computations to generate a first set of shuffled data. The in-line accelerator performs the first stage of operations with bufferless computations.
    Type: Grant
    Filed: July 21, 2016
    Date of Patent: April 24, 2018
    Assignee: BigStream Solutions, Inc.
    Inventor: Maysam Lavasani
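The first-stage behavior (read, compute, shuffle) can be sketched as a streaming loop that routes each result to its destination partition as soon as it is produced, rather than staging the whole stage's output first, in the spirit of the bufferless computation the abstract describes. Keys, the square computation, and the modulo partitioner are illustrative:

```python
def first_stage(records, num_partitions):
    # Stream each record straight through: read, compute, then shuffle
    # the result to its destination partition immediately.
    partitions = [[] for _ in range(num_partitions)]
    for key, value in records:
        result = (key, value * value)                     # per-record compute
        partitions[key % num_partitions].append(result)   # shuffle by key
    return partitions

shuffled = first_stage([(0, 2), (1, 3), (2, 4), (3, 5)], num_partitions=2)
```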
  • Publication number: 20180068004
    Abstract: A system is disclosed that includes machines, distributed nodes, event producers, and edge devices for performing big data applications. In one example, a centralized system for big data services comprises storage to store data for big data services and a plurality of servers coupled to the storage. The plurality of servers perform at least one of ingest, transform, and serve stages of data. A sub-system having an auto transfer feature performs program analysis on computations of the data and automatically detects computations to be transferred from within the centralized system to at least one of an event producer and an edge device.
    Type: Application
    Filed: September 8, 2017
    Publication date: March 8, 2018
    Applicant: BigStream Solutions, Inc.
    Inventor: Maysam LAVASANI
  • Publication number: 20180069925
    Abstract: A system is disclosed that includes machines for performing big data applications. In one example, a centralized system for big data services comprises storage to store data for big data services and a plurality of servers coupled to the storage. The plurality of servers perform at least one of ingest, transform, and serve stages of data. A sub-system has an auto transfer feature to perform program analysis on computations of the data and to automatically detect computations to be transferred from within the centralized system to at least one distributed node that includes at least one of messaging systems and data collection systems.
    Type: Application
    Filed: September 8, 2017
    Publication date: March 8, 2018
    Applicant: BigStream Solutions, Inc.
    Inventor: Maysam LAVASANI
  • Publication number: 20170308697
    Abstract: A data processing system is disclosed that includes an Input/output (I/O) interface to receive incoming data and an in-line accelerator coupled to the I/O interface. The in-line accelerator is configured to receive the incoming data from the I/O interface and to automatically remove all timing channels that potentially form through any shared resources. A generic technique of the present design avoids timing channels between different types of resources. A compiler is enabled to automatically apply this generic pattern to secure shared resources.
    Type: Application
    Filed: April 21, 2017
    Publication date: October 26, 2017
    Applicant: BigStream Solutions, Inc.
    Inventor: Maysam LAVASANI
  • Patent number: 9715475
    Abstract: A data processing system is disclosed that includes machines having an in-line accelerator and a general-purpose instruction-based processor. In one example, a machine comprises storage to store data and an Input/output (I/O) processing unit coupled to the storage. The I/O processing unit includes an in-line accelerator that is configured for in-line stream processing of distributed multi-stage dataflow-based computations. For a first stage of operations, the in-line accelerator is configured to read data from the storage, to perform computations on the data, and to shuffle a result of the computations to generate a first set of shuffled data. The in-line accelerator performs the first stage of operations with bufferless computations.
    Type: Grant
    Filed: July 20, 2016
    Date of Patent: July 25, 2017
    Assignee: BIGSTREAM SOLUTIONS, INC.
    Inventor: Maysam Lavasani