Patents by Inventor Maysam Lavasani

Maysam Lavasani has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20220283826
    Abstract: For one embodiment of the present disclosure, an artificial intelligence (AI) system and method are disclosed for automatically generating browser actions using graph neural networks. A computer-implemented method includes receiving, with an AI agent, an input including a high-level natural language or text request or task and, in response to the input, automatically obtaining, with the AI agent, an HTML graph for a web application associated with the input. The method further includes automatically obtaining an appropriate domain-specific semantic graph (DSG), based on a known set of DSGs, in response to obtaining the HTML graph for the web application, and automatically generating, with a graph neural network (GNN), a labeled HTML graph in response to providing the HTML graph and the appropriate DSG to the GNN.
    Type: Application
    Filed: March 8, 2022
    Publication date: September 8, 2022
    Inventor: Maysam Lavasani
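The labeling step this abstract describes can be sketched in a few lines. This is a minimal illustration only: the toy graph shape, the tag-to-label DSG mapping, and the single neighbor-aggregation pass are assumptions standing in for a trained graph neural network.

```python
# Toy HTML graph: node id -> (tag, list of child node ids).
def label_html_graph(html_graph, dsg):
    """Assign each node a semantic label from a domain-specific
    semantic graph (DSG), refined by one round of neighbor
    aggregation (a stand-in for a GNN's message passing)."""
    # Pass 1: direct lookup of each node's tag in the DSG.
    labels = {n: dsg.get(tag, "unlabeled") for n, (tag, _) in html_graph.items()}
    # Pass 2: a node whose children carry form-related labels is
    # promoted to a container label, mimicking context aggregation.
    for n, (tag, children) in html_graph.items():
        child_labels = {labels[c] for c in children}
        if "form_field" in child_labels and labels[n] == "unlabeled":
            labels[n] = "form_container"
    return labels

html_graph = {
    0: ("div", [1, 2]),
    1: ("input", []),
    2: ("button", []),
}
dsg = {"input": "form_field", "button": "submit_action"}
```

Here `label_html_graph(html_graph, dsg)` labels the `input` node a form field, the `button` a submit action, and the enclosing `div` a form container via the aggregation pass.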
  • Patent number: 11194625
    Abstract: For one embodiment of the present invention, methods and systems for accelerating data operations with efficient memory management in native code and native dynamic class loading mechanisms are disclosed. In one embodiment, a data processing system comprises memory and a processing unit coupled to the memory. The processing unit is configured to receive input data, to execute a domain specific language (DSL) for a DSL operation with a native implementation, to translate a user defined function (UDF) into the native implementation by translating user defined managed software code into native software code, to execute the native software code in the native implementation, and to utilize a native memory management mechanism for the memory to manage object instances in the native implementation.
    Type: Grant
    Filed: December 3, 2019
    Date of Patent: December 7, 2021
    Assignee: BIGSTREAM SOLUTIONS, INC.
    Inventors: Weiwei Chen, Behnam Robatmili, Maysam Lavasani, John David Davis
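The native memory management mechanism described above can be pictured as an arena: object instances created while a translated UDF runs are tracked together and released in one bulk reset instead of individually by a managed garbage collector. This sketch is an illustrative assumption; the `Arena` and `run_udf_natively` names are hypothetical, not from the patent.

```python
class Arena:
    """Toy arena allocator: object instances produced during one
    native DSL operation are tracked together and freed in bulk."""
    def __init__(self):
        self._objects = []

    def alloc(self, obj):
        # Track the instance so it can be released with the arena.
        self._objects.append(obj)
        return obj

    def live_count(self):
        return len(self._objects)

    def reset(self):
        # Bulk free at the end of the operation.
        self._objects.clear()

def run_udf_natively(arena, values, udf):
    """Run a 'translated' UDF, allocating its results from the arena."""
    return [arena.alloc(udf(v)) for v in values]
```

A caller would run the UDF over a batch, consume the results, and then call `arena.reset()` once, rather than relying on per-object collection.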
  • Publication number: 20200311264
    Abstract: A data processing system is disclosed that includes an Input/output (I/O) interface to receive incoming data and an in-line accelerator coupled to the I/O interface. The in-line accelerator is configured to receive the incoming data from the I/O interface and to automatically remove all timing channels that potentially form through any shared resources. A generic technique of the present design avoids timing channels between different types of resources. A compiler is enabled to automatically apply this generic pattern to secure shared resources.
    Type: Application
    Filed: June 15, 2020
    Publication date: October 1, 2020
    Applicant: BigStream Solutions, Inc.
    Inventors: Maysam LAVASANI, Balavinayagam Samynathan
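One standard way to remove timing channels through a shared resource, in the spirit of the generic pattern above, is a fixed slot schedule: each tenant is served only in its own pre-assigned cycles, so its observed service times cannot depend on any other tenant's demand. The model below is a toy assumption, not the patented compiler pass itself.

```python
def fixed_slot_schedule(queues, order, cycles):
    """Serve shared-resource requests on a fixed round-robin slot
    schedule. A tenant's slot arrives at the same cycles no matter
    what other tenants do, so timing leaks no cross-tenant state."""
    service_times = {t: [] for t in order}
    for cycle in range(cycles):
        tenant = order[cycle % len(order)]  # slot owner this cycle
        if queues[tenant]:
            queues[tenant].pop(0)           # serve one pending request
            service_times[tenant].append(cycle)
    return service_times
```

With tenants A and B alternating, A's requests complete at identical cycles whether B is idle or saturating the resource, which is exactly the property a timing channel would violate.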
  • Publication number: 20200301898
    Abstract: Methods and systems are disclosed for accelerating Big Data operations by utilizing subgraph templates for a hardware accelerator of a computational storage device. In one example, a computer-implemented method comprises performing a query with a dataflow compiler; performing a stage acceleration analyzer function, including executing a matching algorithm to determine similarities between sub-graphs of an application program and unique templates from an available library of templates; and selecting at least one template that at least partially matches the sub-graphs, with the at least one template being associated with a linear set of operators to be executed sequentially within a stage of the Big Data operations.
    Type: Application
    Filed: June 10, 2020
    Publication date: September 24, 2020
    Applicant: BigStream Solutions, Inc.
    Inventors: Balavinayagam Samynathan, Keith Chapman, Mehdi Nik, Behnam Robatmili, Shahrzad Mirkhani, Maysam Lavasani, John David Davis, Danesh Tavana, Weiwei Chen
  • Patent number: 10691797
    Abstract: A data processing system is disclosed that includes an Input/output (I/O) interface to receive incoming data and an in-line accelerator coupled to the I/O interface. The in-line accelerator is configured to receive the incoming data from the I/O interface and to automatically remove all timing channels that potentially form through any shared resources. A generic technique of the present design avoids timing channels between different types of resources. A compiler is enabled to automatically apply this generic pattern to secure shared resources.
    Type: Grant
    Filed: April 21, 2017
    Date of Patent: June 23, 2020
    Assignee: Big Stream Solutions, Inc.
    Inventors: Maysam Lavasani, Balavinayagam Samynathan
  • Publication number: 20200183749
    Abstract: For one embodiment of the present invention, methods and systems for accelerating data operations with efficient memory management in native code and native dynamic class loading mechanisms are disclosed. In one embodiment, a data processing system comprises memory and a processing unit coupled to the memory. The processing unit is configured to receive input data, to execute a domain specific language (DSL) for a DSL operation with a native implementation, to translate a user defined function (UDF) into the native implementation by translating user defined managed software code into native software code, to execute the native software code in the native implementation, and to utilize a native memory management mechanism for the memory to manage object instances in the native implementation.
    Type: Application
    Filed: December 3, 2019
    Publication date: June 11, 2020
    Applicant: BigStream Solutions, Inc.
    Inventors: Weiwei Chen, Behnam Robatmili, Maysam Lavasani, John David Davis
  • Publication number: 20200081841
    Abstract: Methods and systems are disclosed for a cache architecture for accelerating operations of a column-oriented database management system. In one example, a hardware accelerator for data stored in columnar storage format comprises at least one decoder to generate decoded data, a cache controller coupled to the at least one decoder. The cache controller comprising a store unit to store data in columnar format, cache admission policy hardware for admitting data into the store unit including a next address while a current address is being processed, and a prefetch unit for prefetching data from memory when a cache miss occurs.
    Type: Application
    Filed: September 6, 2019
    Publication date: March 12, 2020
    Applicant: Bigstream Solutions, Inc.
    Inventors: Balavinayagam Samynathan, John David Davis, Peter Robert Matheu, Christopher Ryan Both, Maysam Lavasani
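The cache controller behavior in this abstract — admit the next address while the current one is processed, and prefetch a block from memory on a miss — can be modeled in a few lines. The block size, the dict-backed store, and the admission rule are illustrative assumptions only.

```python
class ColumnCache:
    """Toy model of the described controller for columnar data."""
    PREFETCH = 4  # assumed block size for a miss-triggered prefetch

    def __init__(self, memory):
        self.memory = memory   # addr -> decoded column value
        self.store = {}        # the cache's store unit
        self.misses = 0

    def read(self, addr):
        if addr not in self.store:
            self.misses += 1
            # Prefetch a block from memory starting at the missed address.
            for a in range(addr, addr + self.PREFETCH):
                if a in self.memory:
                    self.store[a] = self.memory[a]
        # Admission policy: admit the next address while the current
        # address is being processed.
        if addr + 1 in self.memory:
            self.store[addr + 1] = self.memory[addr + 1]
        return self.store[addr]
```

A sequential column scan then misses once per block: reading address 0 prefetches 0..3, so reads of 1, 2, and 3 all hit.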
  • Publication number: 20190392002
    Abstract: Methods and systems are disclosed for accelerating big data operations by utilizing subgraph templates. In one example, a data processing system comprises a hardware processor and a hardware accelerator coupled to the hardware processor. The hardware accelerator is configured with a compiler of an accelerator functionality to generate an execution plan, to generate computations for nodes including subgraphs in a distributed system for an application program based on the execution plan, and to execute a matching algorithm to determine similarities between the subgraphs and unique templates from an available library of templates.
    Type: Application
    Filed: June 25, 2019
    Publication date: December 26, 2019
    Applicant: BigStream Solutions, Inc.
    Inventors: Maysam Lavasani, John David Davis, Danesh Tavana, Weiwei Chen, Balavinayagam Samynathan, Behnam Robatmili
  • Patent number: 10089259
    Abstract: A data processing system is disclosed that includes an in-line accelerator and a processor. The system can be extended to include an in-line accelerator and system of multi-level accelerators and a processor. The in-line accelerator receives the incoming data elements and begins processing. Upon premature termination of the execution, the in-line accelerator transfers the execution to a processor (or the next level accelerator in a multi-level acceleration system). Transfer of execution includes transferring of data and control. The processor (or the next accelerator) either continues the execution of the in-line accelerator from the bailout point or restarts the execution. The necessary data must be transferred to the processor (or the next accelerator) to complete the execution.
    Type: Grant
    Filed: October 16, 2015
    Date of Patent: October 2, 2018
    Assignee: Bigstream Solutions, Inc.
    Inventor: Maysam Lavasani
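The transfer-of-execution idea above — the accelerator terminates prematurely and hands both data and control to the processor, which continues from the bailout point — maps naturally onto an exception carrying state. This is a minimal sketch under that assumption; the `Bailout` name and the fast-path condition are hypothetical.

```python
class Bailout(Exception):
    """Carries the state handed from the in-line accelerator to the
    processor (or the next-level accelerator)."""
    def __init__(self, index, partial):
        self.index = index      # control: where execution stopped
        self.partial = partial  # data: results produced so far

def accelerator_pass(data):
    """Fast path: handles only plain integers; bails out otherwise."""
    out = []
    for i, x in enumerate(data):
        if not isinstance(x, int):
            raise Bailout(i, out)  # premature termination
        out.append(x * 2)
    return out

def process(data):
    try:
        return accelerator_pass(data)
    except Bailout as b:
        # The processor continues from the bailout point using the
        # transferred data and control state (rather than restarting).
        return b.partial + [int(x) * 2 for x in data[b.index:]]
```

Inputs the accelerator fully handles never reach the processor; a mixed input completes with the same result via the slow path.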
  • Patent number: 9953003
    Abstract: A data processing system is disclosed that includes machines having an in-line accelerator and a general-purpose instruction-based processor. In one example, a machine comprises storage to store data and an Input/output (I/O) processing unit coupled to the storage. The I/O processing unit includes an in-line accelerator that is configured for in-line stream processing of distributed multi-stage dataflow-based computations. For a first stage of operations, the in-line accelerator is configured to read data from the storage, to perform computations on the data, and to shuffle a result of the computations to generate a first set of shuffled data. The in-line accelerator performs the first stage of operations with bufferless computations.
    Type: Grant
    Filed: July 21, 2016
    Date of Patent: April 24, 2018
    Assignee: BigStream Solutions, Inc.
    Inventor: Maysam Lavasani
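The read/compute/shuffle stage with bufferless computation described above can be sketched with a generator: each record is computed and assigned a shuffle partition as it streams through, instead of buffering the whole stage's output first. The squaring computation and hash partitioning are illustrative assumptions.

```python
def stage_one(records, num_reducers):
    """First stage: read, compute, and shuffle each result as it is
    produced (a generator), rather than buffering the stage's output."""
    for key, value in records:
        result = value * value                  # per-record computation
        partition = hash(key) % num_reducers    # shuffle assignment
        yield partition, (key, result)          # emitted on the fly

def collect_shuffle(records, num_reducers):
    """Downstream side: gather the streamed, shuffled records."""
    partitions = {p: [] for p in range(num_reducers)}
    for p, item in stage_one(records, num_reducers):
        partitions[p].append(item)
    return partitions
```

Because `stage_one` is a generator, at most one record's state is held between the compute and shuffle steps, which is the "bufferless" property the abstract claims.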
  • Publication number: 20180069925
    Abstract: A system is disclosed that includes machines for performing big data applications. In one example, a centralized system for big data services comprises storage to store data for big data services and a plurality of servers coupled to the storage. The plurality of servers perform at least one of ingest, transform, and serve stages of data. A sub-system has an auto transfer feature to perform program analysis on computations of the data and to automatically detect computations to be transferred from within the centralized system to at least one distributed node that includes at least one of messaging systems and data collection systems.
    Type: Application
    Filed: September 8, 2017
    Publication date: March 8, 2018
    Applicant: BigStream Solutions, Inc.
    Inventor: Maysam LAVASANI
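A toy version of the auto transfer analysis above: scan the pipeline's operators in order and push the leading run of operators that need no centralized state out to the distributed node, keeping everything from the first state-dependent operator onward in the centralized system. The per-operator `needs_central_state` flag is an assumed stand-in for real program analysis.

```python
def split_for_transfer(pipeline):
    """Split a pipeline of (name, needs_central_state) operators into
    the prefix transferable to a distributed node (e.g., a messaging
    or data collection system) and the centrally-executed remainder."""
    edge, central = [], []
    for name, needs_central_state in pipeline:
        if central or needs_central_state:
            central.append(name)   # order must be preserved downstream
        else:
            edge.append(name)      # safe to run at the distributed node
    return edge, central
```

For instance, stateless `parse` and `filter` steps move to the edge, while a join against a centrally stored reference table anchors the rest centrally.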
  • Publication number: 20180068004
    Abstract: A system is disclosed that includes machines, distributed nodes, event producers, and edge devices for performing big data applications. In one example, a centralized system for big data services comprises storage to store data for big data services and a plurality of servers coupled to the storage. The plurality of servers perform at least one of ingest, transform, and serve stages of data. A sub-system having an auto transfer feature performs program analysis on computations of the data and automatically detects computations to be transferred from within the centralized system to at least one of an event producer and an edge device.
    Type: Application
    Filed: September 8, 2017
    Publication date: March 8, 2018
    Applicant: BigStream Solutions, Inc.
    Inventor: Maysam LAVASANI
  • Publication number: 20170308697
    Abstract: A data processing system is disclosed that includes an Input/output (I/O) interface to receive incoming data and an in-line accelerator coupled to the I/O interface. The in-line accelerator is configured to receive the incoming data from the I/O interface and to automatically remove all timing channels that potentially form through any shared resources. A generic technique of the present design avoids timing channels between different types of resources. A compiler is enabled to automatically apply this generic pattern to secure shared resources.
    Type: Application
    Filed: April 21, 2017
    Publication date: October 26, 2017
    Applicant: BigStream Solutions, Inc.
    Inventor: Maysam LAVASANI
  • Patent number: 9715475
    Abstract: A data processing system is disclosed that includes machines having an in-line accelerator and a general-purpose instruction-based processor. In one example, a machine comprises storage to store data and an Input/output (I/O) processing unit coupled to the storage. The I/O processing unit includes an in-line accelerator that is configured for in-line stream processing of distributed multi-stage dataflow-based computations. For a first stage of operations, the in-line accelerator is configured to read data from the storage, to perform computations on the data, and to shuffle a result of the computations to generate a first set of shuffled data. The in-line accelerator performs the first stage of operations with bufferless computations.
    Type: Grant
    Filed: July 20, 2016
    Date of Patent: July 25, 2017
    Assignee: BIGSTREAM SOLUTIONS, INC.
    Inventor: Maysam Lavasani
  • Publication number: 20170024338
    Abstract: A data processing system is disclosed that includes an in-line accelerator and a processor. The system can be extended to include an in-line accelerator and system of multi-level accelerators and a processor. The in-line accelerator receives the incoming data elements and begins processing. Upon premature termination of the execution, the in-line accelerator transfers the execution to a processor (or the next level accelerator in a multi-level acceleration system). Transfer of execution includes transferring of data and control. The processor (or the next accelerator) either continues the execution of the in-line accelerator from the bailout point or restarts the execution. The necessary data must be transferred to the processor (or the next accelerator) to complete the execution.
    Type: Application
    Filed: October 16, 2015
    Publication date: January 26, 2017
    Inventor: Maysam Lavasani
  • Publication number: 20170024352
    Abstract: A data processing system is disclosed that includes machines having an in-line accelerator and a general-purpose instruction-based processor. In one example, a machine comprises storage to store data and an Input/output (I/O) processing unit coupled to the storage. The I/O processing unit includes an in-line accelerator that is configured for in-line stream processing of distributed multi-stage dataflow-based computations. For a first stage of operations, the in-line accelerator is configured to read data from the storage, to perform computations on the data, and to shuffle a result of the computations to generate a first set of shuffled data. The in-line accelerator performs the first stage of operations with bufferless computations.
    Type: Application
    Filed: July 20, 2016
    Publication date: January 26, 2017
    Inventor: Maysam Lavasani
  • Publication number: 20170024167
    Abstract: A data processing system is disclosed that includes machines having an in-line accelerator and a general-purpose instruction-based processor. In one example, a machine comprises storage to store data and an Input/output (I/O) processing unit coupled to the storage. The I/O processing unit includes an in-line accelerator that is configured for in-line stream processing of distributed multi-stage dataflow-based computations. For a first stage of operations, the in-line accelerator is configured to read data from the storage, to perform computations on the data, and to shuffle a result of the computations to generate a first set of shuffled data. The in-line accelerator performs the first stage of operations with bufferless computations.
    Type: Application
    Filed: July 21, 2016
    Publication date: January 26, 2017
    Inventor: Maysam Lavasani