Patents by Inventor Maysam Lavasani
Maysam Lavasani has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Publication number: 20220283826
Abstract: For one embodiment of the present disclosure, an artificial intelligence (AI) system and method are disclosed for automatically generating browser actions using graph neural networks. A computer-implemented method includes receiving, with an AI agent, an input including a high-level natural language or text request or task, and, in response to the input, automatically obtaining, with the AI agent, an HTML graph for a web application associated with the input. The method further includes automatically obtaining an appropriate domain-specific semantic graph (DSG), based on a known set of DSGs, in response to obtaining the HTML graph for the web application, and automatically generating, with a graph neural network (GNN), a labeled HTML graph in response to providing the HTML graph and the appropriate DSG to the GNN.
Type: Application
Filed: March 8, 2022
Publication date: September 8, 2022
Inventor: Maysam Lavasani
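The pipeline described above (HTML page in, semantically labeled graph out) can be illustrated with a minimal sketch. This is not the patented method: a real system would label nodes with a trained graph neural network, whereas here a simple tag-to-label lookup against a toy DSG stands in for the GNN, and the "login form" domain and all names are illustrative assumptions.

```python
from html.parser import HTMLParser

VOID_TAGS = {"input", "br", "img", "hr", "meta", "link"}

class HtmlGraphBuilder(HTMLParser):
    """Parses raw HTML into a graph: nodes map id -> tag, edges are parent -> child."""
    def __init__(self):
        super().__init__()
        self.nodes = {}      # node_id -> tag
        self.edges = []      # (parent_id, child_id)
        self._stack = []
        self._next = 0

    def handle_starttag(self, tag, attrs):
        nid = self._next
        self._next += 1
        self.nodes[nid] = tag
        if self._stack:
            self.edges.append((self._stack[-1], nid))
        if tag not in VOID_TAGS:     # void elements cannot have children
            self._stack.append(nid)

    def handle_endtag(self, tag):
        if self._stack:
            self._stack.pop()

# A toy DSG for a hypothetical "login form" domain: HTML tag -> semantic label.
LOGIN_DSG = {"form": "login-form", "input": "credential-field",
             "button": "submit-action"}

def label_html_graph(html, dsg):
    """Stand-in for the GNN: assign each graph node a label from the DSG."""
    builder = HtmlGraphBuilder()
    builder.feed(html)
    return {nid: dsg.get(tag, "unlabeled") for nid, tag in builder.nodes.items()}

labels = label_html_graph("<form><input><button>Go</button></form>", LOGIN_DSG)
```

An agent could then map the labeled nodes (e.g. `credential-field`, `submit-action`) to concrete browser actions such as typing and clicking.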
-
Patent number: 11194625
Abstract: For one embodiment of the present invention, methods and systems are disclosed for accelerating data operations with efficient memory management in native code and native dynamic class-loading mechanisms. In one embodiment, a data processing system comprises memory and a processing unit coupled to the memory. The processing unit is configured to receive input data, to execute a domain-specific language (DSL) for a DSL operation with a native implementation, to translate a user-defined function (UDF) into the native implementation by translating user-defined managed software code into native software code, to execute the native software code in the native implementation, and to utilize a native memory management mechanism for the memory to manage object instances in the native implementation.
Type: Grant
Filed: December 3, 2019
Date of Patent: December 7, 2021
Assignee: BIGSTREAM SOLUTIONS, INC.
Inventors: Weiwei Chen, Behnam Robatmili, Maysam Lavasani, John David Davis
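The core idea (translate a managed UDF once into an executable form, then run the hot loop against the translated form) can be sketched conceptually. This is not the patented JVM-to-native translator; Python's `compile` merely plays the role of the one-time translation step, and the DSL operator name is an assumption.

```python
def translate_udf(expr):
    """One-time 'translation': compile a UDF expression over variable `x` up front."""
    code = compile(expr, "<udf>", "eval")
    def compiled_udf(x):
        # The hot path evaluates the precompiled code object only.
        return eval(code, {"__builtins__": {}}, {"x": x})
    return compiled_udf

def run_dsl_map(data, udf_expr):
    """A toy DSL `map` operator: translate once, then apply per row."""
    udf = translate_udf(udf_expr)        # translation happens outside the loop
    return [udf(x) for x in data]        # per-row work uses the compiled form

result = run_dsl_map([1, 2, 3], "x * x + 1")   # -> [2, 5, 10]
```

The patented design goes further by also managing object instances with a native memory mechanism rather than the managed runtime's garbage collector.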
-
Publication number: 20200311264
Abstract: A data processing system is disclosed that includes an input/output (I/O) interface to receive incoming data and an in-line accelerator coupled to the I/O interface. The in-line accelerator is configured to receive the incoming data from the I/O interface and to automatically remove all timing channels that potentially form through any shared resources. A generic technique of the present design avoids timing channels between different types of resources. A compiler is enabled to automatically apply this generic pattern to secure shared resources.
Type: Application
Filed: June 15, 2020
Publication date: October 1, 2020
Applicant: BigStream Solutions, Inc.
Inventors: Maysam Lavasani, Balavinayagam Samynathan
-
Publication number: 20200301898
Abstract: Methods and systems are disclosed for accelerating Big Data operations by utilizing subgraph templates for a hardware accelerator of a computational storage device. In one example, a computer-implemented method comprises performing a query with a dataflow compiler; performing a stage-acceleration analyzer function, including executing a matching algorithm to determine similarities between sub-graphs of an application program and unique templates from an available library of templates; and selecting at least one template that at least partially matches the sub-graphs, the selected template being associated with a linear set of operators to be executed sequentially within a stage of the Big Data operations.
Type: Application
Filed: June 10, 2020
Publication date: September 24, 2020
Applicant: BigStream Solutions, Inc.
Inventors: Balavinayagam Samynathan, Keith Chapman, Mehdi Nik, Behnam Robatmili, Shahrzad Mirkhani, Maysam Lavasani, John David Davis, Danesh Tavana, Weiwei Chen
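The template-matching step can be sketched if a query stage is modeled as a linear chain of operators and each library template as the chain the accelerator can execute. The operator names, the library contents, and the contiguous-match similarity metric below are all illustrative assumptions, not the patented algorithm.

```python
def similarity(stage_ops, template_ops):
    """Fraction of the template matched as a contiguous run inside the stage."""
    s, t = list(stage_ops), list(template_ops)
    best = 0
    for i in range(len(s)):
        k = 0
        while k < len(t) and i + k < len(s) and s[i + k] == t[k]:
            k += 1
        best = max(best, k)
    return best / len(t)

def select_template(stage_ops, library):
    """Pick the library template that best (at least partially) matches the stage."""
    score, name = max((similarity(stage_ops, ops), name)
                      for name, ops in library.items())
    return name if score > 0 else None

# Hypothetical template library: each entry is a linear set of operators.
LIBRARY = {
    "filter-project": ["filter", "project"],
    "scan-filter-aggregate": ["scan", "filter", "aggregate"],
}

stage = ["scan", "filter", "aggregate", "sort"]
chosen = select_template(stage, LIBRARY)   # -> "scan-filter-aggregate"
```

The matched prefix of the stage would then be offloaded to the accelerator template, with the unmatched remainder (here, `sort`) left to software.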
-
Patent number: 10691797
Abstract: A data processing system is disclosed that includes an input/output (I/O) interface to receive incoming data and an in-line accelerator coupled to the I/O interface. The in-line accelerator is configured to receive the incoming data from the I/O interface and to automatically remove all timing channels that potentially form through any shared resources. A generic technique of the present design avoids timing channels between different types of resources. A compiler is enabled to automatically apply this generic pattern to secure shared resources.
Type: Grant
Filed: April 21, 2017
Date of Patent: June 23, 2020
Assignee: Big Stream Solutions, Inc.
Inventors: Maysam Lavasani, Balavinayagam Samynathan
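One standard way a timing channel through a shared resource is closed is by padding every access to a fixed worst-case latency, so a co-tenant cannot infer activity from response-time variation. The simulation below illustrates only that general idea, not the patent's compiler-applied pattern; the cycle counts and names are assumptions.

```python
WORST_CASE_CYCLES = 8   # assumed worst-case latency of the shared resource

def access_shared(resource, key, cycles_used):
    """Serve a lookup on a shared resource, then pad to constant latency.

    Returns (value, observable_latency); the latency is always the same
    regardless of hit/miss, so timing reveals nothing about the access.
    """
    value = resource.get(key)
    padding = max(0, WORST_CASE_CYCLES - cycles_used)
    return value, cycles_used + padding

shared_cache = {"a": 1}
hit = access_shared(shared_cache, "a", cycles_used=2)    # fast path, padded
miss = access_shared(shared_cache, "b", cycles_used=8)   # worst-case path
# hit[1] == miss[1]: both accesses take the same observable time
```

The trade-off is that every request pays worst-case latency; the patent's framing is that a compiler applies such a pattern automatically across different resource types.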
-
Publication number: 20200183749
Abstract: For one embodiment of the present invention, methods and systems are disclosed for accelerating data operations with efficient memory management in native code and native dynamic class-loading mechanisms. In one embodiment, a data processing system comprises memory and a processing unit coupled to the memory. The processing unit is configured to receive input data, to execute a domain-specific language (DSL) for a DSL operation with a native implementation, to translate a user-defined function (UDF) into the native implementation by translating user-defined managed software code into native software code, to execute the native software code in the native implementation, and to utilize a native memory management mechanism for the memory to manage object instances in the native implementation.
Type: Application
Filed: December 3, 2019
Publication date: June 11, 2020
Applicant: BigStream Solutions, Inc.
Inventors: Weiwei Chen, Behnam Robatmili, Maysam Lavasani, John David Davis
-
Publication number: 20200081841
Abstract: Methods and systems are disclosed for a cache architecture for accelerating operations of a column-oriented database management system. In one example, a hardware accelerator for data stored in columnar storage format comprises at least one decoder to generate decoded data and a cache controller coupled to the at least one decoder. The cache controller comprises a store unit to store data in columnar format, cache-admission-policy hardware for admitting data into the store unit, including a next address while the current address is being processed, and a prefetch unit for prefetching data from memory when a cache miss occurs.
Type: Application
Filed: September 6, 2019
Publication date: March 12, 2020
Applicant: Bigstream Solutions, Inc.
Inventors: Balavinayagam Samynathan, John David Davis, Peter Robert Matheu, Christopher Ryan Both, Maysam Lavasani
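The admit-next-while-processing-current behavior exploits the sequential access pattern of columnar scans. A minimal software model of that policy (not the patented hardware; addresses, page contents, and the next-address heuristic are assumptions):

```python
class ColumnCache:
    """Toy model of a cache controller for sequentially scanned column pages."""

    def __init__(self, memory):
        self.memory = memory   # backing store: address -> column page
        self.store = {}        # the cached pages (the "store unit")
        self.misses = 0

    def read(self, addr):
        if addr not in self.store:
            self.misses += 1
            self.store[addr] = self.memory[addr]   # demand fetch on miss
        # Admission policy: while addr is being processed, also admit the
        # next sequential address, since columnar scans read pages in order.
        nxt = addr + 1
        if nxt in self.memory and nxt not in self.store:
            self.store[nxt] = self.memory[nxt]     # prefetch next page
        return self.store[addr]

mem = {0: "page0", 1: "page1", 2: "page2"}
cache = ColumnCache(mem)
cache.read(0)   # miss on page 0; page 1 is admitted alongside it
cache.read(1)   # hit, thanks to the next-address admission
```

For a pure sequential scan this policy turns every access after the first into a hit, which is the pattern a column-oriented DBMS generates.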
-
Publication number: 20190392002
Abstract: Methods and systems are disclosed for accelerating big data operations by utilizing subgraph templates. In one example, a data processing system comprises a hardware processor and a hardware accelerator coupled to the hardware processor. The hardware accelerator is configured, with a compiler of an accelerator functionality, to generate an execution plan, to generate computations for nodes, including subgraphs, in a distributed system for an application program based on the execution plan, and to execute a matching algorithm to determine similarities between the subgraphs and unique templates from an available library of templates.
Type: Application
Filed: June 25, 2019
Publication date: December 26, 2019
Applicant: BigStream Solutions, Inc.
Inventors: Maysam Lavasani, John David Davis, Danesh Tavana, Weiwei Chen, Balavinayagam Samynathan, Behnam Robatmili
-
Patent number: 10089259
Abstract: A data processing system is disclosed that includes an in-line accelerator and a processor. The system can be extended to include a system of multi-level accelerators. The in-line accelerator receives the incoming data elements and begins processing. Upon premature termination of the execution, the in-line accelerator transfers the execution to a processor (or to the next-level accelerator in a multi-level acceleration system). Transfer of execution includes transferring both data and control. The processor (or the next accelerator) either continues the execution of the in-line accelerator from the bailout point or restarts the execution. The necessary data must be transferred to the processor (or the next accelerator) to complete the execution.
Type: Grant
Filed: October 16, 2015
Date of Patent: October 2, 2018
Assignee: Bigstream Solutions, Inc.
Inventor: Maysam Lavasani
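The bailout hand-off described above can be sketched in software: the accelerator handles the common case, and on an element it cannot process it transfers both data (the partial state) and control to the processor, which continues from that point rather than restarting. The workload and all names below are illustrative assumptions.

```python
class Bailout(Exception):
    """Carries the transfer of execution: where we stopped, and partial state."""
    def __init__(self, index, partial):
        self.index, self.partial = index, partial

def accelerator_sum(items):
    """Fast path: handles only ints; bails out on anything else."""
    total = 0
    for i, x in enumerate(items):
        if not isinstance(x, int):
            raise Bailout(i, total)   # transfer data + control upward
        total += x
    return total

def process(items):
    """Processor (or next-level accelerator): continues from the bailout point."""
    try:
        return accelerator_sum(items)
    except Bailout as b:
        total = b.partial             # resume, don't restart
        for x in items[b.index:]:
            total += int(x)           # slow path handles the general case
        return total

process([1, 2, "3", 4])   # accelerator bails at "3"; processor finishes -> 10
```

The patent also allows the restart alternative: discard the partial state and re-execute the whole element stream on the processor.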
-
Patent number: 9953003
Abstract: A data processing system is disclosed that includes machines having an in-line accelerator and a general-purpose instruction-based processor. In one example, a machine comprises storage to store data and an input/output (I/O) processing unit coupled to the storage. The I/O processing unit includes an in-line accelerator that is configured for in-line stream processing of distributed multi-stage dataflow-based computations. For a first stage of operations, the in-line accelerator is configured to read data from the storage, to perform computations on the data, and to shuffle a result of the computations to generate a first set of shuffled data. The in-line accelerator performs the first stage of operations with bufferless computations.
Type: Grant
Filed: July 21, 2016
Date of Patent: April 24, 2018
Assignee: BigStream Solutions, Inc.
Inventor: Maysam Lavasani
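The first-stage read/compute/shuffle flow can be sketched as a streaming pipeline: each record is read, transformed, and routed to a shuffle partition one at a time, with no intermediate materialization of the full dataset. The partition count, the doubling computation, and the hash partitioner below are assumptions, not the patented hardware design.

```python
PARTITIONS = 2   # assumed number of downstream (second-stage) consumers

def stage_one(records):
    """Read -> compute -> shuffle, one record at a time (no staging buffer)."""
    bins = {p: [] for p in range(PARTITIONS)}
    for key, value in records:            # streamed from storage
        result = (key, value * 2)         # the per-record computation
        bins[hash(key) % PARTITIONS].append(result)   # shuffle by key
    return bins

shuffled = stage_one(iter([("a", 1), ("b", 2), ("a", 3)]))
```

Because records flow straight from the read into the shuffle partitions, all results for a given key land in one partition, which is what the next dataflow stage needs.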
-
Publication number: 20180069925
Abstract: A system is disclosed that includes machines for performing big data applications. In one example, a centralized system for big data services comprises storage to store data for big data services and a plurality of servers coupled to the storage. The plurality of servers perform at least one of the ingest, transform, and serve stages of data. A sub-system has an auto-transfer feature to perform program analysis on computations of the data and to automatically detect computations to be transferred from within the centralized system to at least one distributed node that includes at least one of messaging systems and data collection systems.
Type: Application
Filed: September 8, 2017
Publication date: March 8, 2018
Applicant: BigStream Solutions, Inc.
Inventor: Maysam Lavasani
-
Publication number: 20180068004
Abstract: A system is disclosed that includes machines, distributed nodes, event producers, and edge devices for performing big data applications. In one example, a centralized system for big data services comprises storage to store data for big data services and a plurality of servers coupled to the storage. The plurality of servers perform at least one of the ingest, transform, and serve stages of data. A sub-system having an auto-transfer feature performs program analysis on computations of the data and automatically detects computations to be transferred from within the centralized system to at least one of an event producer and an edge device.
Type: Application
Filed: September 8, 2017
Publication date: March 8, 2018
Applicant: BigStream Solutions, Inc.
Inventor: Maysam Lavasani
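One plausible shape for such a program analysis (purely an illustrative assumption, since the abstract does not specify the criterion) is a data-availability check: a computation can be pushed to an event producer or edge device only if every input it reads already exists there, rather than in centralized state.

```python
# Hypothetical catalog of pipeline computations and the fields each reads.
COMPUTATIONS = [
    {"name": "parse_event",  "reads": ["raw"],                       "stage": "ingest"},
    {"name": "join_profile", "reads": ["user_id", "profiles_table"], "stage": "transform"},
]

# Fields assumed to be available at the edge device / event producer.
EDGE_LOCAL_FIELDS = {"raw", "user_id"}

def transferable(computations, edge_fields=EDGE_LOCAL_FIELDS):
    """Select computations whose inputs all exist outside the centralized system."""
    return [c["name"] for c in computations
            if set(c["reads"]) <= set(edge_fields)]

transferable(COMPUTATIONS)   # parse_event qualifies; join_profile needs central state
```

Under this criterion, `parse_event` moves to the edge while `join_profile` stays centralized because it depends on `profiles_table`, which only the central storage holds.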
-
Publication number: 20170308697
Abstract: A data processing system is disclosed that includes an input/output (I/O) interface to receive incoming data and an in-line accelerator coupled to the I/O interface. The in-line accelerator is configured to receive the incoming data from the I/O interface and to automatically remove all timing channels that potentially form through any shared resources. A generic technique of the present design avoids timing channels between different types of resources. A compiler is enabled to automatically apply this generic pattern to secure shared resources.
Type: Application
Filed: April 21, 2017
Publication date: October 26, 2017
Applicant: BigStream Solutions, Inc.
Inventor: Maysam Lavasani
-
Patent number: 9715475
Abstract: A data processing system is disclosed that includes machines having an in-line accelerator and a general-purpose instruction-based processor. In one example, a machine comprises storage to store data and an input/output (I/O) processing unit coupled to the storage. The I/O processing unit includes an in-line accelerator that is configured for in-line stream processing of distributed multi-stage dataflow-based computations. For a first stage of operations, the in-line accelerator is configured to read data from the storage, to perform computations on the data, and to shuffle a result of the computations to generate a first set of shuffled data. The in-line accelerator performs the first stage of operations with bufferless computations.
Type: Grant
Filed: July 20, 2016
Date of Patent: July 25, 2017
Assignee: BIGSTREAM SOLUTIONS, INC.
Inventor: Maysam Lavasani
-
Publication number: 20170024338
Abstract: A data processing system is disclosed that includes an in-line accelerator and a processor. The system can be extended to include a system of multi-level accelerators. The in-line accelerator receives the incoming data elements and begins processing. Upon premature termination of the execution, the in-line accelerator transfers the execution to a processor (or to the next-level accelerator in a multi-level acceleration system). Transfer of execution includes transferring both data and control. The processor (or the next accelerator) either continues the execution of the in-line accelerator from the bailout point or restarts the execution. The necessary data must be transferred to the processor (or the next accelerator) to complete the execution.
Type: Application
Filed: October 16, 2015
Publication date: January 26, 2017
Inventor: Maysam Lavasani
-
Publication number: 20170024352
Abstract: A data processing system is disclosed that includes machines having an in-line accelerator and a general-purpose instruction-based processor. In one example, a machine comprises storage to store data and an input/output (I/O) processing unit coupled to the storage. The I/O processing unit includes an in-line accelerator that is configured for in-line stream processing of distributed multi-stage dataflow-based computations. For a first stage of operations, the in-line accelerator is configured to read data from the storage, to perform computations on the data, and to shuffle a result of the computations to generate a first set of shuffled data. The in-line accelerator performs the first stage of operations with bufferless computations.
Type: Application
Filed: July 20, 2016
Publication date: January 26, 2017
Inventor: Maysam Lavasani
-
Publication number: 20170024167
Abstract: A data processing system is disclosed that includes machines having an in-line accelerator and a general-purpose instruction-based processor. In one example, a machine comprises storage to store data and an input/output (I/O) processing unit coupled to the storage. The I/O processing unit includes an in-line accelerator that is configured for in-line stream processing of distributed multi-stage dataflow-based computations. For a first stage of operations, the in-line accelerator is configured to read data from the storage, to perform computations on the data, and to shuffle a result of the computations to generate a first set of shuffled data. The in-line accelerator performs the first stage of operations with bufferless computations.
Type: Application
Filed: July 21, 2016
Publication date: January 26, 2017
Inventor: Maysam Lavasani