Patents by Inventor Ashok Anand

Ashok Anand has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11487668
    Abstract: Described are methods and systems for improved cardinality estimation. A method may include obtaining a data-query, obtaining a row, generating a hash value, determining a cardinality of leading zeros in the hash value, identifying a bucket with respect to the hash value, including a bucket identifier and the cardinality of leading zeros in a representation, determining the approximate unique count, and outputting the approximate unique count as results data responsive to the portion of the data-query.
    Type: Grant
    Filed: April 6, 2021
    Date of Patent: November 1, 2022
    Assignee: ThoughtSpot, Inc.
    Inventors: Ashok Anand, Bhanu Prakash, Tushar Marda
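The cardinality-estimation steps in the abstract above (hash each row, count leading zeros, keep a per-bucket maximum, combine into an approximate unique count) resemble a HyperLogLog-style sketch. The following is a minimal Python illustration of that general technique, not the patent's implementation; the hash choice, 16-bucket configuration, and bias constant are standard HyperLogLog conventions, not details from the patent.

```python
import hashlib

def leading_zeros(value: int, bits: int) -> int:
    """Count leading zero bits in a bits-wide integer."""
    count = 0
    for i in range(bits - 1, -1, -1):
        if value & (1 << i):
            break
        count += 1
    return count

def approx_unique_count(rows, bucket_bits: int = 4) -> float:
    """HyperLogLog-style estimate: hash each row, pick a bucket from the
    hash's high bits, and keep the longest leading-zero run per bucket."""
    m = 1 << bucket_bits                       # number of buckets (16 here)
    buckets = [0] * m
    for row in rows:
        h = int.from_bytes(hashlib.sha1(str(row).encode()).digest()[:8], "big")
        bucket = h >> (64 - bucket_bits)       # bucket identifier
        rest = h & ((1 << (64 - bucket_bits)) - 1)
        zeros = leading_zeros(rest, 64 - bucket_bits)
        buckets[bucket] = max(buckets[bucket], zeros + 1)
    # Harmonic-mean combination with the standard HLL bias constant for m=16.
    alpha = 0.673 if m == 16 else 0.7213 / (1 + 1.079 / m)
    return alpha * m * m / sum(2.0 ** -b for b in buckets)
```

With 16 buckets the standard error is roughly 26%, so the estimate for 1,000 distinct rows lands near 1,000 but not exactly on it.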
  • Publication number: 20220327127
    Abstract: Injecting override instructions associated with query execution instructions performed on a distributed database includes receiving a data-query; generating, by a first database instance, a query plan that includes a first query execution instruction for transmission to a second database instance; transmitting, by the first database instance, a request for partial results to the second database instance, where the request includes the first query execution instruction and an indication of override instructions corresponding to the first query execution instruction; responsive to a determination that the request includes the indication, including, by the second database instance, the override instructions in a set of high-level language query instructions; obtaining, by the second database instance, a machine language query based on the set; executing, at the second database instance, the machine language query to obtain the partial results; and transmitting, by the second database instance, the partial results.
    Type: Application
    Filed: April 9, 2021
    Publication date: October 13, 2022
    Inventors: Ashok Anand, Bhanu Prakash, Amit Prakash, Sanjay Agrawal
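The override-injection flow above can be sketched as a tiny function on the receiving (second) instance: if the request carries the override indication, splice the overrides into the high-level query instructions before code generation. The function name, the string-based query representation, and the `";".join` stand-in for codegen are all illustrative, not from the patent.

```python
def handle_partial_request(instruction: str, overrides=None) -> str:
    """Second-instance handling of a partial-results request: when override
    instructions accompany the query execution instruction, include them in
    the set of high-level language query instructions before obtaining the
    machine language query (joining strings stands in for codegen here)."""
    high_level = [instruction]
    if overrides:                              # request carries the indication
        high_level = list(overrides) + high_level
    machine_query = ";".join(high_level)       # toy stand-in for compilation
    return machine_query
```

For example, a coordinator-supplied `SET limit=10` override would be prepended to the shipped `SCAN t` instruction before compilation.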
  • Publication number: 20220318147
    Abstract: Described are methods and systems for improved cardinality estimation. A method may include obtaining a data-query, obtaining a row, generating a hash value, determining a cardinality of leading zeros in the hash value, identifying a bucket with respect to the hash value, including a bucket identifier and the cardinality of leading zeros in a representation, determining the approximate unique count, and outputting the approximate unique count as results data responsive to the portion of the data-query.
    Type: Application
    Filed: April 6, 2021
    Publication date: October 6, 2022
    Inventors: Ashok Anand, Bhanu Prakash, Tushar Marda
  • Publication number: 20220309067
    Abstract: Querying a distributed database including a table sharded into shards distributed to database instances includes receiving a data-query that includes an aggregation clause on a first column and a grouping clause on a second column; obtaining and outputting results data. Obtaining the results data includes receiving, by a query coordinator, intermediate results data; and combining, by the query coordinator, the intermediate results to obtain the results data.
    Type: Application
    Filed: March 26, 2021
    Publication date: September 29, 2022
    Inventors: Ashok Anand, Ambareesh Sreekumaran Nair Jayakumari, Prateek Gaur, Donko Donjerkovic
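The coordinator-side combine step above can be illustrated for a `SUM(col1) GROUP BY col2` query: each shard returns intermediate results keyed by group, and the query coordinator merges them. The dict-of-partial-sums representation is an assumption for illustration, not the patent's wire format.

```python
from collections import defaultdict

def combine_partials(shard_partials):
    """Coordinator-side merge of per-shard intermediate results for an
    aggregation clause on one column grouped by another: each shard sends
    {group_key: partial_sum}, and the coordinator adds them per group."""
    results = defaultdict(int)
    for partial in shard_partials:
        for group_key, partial_sum in partial.items():
            results[group_key] += partial_sum
    return dict(results)
```

Two shards reporting `{"x": 10, "y": 2}` and `{"x": 5}` combine to `{"x": 15, "y": 2}`.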
  • Patent number: 11429607
    Abstract: Data-query execution with distributed machine-language query management in a low-latency database analysis system may include obtaining, at a distributed in-memory database, a data-query expressing a request for data in a defined structured query language associated with the distributed in-memory database, automatically generating a high-level language query representing at least a portion of the data-query, obtaining a machine language query corresponding to the high-level language query, executing the machine language query to obtain results data, and outputting the results data. Obtaining the machine language query may include determining whether the machine language query is cached, and in response to a determination that the machine language query is unavailable, sending a request for the machine language query to a distributed machine-language-query management instance.
    Type: Grant
    Filed: September 18, 2020
    Date of Patent: August 30, 2022
    Assignee: ThoughtSpot, Inc.
    Inventors: Ashok Anand, Satyam Shekhar, Prateek Gaur, Amit Prakash
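The cache-check-then-request pattern described above (determine whether the machine language query is cached; if unavailable, request it from the distributed management instance) can be sketched as follows. The class name and the `compile_remote` callable standing in for the management instance are hypothetical.

```python
class MachineQueryCache:
    """Local cache of compiled queries; on a miss, request compilation from
    a stand-in for the distributed machine-language-query management
    instance (any callable taking the high-level query string)."""

    def __init__(self, compile_remote):
        self._cache = {}
        self._compile_remote = compile_remote

    def get(self, high_level_query: str):
        # Determine whether the machine language query is cached ...
        if high_level_query not in self._cache:
            # ... and if unavailable, send a request for it.
            self._cache[high_level_query] = self._compile_remote(high_level_query)
        return self._cache[high_level_query]
```

Repeated executions of the same high-level query then avoid recompilation, which is the latency win the abstract describes.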
  • Publication number: 20210109912
    Abstract: Systems and methods for multi-layered key-value storage are described. For example, methods may include receiving two or more put requests that each include a respective primary key and a corresponding respective value; storing the two or more put requests in a buffer in a first datastore; determining whether the buffer is storing put requests that collectively exceed a threshold; responsive to the determination that the threshold has been exceeded, transmitting a write request to a second datastore, including a subsidiary key and a corresponding data file that includes the respective values of the two or more put requests at respective offsets in the data file; for the two or more put requests, storing respective entries in an index in the first datastore that associate the respective primary keys with the subsidiary key and the respective offsets; and deleting the two or more put requests from the buffer.
    Type: Application
    Filed: October 9, 2020
    Publication date: April 15, 2021
    Inventors: Samprit Biswas, Satyam Shekhar, Ashok Anand, Bhanu Prakash
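The multi-layered key-value flow above (buffer puts in a first datastore, flush past a threshold into one data file in a second datastore under a subsidiary key, and index primary keys to offsets) can be sketched in-memory. Both datastores are plain Python objects here; the threshold and key formats are illustrative.

```python
class BufferedStore:
    """Buffer puts in a fast first datastore; once the buffer holds enough
    puts, pack their values into one data file written to a second datastore
    under a subsidiary key, and index (primary key -> subsidiary key, offset)."""

    def __init__(self, threshold: int = 3):
        self.threshold = threshold
        self.buffer = []          # pending (primary_key, value) puts
        self.blob_store = {}      # second datastore: subsidiary_key -> bytes
        self.index = {}           # primary_key -> (subsidiary_key, offset, length)
        self._next_blob = 0

    def put(self, key, value: bytes):
        self.buffer.append((key, value))
        if len(self.buffer) >= self.threshold:
            self._flush()

    def _flush(self):
        sub_key = f"blob-{self._next_blob}"
        self._next_blob += 1
        data, offset = bytearray(), 0
        for key, value in self.buffer:
            self.index[key] = (sub_key, offset, len(value))
            data.extend(value)
            offset += len(value)
        self.blob_store[sub_key] = bytes(data)
        self.buffer.clear()       # delete the flushed puts from the buffer

    def get(self, key) -> bytes:
        sub_key, off, length = self.index[key]
        return self.blob_store[sub_key][off:off + length]
```

The payoff is that many small puts turn into one large sequential write to the slower second datastore, while the index preserves per-key lookup.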
  • Publication number: 20210089530
    Abstract: Data-query execution with distributed machine-language query management in a low-latency database analysis system may include obtaining, at a distributed in-memory database, a data-query expressing a request for data in a defined structured query language associated with the distributed in-memory database, automatically generating a high-level language query representing at least a portion of the data-query, obtaining a machine language query corresponding to the high-level language query, executing the machine language query to obtain results data, and outputting the results data. Obtaining the machine language query may include determining whether the machine language query is cached, and in response to a determination that the machine language query is unavailable, sending a request for the machine language query to a distributed machine-language-query management instance.
    Type: Application
    Filed: September 18, 2020
    Publication date: March 25, 2021
    Inventors: Ashok Anand, Satyam Shekhar, Prateek Gaur, Amit Prakash
  • Patent number: 10931731
    Abstract: A method of speeding up the delivery of a dynamic webpage is disclosed. A plurality of responses to a plurality of requests for a webpage is received, the webpage including dynamic components. The plurality of responses is compared to identify common and dynamic components across the plurality of requests. A cached stub is dynamically adapted based on the comparison within a learning window, the cached stub including the identified common components and placeholders for portions of the identified dynamic components. The dynamically adapted cached stub is sent in response to at least some of the plurality of requests for the webpage. A frequency of reloading of the webpage is monitored, wherein a reloading of the webpage is triggered by a detection of the dynamically adapted cached stub having one of a plurality of types of error. The learning window is adjusted based on a frequency of reloading of the webpage.
    Type: Grant
    Filed: August 7, 2019
    Date of Patent: February 23, 2021
    Assignee: Akamai Technologies, Inc.
    Inventors: Hariharan Kolam, Sharad Jaiswal, Mohammad H. Reshadi, Ashok Anand
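The stub-learning comparison above (identify components common across responses, replace the varying ones with placeholders) can be sketched with a toy component format. Treating a response as `|`-separated components and using a `{{dynamic}}` placeholder are assumptions for illustration only.

```python
def build_stub(responses, placeholder: str = "{{dynamic}}") -> str:
    """Compare several responses for the same webpage, keep components that
    are common to all of them, and substitute a placeholder for components
    that differ (the dynamic components)."""
    parts = [r.split("|") for r in responses]   # '|'-separated components (toy)
    stub = []
    for segment in zip(*parts):
        if all(s == segment[0] for s in segment):
            stub.append(segment[0])             # common component: keep
        else:
            stub.append(placeholder)            # dynamic component: placeholder
    return "|".join(stub)
```

Serving the stub immediately and filling placeholders later is what lets the cached response be reused across users whose pages differ only in the dynamic parts.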
  • Patent number: 10887419
    Abstract: Processing a purge request is disclosed. In an embodiment, the purge request is received from a node, where the purge request is for a next purge instruction and the node has an associated queue of purge instruction(s) with associated timestamps. In response to receiving the purge request, providing an unprocessed purge instruction having a timestamp before a threshold time. After processing the purge instruction having a timestamp before the threshold time, processing the remaining purge instructions as follows: indicating an availability state of the node as transitional, storing a current time value as a reference time value, processing in chronological order those remaining purge instructions in the queue with a time value chronologically before the reference time value, and indicating an availability state of the node as available.
    Type: Grant
    Filed: December 9, 2016
    Date of Patent: January 5, 2021
    Assignee: Akamai Technologies, Inc.
    Inventors: Ashok Anand, Manjunath Bharadwaj Subramanya
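The two-phase drain described above can be sketched directly: process instructions stamped before the threshold time, mark the node transitional, capture a reference time, process instructions stamped before it in chronological order, then mark the node available. The tuple-based queue and return values are illustrative, not the patent's data structures.

```python
def drain_purge_queue(queue, threshold_time, current_time):
    """Drain a node's queue of (timestamp, instruction) purge entries in two
    phases, returning (processed instructions in order, final node state)."""
    processed = []
    remaining = sorted(queue)                 # chronological order
    # Phase 1: instructions stamped before the threshold time.
    for ts, instr in list(remaining):
        if ts < threshold_time:
            processed.append(instr)
            remaining.remove((ts, instr))
    state = "transitional"                    # node advertises transitional state
    reference_time = current_time             # store current time as reference
    # Phase 2: instructions stamped before the reference time.
    for ts, instr in list(remaining):
        if ts < reference_time:
            processed.append(instr)
            remaining.remove((ts, instr))
    state = "available"
    return processed, state
```

Instructions stamped after the reference time stay queued for a later pass, so the node can become available without waiting on purges that arrived during recovery.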
  • Patent number: 10579967
    Abstract: A system and method for creating, executing, and managing processes of cross-enterprise businesses using a nano server architecture is disclosed herein. A process store tool (e.g., a graphical interface visual tool) at the end user (such as a business entity or an individual process developer) provides an open, flexible workflow engine for supporting the creation and enforcement of at least one business process with respect to the end user. A cluster having at least one nano server (also referred to as a ‘lean server’) is configured within a data centre for storing, executing, and managing processes with respect to the end user within the cloud environment. The nano servers of the cluster are micro app servers with a small memory footprint consuming minimal resources. The nano servers are multi-threaded processes that house the services consumed by the end user.
    Type: Grant
    Filed: November 14, 2014
    Date of Patent: March 3, 2020
    Inventor: P. Ashok Anand
  • Publication number: 20190364090
    Abstract: A method of speeding up the delivery of a dynamic webpage is disclosed. A plurality of responses to a plurality of requests for a webpage is received, the webpage including dynamic components. The plurality of responses is compared to identify common and dynamic components across the plurality of requests. A cached stub is dynamically adapted based on the comparison within a learning window, the cached stub including the identified common components and placeholders for portions of the identified dynamic components. The dynamically adapted cached stub is sent in response to at least some of the plurality of requests for the webpage. A frequency of reloading of the webpage is monitored, wherein a reloading of the webpage is triggered by a detection of the dynamically adapted cached stub having one of a plurality of types of error. The learning window is adjusted based on a frequency of reloading of the webpage.
    Type: Application
    Filed: August 7, 2019
    Publication date: November 28, 2019
    Inventors: Hariharan Kolam, Sharad Jaiswal, Mohammad H. Reshadi, Ashok Anand
  • Patent number: 10425464
    Abstract: A method of speeding up the delivery of a dynamic webpage is disclosed. A plurality of responses to a plurality of requests for a webpage is received, the webpage including dynamic components. The plurality of responses is compared to identify common and dynamic components across the plurality of requests. A cached stub is dynamically adapted based on the comparison within a learning window, the cached stub including the identified common components and placeholders for portions of the identified dynamic components. The dynamically adapted cached stub is sent in response to at least some of the plurality of requests for the webpage. A frequency of reloading of the webpage is monitored, wherein a reloading of the webpage is triggered by a detection of the dynamically adapted cached stub having one of a plurality of types of error. The learning window is adjusted based on a frequency of reloading of the webpage.
    Type: Grant
    Filed: December 23, 2015
    Date of Patent: September 24, 2019
    Assignee: Instart Logic, Inc.
    Inventors: Hariharan Kolam, Sharad Jaiswal, Mohammad H. Reshadi, Ashok Anand
  • Patent number: 10313473
    Abstract: A system for processing a purge request is disclosed. The purge request is received. An availability state for each content distribution node in a group of content distribution nodes is stored. Based on the purge request, one or more purge instructions are generated for one or more available state content distribution nodes of the group. Based on the purge request, one or more delayed purge instructions are queued for one or more unavailable state content distribution nodes of the group. It is determined that the one or more available state content distribution nodes of the group have completed processing the one or more purge instructions generated for the one or more available state content distribution nodes. Based at least in part on the queuing of the one or more delayed purge instructions for the one or more unavailable state nodes, it is confirmed that the purge request has been completed.
    Type: Grant
    Filed: February 27, 2015
    Date of Patent: June 4, 2019
    Assignee: Instart Logic, Inc.
    Inventors: Ashok Anand, Manjunath Bharadwaj Subramanya
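The availability-aware routing above can be sketched as one function: execute the purge on available nodes, queue a delayed purge instruction per unavailable node, and then confirm completion. The plain-dict node states and queues are stand-ins for illustration.

```python
def process_purge_request(purge, node_states, delayed_queues):
    """Route a purge request: available content distribution nodes process it
    now; a delayed purge instruction is queued for each unavailable node.
    Returns (nodes that processed the purge, completion confirmed)."""
    completed = []
    for node, state in node_states.items():
        if state == "available":
            completed.append(node)            # purge instruction processed
        else:
            delayed_queues.setdefault(node, []).append(purge)
    # Completion is confirmed once available nodes have finished and the
    # delayed instructions are safely queued for the unavailable ones.
    return completed, True
```

The key design point is that an unreachable node does not block confirmation: its queued instruction will be applied when it returns.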
  • Patent number: 10142257
    Abstract: Systems and methods for dynamic scaling of RE middleboxes in a communication network are described. According to the present subject matter, the method comprises determining a load of incoming data at an encoding middlebox in the communication network. Further, the method comprises modifying a number of encoder instances in the encoding middlebox and a number of decoder instances in a decoding middlebox based on the load of incoming data.
    Type: Grant
    Filed: March 27, 2014
    Date of Patent: November 27, 2018
    Assignee: Alcatel Lucent
    Inventors: Mansoor Alicherry, Ashok Anand, Shoban Preeth Chandrabose
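The scaling decision above (size the encoder and decoder instance counts from the incoming load) can be sketched with a proportional rule. The per-encoder capacity and the 1:1 encoder-to-decoder ratio are illustrative parameters, not values from the patent.

```python
import math

def scale_instances(load_mbps: float, capacity_per_encoder: float = 100,
                    decoder_ratio: int = 1):
    """Choose encoder/decoder instance counts for the incoming load:
    enough encoders to cover the load, and decoders matched to them."""
    encoders = max(1, math.ceil(load_mbps / capacity_per_encoder))
    decoders = max(1, encoders * decoder_ratio)   # keep decode capacity matched
    return encoders, decoders
```

Because encoding and decoding middleboxes sit at opposite ends of the path, scaling them together avoids one side becoming the bottleneck.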
  • Patent number: 10084863
    Abstract: An electronic switching system for generating a correlation identity (ID) with respect to a client in order to establish, integrate and communicate to a server (lean server or nano server) within a cloud environment (e.g. Inswit™ Cloud). A service location identification module identifies and generates a service location identity with respect to a remote client. A source ID generating module generates a correlation ID/source ID based on the service location identity in order to serialize the payload and establish a connection with the server. The electronic switching system proposed herein operates external to the cloud environment by effectively generating the correlation identity with respect to a client device accessing the server in a cloud environment. The system also switches, integrates and executes client communications to an appropriate server in the cloud environment using the correlation ID.
    Type: Grant
    Filed: February 25, 2014
    Date of Patent: September 25, 2018
    Inventor: P. Ashok Anand
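One plausible reading of the ID derivation above is a deterministic function of the service location identity (the service node chosen for the client's request) plus the client identity. The sketch below is purely illustrative: the name, hashing scheme, and ID format are assumptions, not the patent's construction.

```python
import hashlib

def make_correlation_id(service_node: str, client_id: str) -> str:
    """Derive a correlation/source ID from a service location identity
    (service node + client), giving a stable ID that routes and
    authenticates communications for that client (format illustrative)."""
    location_identity = f"{service_node}/{client_id}"
    digest = hashlib.sha256(location_identity.encode()).hexdigest()[:16]
    return f"{service_node}-{digest}"
```

A deterministic derivation means both the switching layer and the server can recompute and verify the ID without shared mutable state.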
  • Patent number: 9934059
    Abstract: Methods and systems for flow migration between virtual network appliances (VNAs) in a cloud computing network are described. A network appliances managing architecture for migrating flows between VNAs includes a controller to receive performance data for a VNA and analyze the performance data to determine whether the VNA has a weak performance status, where the weak performance status corresponds to any one of an overloaded, an under-loaded, and a failed status. The architecture further includes a classifier to receive a flow migration request from the controller for migrating one or more flows of data packets from the VNA based on the analysis. The classifier further identifies an active VNA for flow migration based on a mapping policy and migrates the one or more flows from the VNA to the at least one active VNA.
    Type: Grant
    Filed: March 27, 2014
    Date of Patent: April 3, 2018
    Assignee: WSOU Investments, LLC
    Inventors: Mansoor Alicherry, Ashok Anand, Shoban Preeth Chandrabose
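The classifier's remapping step above can be sketched with a simple least-flows mapping policy: flows on VNAs with a weak status are reassigned to the active VNA currently carrying the fewest flows. The status labels and the specific policy are illustrative stand-ins.

```python
def migrate_flows(mapping, vna_status):
    """Remap flows away from weak VNAs (anything not 'active': overloaded,
    under-loaded, or failed) to an active VNA chosen by a least-flows
    policy. mapping: flow -> VNA; vna_status: VNA -> status string."""
    active = [v for v, s in vna_status.items() if s == "active"]
    new_mapping = {}
    for flow, vna in mapping.items():
        if vna_status[vna] == "active":
            new_mapping[flow] = vna           # healthy VNA keeps its flow
        else:
            # Mapping policy: active VNA with the fewest flows assigned so far.
            target = min(active, key=lambda v: sum(
                1 for t in new_mapping.values() if t == v))
            new_mapping[flow] = target
    return new_mapping
```

Migrating at flow granularity (rather than restarting the appliance) is what keeps in-progress connections alive during rebalancing.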
  • Patent number: 9912757
    Abstract: This invention relates to a method for generating a correlation identity with respect to a client to establish, integrate and communicate to a server within a cloud environment (e.g. Inswit™ Cloud). A service location identity can be generated with respect to a remote client by identifying at least one service node for the service request made by the client device within the cluster of the cloud environment. A correlation ID/source ID can thereafter be generated based on the service location identity to serialize the payload and establish a connection with the server. The integration services with respect to the client device can be instantiated to permit authenticated information flow within the cloud network. The messages, including the information on the destination end points, are finally sent from the source end points to the destination end points, with the client devices efficiently authenticated using the correlation ID.
    Type: Grant
    Filed: February 25, 2014
    Date of Patent: March 6, 2018
    Inventor: P. Ashok Anand
  • Publication number: 20170221000
    Abstract: A system and method for creating, executing, and managing processes of cross-enterprise businesses using a nano server architecture is disclosed herein. A process store tool (e.g., a graphical interface visual tool) at the end user (such as a business entity or an individual process developer) provides an open, flexible workflow engine for supporting the creation and enforcement of at least one business process with respect to the end user. A cluster having at least one nano server (also referred to as a ‘lean server’) is configured within a data centre for storing, executing, and managing processes with respect to the end user within the cloud environment. The nano servers of the cluster are micro app servers with a small memory footprint consuming minimal resources. The nano servers are multi-threaded processes that house the services consumed by the end user.
    Type: Application
    Filed: November 14, 2014
    Publication date: August 3, 2017
    Inventor: P. Ashok Anand
  • Patent number: 9612955
    Abstract: Aspects of the present invention provide high-performance indexing for data-intensive systems in which “slicing” is used to organize indexing data on an SSD such that related entries are located together. Slicing enables combining multiple reads into a single “slice read” of related items, offering high read performance. Small in-memory indexes, such as hash tables, bloom filters or LSH tables, may be used as buffers for insert operations to resolve slow random writes on the SSD. When full, these buffers are written to the SSD. The internal architecture of the SSD may also be leveraged to achieve higher performance via parallelism. Such parallelism may occur at the channel-level, the package-level, the die-level and/or the plane-level. Consequently, memory and compute resources are freed for use by higher layer applications, and better performance may be achieved.
    Type: Grant
    Filed: January 9, 2013
    Date of Patent: April 4, 2017
    Assignee: Wisconsin Alumni Research Foundation
    Inventors: Srinivasa Akella, Ashok Anand, Aaron Gember
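The slicing idea above (co-locate related index entries so one "slice read" fetches them together, and buffer inserts in memory to avoid slow random SSD writes) can be sketched in-memory. The slice table standing in for the SSD and the buffer limit are illustrative; the real system's on-flash layout and parallelism are not modeled.

```python
class SlicedIndex:
    """Group related index entries into slices so related items come back in
    a single slice read; buffer inserts in memory and write them out in bulk
    when the buffer fills (a sketch of slicing, not the on-SSD layout)."""

    def __init__(self, num_slices: int = 8, buffer_limit: int = 4):
        self.num_slices = num_slices
        self.buffer_limit = buffer_limit
        self.memory = {}                      # in-memory insert buffer
        self.slices = [dict() for _ in range(num_slices)]  # stand-in for SSD

    def _slice_of(self, key) -> int:
        return hash(key) % self.num_slices    # related keys hash to one slice

    def insert(self, key, value):
        self.memory[key] = value
        if len(self.memory) >= self.buffer_limit:
            for k, v in self.memory.items():  # one bulk write per flush
                self.slices[self._slice_of(k)][k] = v
            self.memory.clear()

    def lookup(self, key):
        if key in self.memory:                # check the insert buffer first
            return self.memory[key]
        return self.slices[self._slice_of(key)].get(key)  # one slice read
```

Buffering converts many small random writes into a few bulk writes, which matches the abstract's point about resolving slow random writes on the SSD.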
  • Publication number: 20170094012
    Abstract: Processing a purge request is disclosed. The purge request is received. Based on the purge request, a purge instruction is generated for each content distribution node of a group of one or more content distribution nodes. Each content distribution node of the group is verified to have either completed processing the purge instruction or is determined to be unavailable. Despite at least one content distribution node of the group, determined to be unavailable, having not completed processing the purge instruction, an indication that the purge request has been completed is authorized.
    Type: Application
    Filed: December 9, 2016
    Publication date: March 30, 2017
    Inventors: Ashok Anand, Manjunath Bharadwaj Subramanya