Patents by Inventor Nipun Agarwal

Nipun Agarwal has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 10469822
    Abstract: Techniques described herein provide methods and systems for scalable distribution of computer vision workloads. In an embodiment, a method comprises receiving, at each of a first node and a second node of a distributed system of nodes, two images. The first image comprises a first set of pixels and the second image comprises a second set of pixels. The method further comprises shifting, at the first node, each pixel of the first set of pixels of the first image in a uniform direction by a first number of pixels to form a first shifted image and shifting, at the second node, each pixel of the first set of pixels of the first image in the uniform direction by a second number of pixels to form a second shifted image. The second number of pixels is different from the first number of pixels.
    Type: Grant
    Filed: March 28, 2017
    Date of Patent: November 5, 2019
    Assignee: Oracle International Corporation
    Inventors: Venkatanathan Varadarajan, Arun Raghavan, Sam Idicula, Nipun Agarwal
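    A minimal Python sketch of the uniform pixel shift described above, with each node applying a different offset to the same first image; the NumPy helper, the rightward direction, and the per-node offsets are illustrative assumptions, not the patented implementation.
    ```python
    import numpy as np

    def shift_image(image: np.ndarray, num_pixels: int) -> np.ndarray:
        """Shift every pixel of `image` in one uniform direction (here: right)
        by `num_pixels`, filling the vacated columns with zeros."""
        if num_pixels == 0:
            return image.copy()
        shifted = np.zeros_like(image)
        shifted[:, num_pixels:] = image[:, :-num_pixels]
        return shifted

    # Hypothetical setup: two nodes shift the same first image by different amounts,
    # e.g. so each can compare it against the second image at a different disparity.
    first_image = np.arange(25, dtype=np.uint8).reshape(5, 5)
    node_offsets = {"node_1": 1, "node_2": 2}          # illustrative shift amounts
    shifted = {node: shift_image(first_image, n) for node, n in node_offsets.items()}
    ```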
  • Patent number: 10452744
    Abstract: Techniques related to memory management for sparse matrix multiplication are disclosed. Computing device(s) may perform a method for multiplying a row of a first sparse matrix with a second sparse matrix to generate a product matrix row. A compressed representation of the second sparse matrix is stored in main memory. The compressed representation comprises a values array that stores non-zero value(s). Tile(s) corresponding to row(s) of the second sparse matrix are loaded into scratchpad memory. The tile(s) comprise set(s) of non-zero value(s) of the values array. A particular partition of an uncompressed representation of the product matrix row is generated in the scratchpad memory. The particular partition corresponds to a partition of the second sparse matrix comprising non-zero value(s) included in the tile(s). When a particular tile is determined to comprise non-zero value(s) that are required to generate the particular partition, the particular tile is loaded into the scratchpad memory.
    Type: Grant
    Filed: March 27, 2017
    Date of Patent: October 22, 2019
    Assignee: Oracle International Corporation
    Inventors: Sandeep R. Agrawal, Sam Idicula, Nipun Agarwal
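    A minimal sketch of multiplying one sparse-matrix row by a CSR-compressed second matrix while touching only the rows of the values array that are actually required, standing in for the tile loads described above; SciPy's CSR layout and the accumulation scheme are assumptions, not the patented memory manager.
    ```python
    import numpy as np
    from scipy.sparse import csr_matrix, random as sparse_random

    def row_times_csr(row: csr_matrix, B: csr_matrix) -> np.ndarray:
        """Multiply one sparse row by sparse matrix B, touching only the rows
        (standing in for 'tiles') of B whose values are actually required."""
        out = np.zeros(B.shape[1])                 # uncompressed product-row partition
        for idx, a_val in zip(row.indices, row.data):
            start, end = B.indptr[idx], B.indptr[idx + 1]   # "load" only this tile
            out[B.indices[start:end]] += a_val * B.data[start:end]
        return out

    A = sparse_random(4, 6, density=0.3, format="csr", random_state=0)
    B = sparse_random(6, 5, density=0.3, format="csr", random_state=1)
    assert np.allclose(row_times_csr(A.getrow(0), B), (A.getrow(0) @ B).toarray().ravel())
    ```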
  • Publication number: 20190303482
    Abstract: Techniques are described for building and probing a hash table where the size of an input partition is larger than the cache size of a receiving processor. A processor receives a payload array and generates a hash table in cache that includes a hash bucket array. Each hash bucket element contains an identifier that defines a location of a build key array element in the payload array. For a particular build key array element, the processor determines a hash bucket element that corresponds to the payload array. The processor copies the identifier for the particular build key array element into the hash bucket element. If the cache is unable to insert additional build key array elements into the hash table in the cache, then the processor generates a second hash table for the remaining build key array elements in local volatile memory.
    Type: Application
    Filed: April 3, 2018
    Publication date: October 3, 2019
    Inventors: Cagri Balkesen, Nitin Kunal, Nipun Agarwal
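    A toy sketch of the build phase described above: buckets store identifiers (positions in the payload/build-key array) rather than keys, and once a hypothetical cache-resident table is full, the remaining build keys spill into a second table. The capacity, bucket count, and spill policy are illustrative assumptions.
    ```python
    def build_hash_tables(build_keys, cache_capacity):
        """Insert identifiers (not keys) into bucket lists; once the 'cache-resident'
        table holds cache_capacity entries, remaining keys go to a second table."""
        cache_table, overflow_table, in_cache = {}, {}, 0
        for i, key in enumerate(build_keys):            # i identifies a payload-array element
            target = cache_table if in_cache < cache_capacity else overflow_table
            target.setdefault(hash(key) % 8, []).append(i)
            if target is cache_table:
                in_cache += 1
        return cache_table, overflow_table

    cache_tbl, spill_tbl = build_hash_tables(["a", "b", "c", "d", "e"], cache_capacity=3)
    ```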
  • Patent number: 10417128
    Abstract: Techniques are described for memory coherence in a multi-core system with a heterogeneous memory architecture comprising one or more hardware-managed caches and one or more software-managed caches. According to one embodiment, a set of one or more buffers are allocated in memory, and each respective buffer is associated with a respective metadata tag. The metadata tag may be used to store metadata that identifies a state associated with the respective buffer. The multi-core system may enforce coherence for the one or more hardware-managed caches and the one or more software-managed caches based on the metadata stored in the metadata tag for each respective buffer in the set of one or more buffers. The multi-core system may read the metadata to determine whether a particular buffer is in a hardware-managed or a software-managed cacheable state. Based on the current state of the particular buffer, the multi-core system may perform coherence operations.
    Type: Grant
    Filed: May 6, 2015
    Date of Patent: September 17, 2019
    Assignee: Oracle International Corporation
    Inventors: Andrea Di Blas, Aarti Basant, Arun Raghavan, Nipun Agarwal
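    A toy model of the per-buffer metadata tag described above, with a state field deciding which coherence action (left here as placeholder comments) runs before a buffer moves between hardware-managed and software-managed cacheable states; the state names and transitions are illustrative, not the patented protocol.
    ```python
    from enum import Enum, auto

    class BufferState(Enum):
        HW_CACHEABLE = auto()     # buffer may live in a hardware-managed cache
        SW_CACHEABLE = auto()     # buffer may live in a software-managed cache
        UNCACHED = auto()

    class TaggedBuffer:
        """A buffer paired with a metadata tag recording its coherence state."""
        def __init__(self, data: bytearray):
            self.data = data
            self.state = BufferState.UNCACHED

        def prepare_for(self, new_state: BufferState) -> None:
            # Illustrative coherence actions performed before switching cache domains.
            if self.state is BufferState.HW_CACHEABLE and new_state is BufferState.SW_CACHEABLE:
                pass  # e.g. write back hardware cache lines here
            elif self.state is BufferState.SW_CACHEABLE and new_state is BufferState.HW_CACHEABLE:
                pass  # e.g. flush the software-managed cache copy here
            self.state = new_state

    buf = TaggedBuffer(bytearray(64))
    buf.prepare_for(BufferState.HW_CACHEABLE)
    ```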
  • Publication number: 20190266635
    Abstract: The present disclosure relates to methods, systems, and apparatuses for providing promotion recommendations using a promotion and marketing service. Some aspects may provide a method for providing a promotion recommendation framework. The method includes receiving, via a network interface, a promotion recommendation inquiry from a component of a promotion and marketing service, the promotion recommendation inquiry including electronic identification data identifying at least one of a consumer or a consumer characteristic. The method also includes identifying, via processing circuitry, promotion transaction information associated with the electronic identification data. The promotion transaction information includes electronic data identifying at least one transaction performed using the promotion and marketing service.
    Type: Application
    Filed: November 13, 2018
    Publication date: August 29, 2019
    Inventors: Nipun Agarwal, Rajesh Girish Parekh, Ying Chen
  • Patent number: 10397317
    Abstract: Embodiments comprise a distributed join processing technique that reduces the data exchanged over the network. Embodiments first evaluate the join using a partitioned parallel join based on join tuples that represent the rows that are to be joined to produce join result tuples that represent matches between rows for the join result. Embodiments fetch, over the network, projected columns from the appropriate partitions of the tables among the nodes of the system using the record identifiers from the join result tuples. To further conserve network bandwidth, embodiments perform an additional record-identifier shuffling phase based on the respective sizes of the projected columns from the relations involved in the join operation. Specifically, the result tuples are shuffled such that transmitting projected columns from the join relation with the larger payload is avoided and the system need only exchange, over the network, projected columns from the join relation with the smaller payload.
    Type: Grant
    Filed: September 29, 2017
    Date of Patent: August 27, 2019
    Assignee: Oracle International Corporation
    Inventors: Cagri Balkesen, Sam Idicula, Nipun Agarwal
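    A minimal sketch of the payload-size comparison described above: after the partitioned parallel join produces record-identifier pairs, only the projected columns of the relation with the smaller payload are exchanged over the network. The tuple layout and byte estimates are illustrative assumptions.
    ```python
    def choose_fetch_side(left_payload_bytes: int, right_payload_bytes: int) -> str:
        """Shuffle join-result record identifiers so that only the projected columns
        of the relation with the SMALLER payload are sent over the network."""
        return "left" if left_payload_bytes <= right_payload_bytes else "right"

    # Hypothetical join result: (left_rid, right_rid) pairs produced by the
    # partitioned parallel join; only record identifiers, no payload columns yet.
    join_result = [(10, 3), (11, 7), (12, 3)]
    fetch_side = choose_fetch_side(left_payload_bytes=4096, right_payload_bytes=64)
    rids_to_fetch = sorted({l if fetch_side == "left" else r for l, r in join_result})
    ```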
  • Publication number: 20190251194
    Abstract: Techniques related to code dictionary generation based on non-blocking operations are disclosed. In some embodiments, a column of tokens includes a first token and a second token that are stored in separate rows. The column of tokens is correlated with a set of row identifiers including a first row identifier and a second row identifier that is different from the first row identifier. Correlating the column of tokens with the set of row identifiers involves: storing a correlation between the first token and the first row identifier, storing a correlation between the second token and the second row identifier if the first token and the second token have different values, and storing a correlation between the second token and the first row identifier if the first token and the second token have identical values. After correlating the column of tokens with the set of row identifiers, duplicate correlations are removed.
    Type: Application
    Filed: February 15, 2018
    Publication date: August 15, 2019
    Inventors: Pit Fender, Felix Schmidt, Benjamin Schlegel, Matthias Brantner, Nipun Agarwal
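    A minimal sketch of the non-blocking correlation step described above, assuming the token column is ordered so that equal tokens are adjacent: a token equal to its predecessor reuses the predecessor's row identifier, and duplicate correlations are removed afterwards. The helper name and the set-based deduplication are illustrative.
    ```python
    def correlate_tokens(sorted_tokens):
        """Pair each token with a row identifier: a token equal to its predecessor
        reuses the predecessor's identifier, otherwise it keeps its own position.
        Works element-by-element, so it never blocks on the full column."""
        correlations = []
        for rid, token in enumerate(sorted_tokens):
            if rid > 0 and token == sorted_tokens[rid - 1]:
                correlations.append((token, correlations[rid - 1][1]))  # reuse predecessor's rid
            else:
                correlations.append((token, rid))
        # After the non-blocking pass, remove duplicate correlations to get the dictionary.
        return sorted(set(correlations))

    print(correlate_tokens(["ant", "ant", "bee", "cat", "cat"]))
    # [('ant', 0), ('bee', 2), ('cat', 3)]
    ```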
  • Publication number: 20190244139
    Abstract: Techniques are provided herein for optimal initialization of value ranges of machine learning algorithm hyperparameters and other predictions based on dataset meta-features. In an embodiment, for each particular hyperparameter of a machine learning algorithm, a computer invokes, based on an inference dataset, a distinct trained metamodel for the particular hyperparameter to detect an improved subrange of possible values for the particular hyperparameter. The machine learning algorithm is configured based on the improved subranges of possible values for the hyperparameters. The machine learning algorithm is invoked to obtain a result. In an embodiment, a gradient-based search space reduction (GSSR) finds an optimal value within the improved subrange of values for the particular hyperparameter. In an embodiment, the metamodel is trained based on performance data from exploratory sampling of configuration hyperspace, such as by GSSR.
    Type: Application
    Filed: March 7, 2018
    Publication date: August 8, 2019
    Inventors: Venkatanathan Varadarajan, Sandeep Agrawal, Sam Idicula, Nipun Agarwal
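    A hedged sketch of using a per-hyperparameter metamodel to narrow a hyperparameter's value range from dataset meta-features; the gradient-boosted regressor, the synthetic training data, and the subrange width are placeholders, not the patented metamodel or GSSR procedure.
    ```python
    import numpy as np
    from sklearn.ensemble import GradientBoostingRegressor

    # Hypothetical metamodel: trained offline to predict a good subrange center
    # for one hyperparameter from dataset meta-features (three here, for brevity).
    rng = np.random.default_rng(0)
    meta_X = rng.random((200, 3))
    meta_y = 0.1 + 0.8 * meta_X[:, 0]                 # synthetic training target
    metamodel = GradientBoostingRegressor().fit(meta_X, meta_y)

    def improved_subrange(meta_features, full_range=(1e-4, 1.0), width=0.25):
        """Predict a narrowed subrange of a hyperparameter from dataset meta-features."""
        center = float(metamodel.predict([meta_features])[0])
        lo, hi = full_range
        half = (hi - lo) * width / 2
        return max(lo, center - half), min(hi, center + half)

    print(improved_subrange([0.5, 0.2, 0.1]))
    ```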
  • Patent number: 10366124
    Abstract: Techniques are described herein for grouping of operations in local memory of a processing unit. The techniques involve adding a first operation for a first leaf operator of a query execution plan to a first pipelined group. The query execution plan includes a set of leaf operators and a set of non-leaf operators. Each leaf operator of the set of leaf operators has a respective parent non-leaf operator and each non-leaf operator has one or more child operators from among the set of leaf operators or others of the set of non-leaf operators. The techniques further involve determining a memory requirement of executing the first operation for the first leaf operator and executing a second operation for the respective parent non-leaf operator of the first leaf operator. The output of the first operation is input to the second operation. The techniques further involve determining whether the memory requirement is satisfied by an amount of local memory.
    Type: Grant
    Filed: June 7, 2017
    Date of Patent: July 30, 2019
    Assignee: Oracle International Corporation
    Inventors: Jian Wen, Sam Idicula, Nitin Kunal, Negar Koochakzadeh, Seema Sundara, Thomas Chang, Aarti Basant, Nipun Agarwal, Farhan Tauheed
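    A minimal sketch of the memory check described above for fusing a leaf operation and its parent operation into one pipelined group; the operator names and byte estimates are hypothetical.
    ```python
    def can_pipeline(leaf_mem_bytes: int, parent_mem_bytes: int, local_mem_bytes: int) -> bool:
        """A leaf operation and its parent operation can run in one pipelined group
        only if their combined working set fits in the processing unit's local memory."""
        return leaf_mem_bytes + parent_mem_bytes <= local_mem_bytes

    # Illustrative plan fragment: (operator, estimated working set in bytes).
    leaf = ("scan_sales", 48_000)
    parent = ("hash_group_by", 96_000)
    local_memory = 256_000
    group = [leaf[0]]
    if can_pipeline(leaf[1], parent[1], local_memory):
        group.append(parent[0])       # fuse the parent into the same pipelined group
    print(group)
    ```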
  • Publication number: 20190205446
    Abstract: Techniques related to distributed relational dictionaries are disclosed. In some embodiments, one or more non-transitory storage media store a sequence of instructions which, when executed by one or more computing devices, cause performance of a method. The method involves generating, by a query optimizer at a distributed database system (DDS), a query execution plan (QEP) for generating a code dictionary and a column of encoded database data. The QEP specifies a sequence of operations for generating the code dictionary. The code dictionary is a database table. The method further involves receiving, at the DDS, a column of unencoded database data from a data source that is external to the DDS. The DDS generates the code dictionary according to the QEP. Furthermore, based on joining the column of unencoded database data with the code dictionary, the DDS generates the column of encoded database data according to the QEP.
    Type: Application
    Filed: January 3, 2018
    Publication date: July 4, 2019
    Inventors: Anantha Kiran Kandukuri, Seema Sundara, Sam Idicula, Pit Fender, Nitin Kunal, Sabina Petride, Georgios Giannikis, Nipun Agarwal
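    A small sketch, using pandas as a stand-in for the distributed database system's operators, of generating a code dictionary as an ordinary table and then encoding the unencoded column by joining it with that dictionary; the column values and code assignment are illustrative.
    ```python
    import pandas as pd

    # Unencoded column arriving from an external data source (illustrative values).
    unencoded = pd.DataFrame({"city": ["Oslo", "Lima", "Oslo", "Pune", "Lima"]})

    # Step 1 of the sketched plan: build the code dictionary as an ordinary table.
    dictionary = (unencoded["city"].drop_duplicates().sort_values()
                  .reset_index(drop=True).to_frame())
    dictionary["code"] = dictionary.index

    # Step 2: encode by joining the unencoded column with the dictionary table.
    encoded = unencoded.merge(dictionary, on="city")[["code"]]
    print(dictionary, encoded, sep="\n\n")
    ```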
  • Publication number: 20190197159
    Abstract: Techniques are described for generation of an efficient hash table for probing during join operations. A node stores a partition and generates a hash table that includes a hash bucket array and a link array, where the link array is index aligned to the partition. Each hash bucket element contains an offset that defines a location of a build key array element in the partition and a link array element in the link array. For a particular build key array element, the node determines a hash bucket element that corresponds to the build key array. If the hash bucket element contains an existing offset, the existing offset is copied to the link array element that corresponds to the offset of the particular build key array element and the offset for the particular build key array element is copied into the hash bucket element. When probing, the offset in a hash bucket element is used to locate a build key array element and other offsets stored in the link array for additional build key array elements.
    Type: Application
    Filed: December 22, 2017
    Publication date: June 27, 2019
    Inventors: Cagri Balkesen, Nipun Agarwal
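    A minimal sketch of the bucket array plus index-aligned link array described above: each bucket holds the offset of the most recently inserted build key, the link array chains earlier offsets together, and probing walks the chain. The bucket count and keys are illustrative.
    ```python
    def build(build_keys, num_buckets=4):
        """buckets[b] holds the offset of the most recent key hashing to b (or -1);
        links is index-aligned to build_keys and chains earlier offsets together."""
        buckets = [-1] * num_buckets
        links = [-1] * len(build_keys)
        for offset, key in enumerate(build_keys):
            b = hash(key) % num_buckets
            links[offset] = buckets[b]   # remember the previous head of the chain
            buckets[b] = offset          # the new head is this build key's offset
        return buckets, links

    def probe(key, build_keys, buckets, links):
        """Follow offsets from the bucket through the link array to find all matches."""
        matches, offset = [], buckets[hash(key) % len(buckets)]
        while offset != -1:
            if build_keys[offset] == key:
                matches.append(offset)
            offset = links[offset]
        return matches

    keys = [7, 11, 7, 3]
    buckets, links = build(keys)
    print(probe(7, keys, buckets, links))   # offsets of both occurrences of 7
    ```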
  • Publication number: 20190188205
    Abstract: A system and method for processing a group and aggregate query on a relation are disclosed. A database system determines whether assistance of a heterogeneous system (HS) of compute nodes is beneficial in performing the query. Assuming that the relation has been partitioned and loaded into the HS, the database system determines, in a compile phase, whether the HS has the functional capabilities to assist, and whether the cost and benefit favor performing the operation with the assistance of the HS. If the cost and benefit favor using the assistance of the HS, then the system enters the execution phase. The database system starts, in the execution phase, an optimal number of parallel processes to produce and consume the results from the compute nodes of the HS. After any needed transaction consistency checks, the results of the query are returned by the database system.
    Type: Application
    Filed: February 11, 2019
    Publication date: June 20, 2019
    Inventors: Sabina Petride, Sam Idicula, Nipun Agarwal
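    A hedged sketch of the compile-phase decision described above: offload the group-and-aggregate operation only if the heterogeneous system supports it and its estimated cost is lower; the capability set and cost figures are placeholders.
    ```python
    def should_offload(op: str, hs_capabilities: set,
                       db_cost_ms: float, hs_cost_ms: float) -> bool:
        """Offload a group-and-aggregate operation only if the heterogeneous system
        can run it and the estimated cost (including any transfer) is lower."""
        return op in hs_capabilities and hs_cost_ms < db_cost_ms

    # Hypothetical compile-phase estimates for one query.
    print(should_offload("group_by_sum", {"group_by_sum", "filter"},
                         db_cost_ms=120.0, hs_cost_ms=45.0))
    ```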
  • Publication number: 20190155925
    Abstract: Techniques related to a sparse dictionary tree are disclosed. In some embodiments, computing device(s) execute instructions, which are stored on non-transitory storage media, for performing a method. The method comprises storing an encoding dictionary as a token-ordered tree comprising a first node and a second node, which are adjacent nodes. The token-ordered tree maps ordered tokens to ordered codes. The ordered tokens include a first token and a second token. The ordered codes include a first code and a second code, which are non-consecutive codes. The first node maps the first token to the first code. The second node maps the second token to the second code. The encoding dictionary is updated based on inserting a third node between the first node and the second node. The third node maps a third token to a third code that is greater than the first code and less than the second code.
    Type: Application
    Filed: November 21, 2017
    Publication date: May 23, 2019
    Inventors: Georgios Giannikis, Seema Sundara, Sabina Petride, Nipun Agarwal
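    A minimal sketch of a token-ordered dictionary that assigns non-consecutive codes, leaving gaps so a new token can take a code strictly between its neighbours without re-encoding existing data; the list-based stand-in for the tree and the gap size are assumptions, not the patented tree structure.
    ```python
    import bisect

    class SparseDictionary:
        """Order-preserving dictionary that leaves gaps between codes so that a new
        token can receive a code between its neighbours without re-encoding."""
        GAP = 1 << 16

        def __init__(self, sorted_tokens):
            self.tokens = list(sorted_tokens)
            self.codes = [(i + 1) * self.GAP for i in range(len(self.tokens))]

        def insert(self, token):
            pos = bisect.bisect_left(self.tokens, token)
            lo = self.codes[pos - 1] if pos > 0 else 0
            hi = self.codes[pos] if pos < len(self.codes) else lo + 2 * self.GAP
            code = (lo + hi) // 2                      # strictly between the neighbours
            self.tokens.insert(pos, token)
            self.codes.insert(pos, code)
            return code

    d = SparseDictionary(["apple", "cherry"])
    print(d.insert("banana"), dict(zip(d.tokens, d.codes)))
    ```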
  • Publication number: 20190155930
    Abstract: Techniques related to relational dictionaries are disclosed. In some embodiments, one or more non-transitory storage media store a sequence of instructions which, when executed by one or more computing devices, cause performance of a method. The method involves storing a code dictionary comprising a set of tuples. The code dictionary is a database table defined by a database dictionary and comprises columns that are each defined by the database dictionary. The set of tuples maps a set of codes to a set of tokens. The set of tokens are stored in a column of unencoded database data. The method further involves generating encoded database data based on joining the unencoded database data with the set of tuples. Furthermore, the method involves generating decoding database data based on joining the encoded database data with the set of tuples.
    Type: Application
    Filed: November 21, 2017
    Publication date: May 23, 2019
    Inventors: Pit Fender, Seema Sundara, Benjamin Schlegel, Nipun Agarwal
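    A short sketch, with pandas standing in for the relational engine, of encoding and decoding purely by joining a column with the code dictionary's tuples; the dictionary contents are illustrative.
    ```python
    import pandas as pd

    # Code dictionary stored as an ordinary database table (illustrative contents).
    tuples = pd.DataFrame({"code": [0, 1, 2], "token": ["Lima", "Oslo", "Pune"]})
    unencoded = pd.DataFrame({"token": ["Oslo", "Lima", "Oslo"]})

    # Encode: join the unencoded tokens with the dictionary tuples on the token column.
    encoded = unencoded.merge(tuples, on="token")[["code"]]

    # Decode: join the encoded column back with the same tuples on the code column.
    decoded = encoded.merge(tuples, on="code")[["token"]]
    print(encoded["code"].tolist(), decoded["token"].tolist())
    ```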
  • Publication number: 20190121891
    Abstract: Techniques are described herein for computing columnar information during join enumeration in a database system. The computation occurs in two phases: the first is a pre-computation phase that runs only once per query block to initialize and prepare a set of data structures. The second is an incremental phase that takes place for every query sub-plan. Upon completion of the second phase, the generated projected attributes of a query sub-plan are recorded as the columnar information associated with the query sub-plan and used to compute the query execution cost. Subsequently, based on the computed query execution cost, the query sub-plan may be executed as part of the query execution plan.
    Type: Application
    Filed: October 24, 2017
    Publication date: April 25, 2019
    Inventors: Pit Fender, Benjamin Schlegel, Nipun Agarwal
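    A hedged sketch of the two phases described above: a once-per-query-block table of the columns each base relation must deliver, and an incremental union that derives a sub-plan's columnar information from its inputs; the query, column names, and toy cost metric are assumptions.
    ```python
    # Phase 1 (once per query block): columns each base table must deliver, derived
    # here from an illustrative query's projections and join predicates.
    needed_columns = {
        "orders":    {"o_id", "o_custkey", "o_total"},
        "customers": {"c_custkey", "c_name"},
    }

    # Phase 2 (per sub-plan): the columnar information of a joined sub-plan is built
    # incrementally from the columnar information of its two inputs.
    def subplan_columns(left_cols: set, right_cols: set) -> set:
        return left_cols | right_cols

    join_cols = subplan_columns(needed_columns["orders"], needed_columns["customers"])
    estimated_width = len(join_cols)          # feeds a toy cost model for the sub-plan
    print(sorted(join_cols), estimated_width)
    ```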
  • Publication number: 20190121893
    Abstract: Techniques are described herein for introducing transcode operators into a generated operator tree during query processing. The transcode operators are set up with the correct encoding type at runtime by inferring the correct encoding-type information at compile time. The inference of the correct encoding-type information occurs in three phases during compile time: the first phase involves collecting, consolidating, and propagating the encoding-type information of input columns up the expression tree. The second phase involves pushing the encoding-type information down the tree for nodes in the expression tree that do not yet have any encoding type assigned. The third phase involves determining which inputs to the current relational operator need to be pre-processed by a transcode operator.
    Type: Application
    Filed: October 24, 2017
    Publication date: April 25, 2019
    Inventors: Pit Fender, Sam Idicula, Nipun Agarwal, Benjamin Schlegel
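    A toy sketch of two of the phases described above: propagating encoding types up an expression tree and deciding when an input must first pass through a transcode operator; the tuple-based tree nodes and encoding labels are illustrative.
    ```python
    # Illustrative expression-tree nodes: (name, encoding or None, children).
    def infer_encoding(node):
        """Phase 1 sketch: propagate child encoding types upward; a node whose
        children use mixed encodings is left undecided (None) for later phases."""
        name, enc, children = node
        child_encs = {infer_encoding(c) for c in children}
        if enc is None and len(child_encs) == 1:
            enc = child_encs.pop()
        return enc

    def needs_transcode(input_enc, required_enc):
        """Phase 3 sketch: an input whose encoding differs from the operator's
        required encoding must first pass through a transcode operator."""
        return input_enc is not None and required_enc is not None and input_enc != required_enc

    col_a = ("col_a", "dict_v1", [])
    col_b = ("col_b", "dict_v2", [])
    concat = ("concat", None, [col_a, col_b])
    print(infer_encoding(concat), needs_transcode("dict_v1", "dict_v2"))
    ```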
  • Patent number: 10263893
    Abstract: Techniques are provided for using decentralized lock synchronization to increase network throughput. In an embodiment, a first computer sends, to a second computer comprising a lock, a request to acquire the lock. In response to receiving the lock acquisition request, the second computer detects whether the lock is available. If the lock is unavailable, then the second computer replies by sending a denial to the first computer. Otherwise, the second computer sends an exclusive grant of the lock to the first computer. While the first computer has acquired the lock, the first computer sends data to the second computer. Afterwards, the first computer sends a request to release the lock to the second computer. This completes one duty cycle of the lock, and the lock is again available for acquisition.
    Type: Grant
    Filed: December 7, 2016
    Date of Patent: April 16, 2019
    Assignee: Oracle International Corporation
    Inventors: Vikas Aggarwal, Ankur Arora, Sam Idicula, Nipun Agarwal
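    A minimal sketch of one lock duty cycle described above, with the network replaced by direct method calls; the class and method names are hypothetical.
    ```python
    class LockHolderNode:
        """Second computer: owns the lock and answers acquire/release requests."""
        def __init__(self):
            self.lock_owner = None

        def try_acquire(self, requester: str) -> bool:
            if self.lock_owner is None:
                self.lock_owner = requester      # exclusive grant
                return True
            return False                         # denial; the requester may retry later

        def release(self, requester: str) -> None:
            if self.lock_owner == requester:
                self.lock_owner = None           # the lock is available again

    # One duty cycle of the lock.
    server = LockHolderNode()
    if server.try_acquire("node_A"):
        payload = b"rows sent while holding the lock"
        server.release("node_A")
    ```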
  • Publication number: 20190104175
    Abstract: Embodiments comprise a distributed join processing technique that reduces the data exchanged over the network. Embodiments first evaluate the join using a partitioned parallel join based on join tuples that represent the rows that are to be joined to produce join result tuples that represent matches between rows for the join result. Embodiments fetch, over the network, projected columns from the appropriate partitions of the tables among the nodes of the system using the record identifiers from the join result tuples. To further conserve network bandwidth, embodiments perform an additional record-identifier shuffling phase based on the respective sizes of the projected columns from the relations involved in the join operation. Specifically, the result tuples are shuffled such that transmitting projected columns from the join relation with the larger payload is avoided and the system need only exchange, over the network, projected columns from the join relation with the smaller payload.
    Type: Application
    Filed: September 29, 2017
    Publication date: April 4, 2019
    Inventors: Cagri Balkesen, Sam Idicula, Nipun Agarwal
  • Publication number: 20190095399
    Abstract: Techniques are described herein for performing efficient matrix multiplication in architectures with scratchpad memories or associative caches using asymmetric allocation of space for the different matrices. The system receives a left matrix and a right matrix. In an embodiment, the system allocates, in a scratchpad memory, asymmetric memory space for tiles for each of the two matrices as well as a dot product matrix. The system then performs dot product matrix multiplication involving the tiles of the left and the right matrices, storing the resulting dot product values in the corresponding allocated dot product matrix tiles. Finally, the system writes the stored dot product values from the scratchpad memory into main memory.
    Type: Application
    Filed: September 26, 2017
    Publication date: March 28, 2019
    Inventors: Gaurav Chadha, Sam Idicula, Sandeep Agrawal, Nipun Agarwal
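    A hedged sketch of asymmetric tile sizing: a fixed scratchpad budget (counted in float slots) is split unevenly between the left tile, the right tile, and the dot-product tile, and the resulting tile shapes drive a standard tiled multiplication; the split fractions and the NumPy stand-in for scratchpad loads are assumptions.
    ```python
    import numpy as np

    def tile_shapes(scratchpad_floats, k, frac_left=0.2, frac_right=0.4):
        """Asymmetrically split a scratchpad budget (in float slots) among the left
        tile (rows x k), the right tile (k x cols), and the rows x cols product tile."""
        rows = max(1, int(scratchpad_floats * frac_left) // k)
        cols = max(1, int(scratchpad_floats * frac_right) // k)
        assert rows * k + k * cols + rows * cols <= scratchpad_floats
        return rows, cols

    A, B = np.random.rand(8, 6), np.random.rand(6, 10)
    rows, cols = tile_shapes(scratchpad_floats=128, k=A.shape[1])
    C = np.zeros((A.shape[0], B.shape[1]))
    for i in range(0, A.shape[0], rows):
        for j in range(0, B.shape[1], cols):
            # "Load" the two tiles, compute their dot products, write back to main memory.
            C[i:i+rows, j:j+cols] = A[i:i+rows, :] @ B[:, j:j+cols]
    assert np.allclose(C, A @ B)
    ```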
  • Publication number: 20190095756
    Abstract: Techniques are provided for selection of machine learning algorithms based on performance predictions by trained algorithm-specific regressors. In an embodiment, a computer derives meta-feature values from an inference dataset by, for each meta-feature, deriving a respective meta-feature value from the inference dataset. For each trainable algorithm and each regression meta-model that is respectively associated with the algorithm, a respective score is calculated by invoking the meta-model based on at least one of: a respective subset of meta-feature values, and/or hyperparameter values of a respective subset of hyperparameters of the algorithm. The algorithm(s) are selected based on the respective scores. Based on the inference dataset, the selected algorithm(s) may be invoked to obtain a result. In an embodiment, the trained regressors are distinctly configured artificial neural networks. In an embodiment, the trained regressors are contained within algorithm-specific ensembles.
    Type: Application
    Filed: January 30, 2018
    Publication date: March 28, 2019
    Inventors: Sandeep Agrawal, Sam Idicula, Venkatanathan Varadarajan, Nipun Agarwal
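    A hedged sketch of algorithm selection via per-algorithm regressors over dataset meta-feature values; the meta-features, the small neural-network regressors, and their synthetic training data are placeholders, not the patented meta-models.
    ```python
    import numpy as np
    from sklearn.neural_network import MLPRegressor

    def meta_features(X: np.ndarray) -> list:
        """Derive a few dataset meta-feature values (illustrative choices)."""
        return [X.shape[0], X.shape[1], float(X.mean()), float(X.std())]

    # Hypothetical per-algorithm regressors, pretrained offline on (meta-features -> score).
    rng = np.random.default_rng(0)
    train_mf = rng.random((100, 4))
    regressors = {
        "random_forest": MLPRegressor(max_iter=500).fit(train_mf, train_mf[:, 0]),
        "svm":           MLPRegressor(max_iter=500).fit(train_mf, 1 - train_mf[:, 0]),
    }

    inference_dataset = rng.random((50, 8))
    mf = np.array([meta_features(inference_dataset)])
    scores = {name: float(reg.predict(mf)[0]) for name, reg in regressors.items()}
    best_algorithm = max(scores, key=scores.get)
    print(scores, best_algorithm)
    ```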