Patents by Inventor Shashi Thanikella
Shashi Thanikella has filed for patents to protect the following inventions. This listing includes pending patent applications as well as patents already granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 11151099
Abstract: A data structure management system includes a first database, a second database, and a processing engine. The first database includes a first file with a first term and a corresponding first metadata, and a second file with the first term and a corresponding second metadata. The processing engine extracts the first file and the second file from the first database in a first format. It links the first term with the first metadata from the first file and the second metadata from the second file. It transforms the extracted first file and second file from the first format into a second format while maintaining the link between the first term, the first metadata, and the second metadata. It then exports the transformed first file and second file to a second database in the second format with the link between the first term, the first metadata, and the second metadata intact.
Type: Grant
Filed: May 26, 2020
Date of Patent: October 19, 2021
Assignee: Bank of America Corporation
Inventors: Brad E. Romano, Shashi Thanikella
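The abstract above describes a transform that changes a file's format while keeping a shared term linked to the metadata drawn from each source file. A minimal sketch of that idea, assuming hypothetical record shapes and a CSV-like-to-JSON transform (none of these names come from the patent):

```python
# Hypothetical sketch of the link-preserving transform: two extracted
# files share the term "rate"; moving from the first format (term/metadata
# pairs) to the second format (JSON) keeps the term linked to the metadata
# from both files, because term and metadata travel in one record.
import json

def link_and_transform(file_a, file_b):
    """Link each term to the metadata from both files, then export the
    linked records in a second format (JSON)."""
    linked = {}
    for term, metadata in file_a:
        linked.setdefault(term, []).append(metadata)
    for term, metadata in file_b:
        linked.setdefault(term, []).append(metadata)
    return json.dumps([{"term": t, "metadata": m} for t, m in linked.items()])

first_file = [("rate", {"source": "file1", "type": "decimal"})]
second_file = [("rate", {"source": "file2", "type": "percent"})]
exported = link_and_transform(first_file, second_file)
```

The design point the abstract emphasizes is that the link survives the format change: both files' metadata arrive in the second database attached to the same term.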
-
Patent number: 10985975
Abstract: A parallel processing device includes a parallel processing engine implemented by a processor. The parallel processing engine is configured to execute a shell script for each particular processing job in a queue of processing jobs to run. The shell script is configured to dynamically generate a configuration file for each particular processing job. The configuration file instructs a network of computing systems to run the particular processing job using a particular number of parallel partitions corresponding to a parallel partitions parameter associated with the particular job. The configuration file includes randomized scratch directories for computing nodes within the network of computing systems and a calculated container size for the particular processing job. Each processing job is run on the network of computing systems according to the dynamically-generated configuration file of the particular processing job.
Type: Grant
Filed: April 27, 2020
Date of Patent: April 20, 2021
Assignee: Bank of America Corporation
Inventors: Brad E. Romano, Shashi Thanikella
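The configuration described in this abstract has three per-job ingredients: a partition count taken from the job's parameter, randomized scratch directories per node, and a calculated container size. A sketch under assumed names and an illustrative sizing rule (the patent does not disclose a formula):

```python
# Hypothetical sketch of per-job configuration generation: each job in the
# queue gets its own config with its partition count, randomized scratch
# directories for each computing node, and a calculated container size.
import random

def generate_config(job_name, parallel_partitions, nodes):
    # Randomized scratch directory per node, so concurrent jobs do not
    # collide on the same scratch path.
    scratch_dirs = {
        node: f"/scratch/{job_name}-{random.randint(0, 9999):04d}"
        for node in nodes
    }
    # Illustrative sizing rule only: container grows with partition count.
    container_size_mb = 512 * parallel_partitions
    return {
        "job": job_name,
        "partitions": parallel_partitions,
        "scratch": scratch_dirs,
        "container_size_mb": container_size_mb,
    }

config = generate_config("nightly-etl", parallel_partitions=4,
                         nodes=["node1", "node2"])
```

In the patented flow a shell script would emit such a config for each queued job, and the network of computing systems would run the job according to it.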
-
Patent number: 10795880
Abstract: A system for communication between two or more computer programs is disclosed. The system includes a memory, an interface, and a processor. The memory stores a first file, expected metadata for the first file, and expected metadata for one or more fields in the first file. The interface receives a file from a computer program. The file comprises fields that each comprise information provided by one or more sources. The processor executes a second computer program which extracts a first set of file metadata from the received file, compares the extracted first set of file metadata to the expected metadata, and determines if the extracted first set of file metadata corresponds to the expected metadata. If the extracted first set of file metadata corresponds to the expected metadata for the first file, then the processor performs analogous comparisons at a field level and stores the first file in the memory.
Type: Grant
Filed: October 28, 2019
Date of Patent: October 6, 2020
Assignee: Bank of America Corporation
Inventors: Brad E. Romano, Shashi Thanikella
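The check described here is two-stage: file-level metadata is compared against expected values first, and only on a match does the field-level comparison run and the file get stored. A minimal sketch, assuming hypothetical record shapes (the patent does not specify them):

```python
# Hypothetical sketch of the two-stage validation: reject at the file
# level first, then at the field level; store only when both levels match.
def validate_and_store(received, expected_file_meta, expected_field_meta, store):
    file_meta = received["metadata"]            # extracted file metadata
    if file_meta != expected_file_meta:
        return False                            # mismatch at file level
    for field, meta in received["fields"].items():
        if expected_field_meta.get(field) != meta:
            return False                        # mismatch at field level
    store[received["name"]] = received          # both levels matched
    return True
```

The ordering matters: a cheap file-level comparison screens out malformed files before the per-field comparisons are attempted.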
-
Publication number: 20200285622
Abstract: A data structure management system includes a first database, a second database, and a processing engine. The first database includes a first file with a first term and a corresponding first metadata, and a second file with the first term and a corresponding second metadata. The processing engine extracts the first file and the second file from the first database in a first format. It links the first term with the first metadata from the first file and the second metadata from the second file. It transforms the extracted first file and second file from the first format into a second format while maintaining the link between the first term, the first metadata, and the second metadata. It then exports the transformed first file and second file to a second database in the second format with the link between the first term, the first metadata, and the second metadata intact.
Type: Application
Filed: May 26, 2020
Publication date: September 10, 2020
Inventors: Brad E. Romano, Shashi Thanikella
-
Publication number: 20200259706
Abstract: A parallel processing device includes a parallel processing engine implemented by a processor. The parallel processing engine is configured to execute a shell script for each particular processing job in a queue of processing jobs to run. The shell script is configured to dynamically generate a configuration file for each particular processing job. The configuration file instructs a network of computing systems to run the particular processing job using a particular number of parallel partitions corresponding to a parallel partitions parameter associated with the particular job. The configuration file includes randomized scratch directories for computing nodes within the network of computing systems and a calculated container size for the particular processing job. Each processing job is run on the network of computing systems according to the dynamically-generated configuration file of the particular processing job.
Type: Application
Filed: April 27, 2020
Publication date: August 13, 2020
Inventors: Brad E. Romano, Shashi Thanikella
-
Patent number: 10698869
Abstract: A data structure management system includes a first database, a second database, and a processing engine. The first database includes a first file with a first term and a corresponding first metadata, and a second file with the first term and a corresponding second metadata. The processing engine extracts the first file and the second file from the first database in a first format. It links the first term with the first metadata from the first file and the second metadata from the second file. It transforms the extracted first file and second file from the first format into a second format while maintaining the link between the first term, the first metadata, and the second metadata. It then exports the transformed first file and second file to a second database in the second format with the link between the first term, the first metadata, and the second metadata intact.
Type: Grant
Filed: September 27, 2016
Date of Patent: June 30, 2020
Assignee: Bank of America Corporation
Inventors: Brad E. Romano, Shashi Thanikella
-
Patent number: 10666510
Abstract: A parallel processing device includes a parallel processing engine implemented by a processor. The parallel processing engine is configured to execute a shell script for each particular processing job in a queue of processing jobs to run. The shell script is configured to dynamically generate a configuration file for each particular processing job. The configuration file instructs a network of computing systems to run the particular processing job using a particular number of parallel partitions corresponding to a parallel partitions parameter associated with the particular job. The configuration file includes randomized scratch directories for computing nodes within the network of computing systems and a calculated container size for the particular processing job. Each processing job is run on the network of computing systems according to the dynamically-generated configuration file of the particular processing job.
Type: Grant
Filed: October 30, 2018
Date of Patent: May 26, 2020
Assignee: Bank of America Corporation
Inventors: Brad E. Romano, Shashi Thanikella
-
Publication number: 20200136899
Abstract: A parallel processing device includes a parallel processing engine implemented by a processor. The parallel processing engine is configured to execute a shell script for each particular processing job in a queue of processing jobs to run. The shell script is configured to dynamically generate a configuration file for each particular processing job. The configuration file instructs a network of computing systems to run the particular processing job using a particular number of parallel partitions corresponding to a parallel partitions parameter associated with the particular job. The configuration file includes randomized scratch directories for computing nodes within the network of computing systems and a calculated container size for the particular processing job. Each processing job is run on the network of computing systems according to the dynamically-generated configuration file of the particular processing job.
Type: Application
Filed: October 30, 2018
Publication date: April 30, 2020
Inventors: Brad E. Romano, Shashi Thanikella
-
Publication number: 20200089845
Abstract: A system for communication between two or more computer programs is disclosed. The system includes a memory, an interface, and a processor. The memory stores a first file, expected metadata for the first file, and expected metadata for one or more fields in the first file. The interface receives a file from a computer program. The file comprises fields that each comprise information provided by one or more sources. The processor executes a second computer program which extracts a first set of file metadata from the received file, compares the extracted first set of file metadata to the expected metadata, and determines if the extracted first set of file metadata corresponds to the expected metadata. If the extracted first set of file metadata corresponds to the expected metadata for the first file, then the processor performs analogous comparisons at a field level and stores the first file in the memory.
Type: Application
Filed: October 28, 2019
Publication date: March 19, 2020
Inventors: Brad E. Romano, Shashi Thanikella
-
Patent number: 10481836
Abstract: An adaptive machine learning system for predicting file controls includes a memory, an interface, and a processor. The memory stores a plurality of controls for incoming files and the interface receives a first file and a second file. The first file has a first property and the second file has a second property. The processor determines a type for each of the first property and the second property, wherein the type of each property is related to a first file control. The processor also determines that the first property and the second property each satisfy the first file control. If the value of the first property and the second property are above a first threshold, the processor changes a value of the first control for incoming files.
Type: Grant
Filed: September 27, 2016
Date of Patent: November 19, 2019
Assignee: Bank of America Corporation
Inventors: Brad E. Romano, John R. Sampson, Shashi Thanikella
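The adaptive behavior in this abstract is a conditional update: when two incoming files' property values both satisfy a stored control and both exceed a threshold, the stored control value is changed. A sketch under assumed names and an illustrative update rule (the patent does not disclose one):

```python
# Hypothetical sketch of the adaptive control update: the control changes
# only when both observed property values satisfy the current control AND
# both exceed the first threshold.
def update_control(controls, control_name, first_value, second_value, threshold):
    control = controls[control_name]
    satisfies = first_value >= control and second_value >= control
    if satisfies and first_value > threshold and second_value > threshold:
        # Illustrative rule: move the control toward the observed values.
        controls[control_name] = min(first_value, second_value)
    return controls
```

The effect is that controls for incoming files drift toward what recent well-behaved files actually exhibit, rather than staying fixed at their initial values.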
-
Patent number: 10459911
Abstract: A system for communication between two or more computer programs is disclosed. The system includes a memory, an interface, and a processor. The memory stores a first file, expected metadata for the first file, and expected metadata for one or more fields in the first file. The interface receives a file from a computer program. The file comprises fields that each comprise information provided by one or more sources. The processor executes a second computer program which extracts a first set of file metadata from the received file, compares the extracted first set of file metadata to the expected metadata, and determines if the extracted first set of file metadata corresponds to the expected metadata. If the extracted first set of file metadata corresponds to the expected metadata for the first file, then the processor performs analogous comparisons at a field level and stores the first file in the memory.
Type: Grant
Filed: September 27, 2016
Date of Patent: October 29, 2019
Assignee: Bank of America Corporation
Inventors: Brad E. Romano, Shashi Thanikella
-
Patent number: 10255338
Abstract: A system for file management in data structures is disclosed. The system includes a memory, an extraction engine, an enrichment engine, a portal, and a transmission engine. The memory stores a first database. The extraction engine extracts columns and fields and associates them with extracted terms. The enrichment engine determines an end-to-end dataflow of the data from extracted metadata and loads the associated data into the memory. The enrichment engine performs either a full load, comprising loading all the associated data into the memory, or a delta load, comprising comparing the extracted data with data stored in the memory and loading any data that differs. The enrichment engine also generates a journal recording metadata associated with the full or delta load. A portal displays the end-to-end dataflow of the associated data, and a transmission engine transmits a communication identifying incomplete associated data.
Type: Grant
Filed: September 27, 2016
Date of Patent: April 9, 2019
Assignee: Bank of America Corporation
Inventors: Brad E. Romano, Shashi Thanikella
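The full-versus-delta distinction in this abstract is the core mechanism: a full load copies everything, while a delta load compares the extracted data with what is already stored and loads only the differences, with a journal entry recorded either way. A minimal sketch, assuming dict-shaped data and journal entries (both are illustrative):

```python
# Hypothetical sketch of the full vs. delta load with a journal: delta
# mode loads only keys whose values differ from what memory already holds.
def load(memory, extracted, journal, mode="delta"):
    if mode == "full":
        loaded = dict(extracted)                 # load everything
    else:
        loaded = {k: v for k, v in extracted.items()
                  if memory.get(k) != v}         # load only the differences
    memory.update(loaded)
    # Journal records metadata about what this load actually did.
    journal.append({"mode": mode, "loaded_keys": sorted(loaded)})
    return memory
```

The journal is what lets a portal or auditor reconstruct the end-to-end dataflow afterward: each entry says which load ran and what it touched.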
-
Publication number: 20180089185
Abstract: A data structure management system includes a first database, a second database, and a processing engine. The first database includes a first file with a first term and a corresponding first metadata, and a second file with the first term and a corresponding second metadata. The processing engine extracts the first file and the second file from the first database in a first format. It links the first term with the first metadata from the first file and the second metadata from the second file. It transforms the extracted first file and second file from the first format into a second format while maintaining the link between the first term, the first metadata, and the second metadata. It then exports the transformed first file and second file to a second database in the second format with the link between the first term, the first metadata, and the second metadata intact.
Type: Application
Filed: September 27, 2016
Publication date: March 29, 2018
Inventors: Brad E. Romano, Shashi Thanikella
-
Publication number: 20180089579
Abstract: An adaptive machine learning system for predicting file controls includes a memory, an interface, and a processor. The memory stores a plurality of controls for incoming files and the interface receives a first file and a second file. The first file has a first property and the second file has a second property. The processor determines a type for each of the first property and the second property, wherein the type of each property is related to a first file control. The processor also determines that the first property and the second property each satisfy the first file control. If the value of the first property and the second property are above a first threshold, the processor changes a value of the first control for incoming files.
Type: Application
Filed: September 27, 2016
Publication date: March 29, 2018
Inventors: Brad E. Romano, John R. Sampson, Shashi Thanikella
-
Publication number: 20180089251
Abstract: A system for communication between two or more computer programs is disclosed. The system includes a memory, an interface, and a processor. The memory stores a first file, expected metadata for the first file, and expected metadata for one or more fields in the first file. The interface receives a file from a computer program. The file comprises fields that each comprise information provided by one or more sources. The processor executes a second computer program which extracts a first set of file metadata from the received file, compares the extracted first set of file metadata to the expected metadata, and determines if the extracted first set of file metadata corresponds to the expected metadata. If the extracted first set of file metadata corresponds to the expected metadata for the first file, then the processor performs analogous comparisons at a field level and stores the first file in the memory.
Type: Application
Filed: September 27, 2016
Publication date: March 29, 2018
Inventors: Brad E. Romano, Shashi Thanikella
-
Publication number: 20180089293
Abstract: A system for file management in data structures is disclosed. The system includes a memory, an extraction engine, an enrichment engine, a portal, and a transmission engine. The memory stores a first database. The extraction engine extracts columns and fields and associates them with extracted terms. The enrichment engine determines an end-to-end dataflow of the data from extracted metadata and loads the associated data into the memory. The enrichment engine performs either a full load, comprising loading all the associated data into the memory, or a delta load, comprising comparing the extracted data with data stored in the memory and loading any data that differs. The enrichment engine also generates a journal recording metadata associated with the full or delta load. A portal displays the end-to-end dataflow of the associated data, and a transmission engine transmits a communication identifying incomplete associated data.
Type: Application
Filed: September 27, 2016
Publication date: March 29, 2018
Inventors: Brad E. Romano, Shashi Thanikella