Patents by Inventor Manoj NAMBIAR

Manoj NAMBIAR has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20230412714
    Abstract: This disclosure relates to time-sensitive processing of TCP segments into application layer messages in an FPGA. Certain applications, such as a stock market or a ticket booking system, require time-sensitive ordering of transactions, because the arrival time of a transaction (packet) affects the result: the first packet to reach the application network must be processed first, i.e., the server must be guaranteed to process packets in the order in which they are received. Existing systems do not honor this ordering because of the layered network stack. The disclosure describes the design and implementation of a middleware framework on an FPGA platform that delivers messages to the application in the order in which they arrive. It enables time-sensitive analysis of each message of a TCP segment, using session-based information to re-assemble the messages into a time-sensitive queue (a minimal re-assembly sketch follows this listing).
    Type: Application
    Filed: June 15, 2023
    Publication date: December 21, 2023
    Applicant: Tata Consultancy Services Limited
    Inventors: Dhaval SHAH, Manoj NAMBIAR, Ishtiyaque SHAIKH
  • Patent number: 11748292
    Abstract: Various embodiments disclosed herein provide a method and system for a low-latency FPGA-based inference system, for example for recommendation models. Conventional inference models suffer from high latency and low throughput in decision-making processes. The disclosed method and system exploit parallelism in the processing of XGB models, enabling minimal latency and maximal throughput. Additionally, the disclosed system uses a model that is retrained using only those features the model actually used during training; the remaining features are discarded during retraining. This selected feature set significantly reduces the size of the digital circuit in the hardware implementation, thereby greatly enhancing system performance (a minimal parallel tree-evaluation sketch follows this listing).
    Type: Grant
    Filed: October 1, 2021
    Date of Patent: September 5, 2023
    Assignee: TATA CONSULTANCY SERVICES LIMITED
    Inventors: Piyush Manavar, Manoj Nambiar
  • Patent number: 11736594
    Abstract: A method and system for a low-latency FPGA framework based on reliable UDP and TCP re-assembly middleware is disclosed. The need for low-latency communication in digital systems has increased drastically. The disclosed FPGA framework enables low-latency communication as a hybrid framework that supports both UDP and TCP. TCP provides error checking and is therefore more reliable than UDP, while UDP is faster but unreliable. The disclosed framework combines the advantages of both, using UDP for its speed and switching to TCP when a sequence gap is detected in the UDP stream (a minimal fallback sketch follows this listing). Further, the disclosed system proposes a TCP re-assembly middleware architecture for lower-latency TCP processing, wherein the re-assembly middleware is an independent, modular, plug-and-play component.
    Type: Grant
    Filed: June 16, 2021
    Date of Patent: August 22, 2023
    Assignee: TATA CONSULTANCY SERVICES LIMITED
    Inventors: Dhaval Shah, Sunil Puranik, Manoj Nambiar, Mahesh Damodar Barve, Ishtiyaque Shaikh, Piyush Manavar, Sharyu Vijay Mukhekar
  • Patent number: 11669314
    Abstract: This disclosure generally relates to high-level synthesis (HLS) platforms and, more particularly, to enabling print functionality in HLS platforms. The recent availability of FPGA HLS has been a great success because it offers compilers for FPGAs as an alternative to hardware description languages (HDLs), which require special skills. However, the compilers within HLS design platforms provide limited support for the standard libraries, and features such as print functionality are not supported. The invention discloses techniques to enable print functionality in HLS design platforms based on source-to-source transformations and a stream-combining scheme (a minimal transformation sketch follows this listing). In addition to enabling print functionality, the invention also discloses a formatter technique to receive and format FPGA data into human-interpretable data.
    Type: Grant
    Filed: December 27, 2021
    Date of Patent: June 6, 2023
    Assignee: TATA CONSULTANCY SERVICES LIMITED
    Inventors: Nupur Sumeet, Manoj Nambiar
  • Patent number: 11640542
    Abstract: The disclosure generally relates to system architectures and, more particularly, to a method and system for system architecture recommendation. In existing scenarios, a solution architect often receives minimal detail about requirements and therefore struggles to design a system architecture that matches them. The method and system disclosed herein provide architecture recommendations in response to requirements supplied as input to the system. The system generates an acyclic dependency graph based on parameters and values extracted from the user input, identifies a reference architecture that matches the requirements, and selects components that match the architectural requirements. It further selects technologies while considering their inter-operability and then generates architecture recommendations for the user based on the selected components and technologies (a minimal selection sketch follows this listing).
    Type: Grant
    Filed: March 20, 2019
    Date of Patent: May 2, 2023
    Assignee: Tata Consultancy Services Limited
    Inventors: Shruti Kunde, Chetan Phalak, Rekha Singhal, Manoj Nambiar
  • Publication number: 20230122192
    Abstract: This disclosure relates generally to a method and a system for computing using a field programmable gate array (FPGA) neuromorphic architecture. Implementing energy-efficient Artificial Intelligence (AI) applications in power-constrained environments or devices is challenging due to the large energy consumption of both training and inference. The disclosure is an FPGA-based neuromorphic computing platform whose basic components are a plurality of neurons and memory. The FPGA neuromorphic architecture is parameterized, parallel and modular, enabling improved energy per inference and latency-throughput. Based on the values of the features of the data set, the FPGA neuromorphic architecture is generated in a modular and parallel fashion. The output of the disclosed architecture is the set of output spikes from the neurons, which becomes the basis of inference (a minimal spiking-neuron sketch follows this listing).
    Type: Application
    Filed: March 2, 2022
    Publication date: April 20, 2023
    Applicant: Tata Consultancy Services Limited
    Inventors: Dhaval SHAH, Sounak DEY, Meripe Ajay KUMAR, Manoj NAMBIAR, Arpan PAL
  • Patent number: 11611638
    Abstract: A method and system for a re-assembly middleware in an FPGA for processing TCP segments into application layer messages is disclosed. In recent years, communication speeds in digital systems have increased drastically, bringing a growing need to ensure high performance from FPGA services. The disclosure proposes a re-assembly middleware in the FPGA that processes TCP segments into application layer messages at a pre-defined frequency for high performance. The pre-defined frequency is a high-frequency performance feature of the middleware, with an FPGA implementation frequency of at least 300 MHz achieved through a memory optimization technique. The memory optimization technique includes strategies such as registering outputs and slicing memories (a minimal memory-slicing sketch follows this listing).
    Type: Grant
    Filed: March 22, 2021
    Date of Patent: March 21, 2023
    Assignee: TATA CONSULTANCY SERVICES LIMITED
    Inventors: Dhaval Shah, Sunil Puranik, Manoj Nambiar, Mahesh Damodar Barve, Ishtiyaque Shaikh
  • Publication number: 20220350580
    Abstract: This disclosure generally relates to high-level synthesis (HLS) platforms and, more particularly, to enabling print functionality in HLS platforms. The recent availability of FPGA HLS has been a great success because it offers compilers for FPGAs as an alternative to hardware description languages (HDLs), which require special skills. However, the compilers within HLS design platforms provide limited support for the standard libraries, and features such as print functionality are not supported. The invention discloses techniques to enable print functionality in HLS design platforms based on source-to-source transformations and a stream-combining scheme. In addition to enabling print functionality, the invention also discloses a formatter technique to receive and format FPGA data into human-interpretable data.
    Type: Application
    Filed: December 27, 2021
    Publication date: November 3, 2022
    Applicant: Tata Consultancy Services Limited
    Inventors: Nupur SUMEET, Manoj NAMBIAR
  • Publication number: 20220311839
    Abstract: A method and system for a low-latency FPGA framework based on reliable UDP and TCP re-assembly middleware is disclosed. The need for low-latency communication in digital systems has increased drastically. The disclosed FPGA framework enables low-latency communication as a hybrid framework that supports both UDP and TCP. TCP provides error checking and is therefore more reliable than UDP, while UDP is faster but unreliable. The disclosed framework combines the advantages of both, using UDP for its speed and switching to TCP when a sequence gap is detected in the UDP stream. Further, the disclosed system proposes a TCP re-assembly middleware architecture for lower-latency TCP processing, wherein the re-assembly middleware is an independent, modular, plug-and-play component.
    Type: Application
    Filed: June 16, 2021
    Publication date: September 29, 2022
    Applicant: Tata Consultancy Services Limited
    Inventors: Dhaval SHAH, Sunil PURANIK, Manoj NAMBIAR, Mahesh Damodar BARVE, Ishtiyaque SHAIKH, Piyush MANAVAR, Sharyu Vijay MUKHEKAR
  • Patent number: 11449413
    Abstract: This disclosure relates generally to accelerating the development and deployment of enterprise applications that involve both data-driven and task-driven components in data-driven enterprise information technology (IT) systems. The disclosed system determines which components of the application are task-driven and which are data-driven, using inputs such as the business use case, data sources and requirements specifications. It determines which components may be developed using the task-driven and data-driven paradigms and enables migration of components from the task-driven paradigm to the data-driven paradigm. The system also trains a reinforcement learning (RL) model to facilitate migration of the identified components from the task-driven to the data-driven paradigm, and it integrates the migrated and existing components to accelerate the development and deployment of an integrated IT application.
    Type: Grant
    Filed: June 11, 2021
    Date of Patent: September 20, 2022
    Assignee: TATA CONSULTANCY SERVICES LIMITED
    Inventors: Rekha Singhal, Gautam Shroff, Dheeraj Chahal, Mayank Mishra, Shruti Kunde, Manoj Nambiar
  • Publication number: 20220272178
    Abstract: A method and system for a re-assembly middleware in an FPGA for processing TCP segments into application layer messages is disclosed. In recent years, communication speeds in digital systems have increased drastically, bringing a growing need to ensure high performance from FPGA services. The disclosure proposes a re-assembly middleware in the FPGA that processes TCP segments into application layer messages at a pre-defined frequency for high performance. The pre-defined frequency is a high-frequency performance feature of the middleware, with an FPGA implementation frequency of at least 300 MHz achieved through a memory optimization technique. The memory optimization technique includes strategies such as registering outputs and slicing memories.
    Type: Application
    Filed: March 22, 2021
    Publication date: August 25, 2022
    Applicant: Tata Consultancy Services Limited
    Inventors: Dhaval SHAH, Sunil PURANIK, Manoj NAMBIAR, Mahesh Damodar BARVE, Ishtiyaque SHAIKH
  • Publication number: 20220237141
    Abstract: Various embodiments disclosed herein provide a method and system for a low-latency FPGA-based inference system, for example for recommendation models. Conventional inference models suffer from high latency and low throughput in decision-making processes. The disclosed method and system exploit parallelism in the processing of XGB models, enabling minimal latency and maximal throughput. Additionally, the disclosed system uses a model that is retrained using only those features the model actually used during training; the remaining features are discarded during retraining. This selected feature set significantly reduces the size of the digital circuit in the hardware implementation, thereby greatly enhancing system performance.
    Type: Application
    Filed: October 1, 2021
    Publication date: July 28, 2022
    Applicant: Tata Consultancy Services Limited
    Inventors: Piyush MANAVAR, Manoj NAMBIAR
  • Publication number: 20220092354
    Abstract: This disclosure relates generally to a method and system for generating a labelled dataset using a training data recommender technique. Recommender systems face major challenges in handling dynamic data in machine learning paradigms, which leads to inaccurate unlabeled datasets. The method of the present disclosure is based on a training data recommender technique constructed with a newly defined parameter, the labelled data prediction threshold, which determines the adequate amount of labelled training data required for training one or more machine learning models. The method processes the received unlabeled dataset and labels it based on the labelled data prediction threshold, which is determined using a trained training data recommender (a minimal threshold-driven labelling sketch follows this listing).
    Type: Application
    Filed: September 10, 2021
    Publication date: March 24, 2022
    Applicant: Tata Consultancy Services Limited
    Inventors: Shruti Kunde, Mayank Mishra, Rekha Singhal, Amey Pandit, Manoj Nambiar, Gautam Shroff
  • Publication number: 20220083492
    Abstract: Conventionally, matching engines for processing multi-legged orders were implemented in software and connected through Ethernet, which is very slow in terms of throughput. Such traditional trading systems failed to process orders for tokens held on different machines, and these orders were summarily rejected. The present disclosure provides a multiple-FPGA system optimized for processing and executing multi-legged orders. The system includes a plurality of FPGAs interconnected for communication via the PCIe ports of a multi-port PCIe switch. Each FPGA comprises a net processing layer, a matcher, and a look-up table, and each FPGA is configured to process tokens (e.g., securities). If the orders to be processed are for tokens on the same FPGA where the order is received, the tokens are processed locally; otherwise, the net processing layer of that FPGA routes the order request to the FPGA where the tokens (securities) are located, thereby reducing latency and improving overall throughput (a minimal routing sketch follows this listing).
    Type: Application
    Filed: December 29, 2020
    Publication date: March 17, 2022
    Applicant: Tata Consultancy Services Limited
    Inventors: Mahesh Damodar BARVE, Sunil PURANIK, Swapnil RODI, Manoj NAMBIAR, Dhaval SHAH
  • Patent number: 11263164
    Abstract: Conventionally, matching engines for processing multi-legged orders were implemented in software and connected through Ethernet, which is very slow in terms of throughput. Such traditional trading systems failed to process orders for tokens held on different machines, and these orders were summarily rejected. The present disclosure provides a multiple-FPGA system optimized for processing and executing multi-legged orders. The system includes a plurality of FPGAs interconnected for communication via the PCIe ports of a multi-port PCIe switch. Each FPGA comprises a net processing layer, a matcher, and a look-up table, and each FPGA is configured to process tokens (e.g., securities). If the orders to be processed are for tokens on the same FPGA where the order is received, the tokens are processed locally; otherwise, the net processing layer of that FPGA routes the order request to the FPGA where the tokens (securities) are located, thereby reducing latency and improving overall throughput.
    Type: Grant
    Filed: December 29, 2020
    Date of Patent: March 1, 2022
    Assignee: Tata Consultancy Services Limited
    Inventors: Mahesh Damodar Barve, Sunil Puranik, Swapnil Rodi, Manoj Nambiar, Dhaval Shah
  • Patent number: 11263203
    Abstract: Data processing and storage are an important part of many applications. Conventional data processing and storage systems use either a full array structure or a full linked-list structure for storing data, where the array consumes a large amount of memory and the linked list is slow to process. Conventional systems and methods are therefore not capable of optimizing memory consumption and time efficiency simultaneously. The present disclosure provides an efficient way of storing data by creating an integrated array and linked-list based structure. Data is stored in the integrated structure using a delta-based mechanism, which determines the location in the structure where the data should be stored. The present disclosure incorporates the advantages of both the array and the linked-list structure, resulting in reduced memory consumption and latency (a minimal sketch of such a structure follows this listing).
    Type: Grant
    Filed: October 25, 2019
    Date of Patent: March 1, 2022
    Assignee: Tata Consultancy Services Limited
    Inventors: Mahesh Damodar Barve, Sunil Anant Puranik, Manoj Nambiar, Swapnil Rodi
  • Publication number: 20210390033
    Abstract: This disclosure relates generally to accelerating the development and deployment of enterprise applications that involve both data-driven and task-driven components in data-driven enterprise information technology (IT) systems. The disclosed system determines which components of the application are task-driven and which are data-driven, using inputs such as the business use case, data sources and requirements specifications. It determines which components may be developed using the task-driven and data-driven paradigms and enables migration of components from the task-driven paradigm to the data-driven paradigm. The system also trains a reinforcement learning (RL) model to facilitate migration of the identified components from the task-driven to the data-driven paradigm, and it integrates the migrated and existing components to accelerate the development and deployment of an integrated IT application.
    Type: Application
    Filed: June 11, 2021
    Publication date: December 16, 2021
    Applicant: Tata Consultancy Services Limited
    Inventors: Rekha SINGHAL, Gautam SHROFF, Dheeraj CHAHAL, Mayank MISHRA, Shruti KUNDE, Manoj NAMBIAR
  • Publication number: 20200133942
    Abstract: Data processing and storage are an important part of many applications. Conventional data processing and storage systems use either a full array structure or a full linked-list structure for storing data, where the array consumes a large amount of memory and the linked list is slow to process. Conventional systems and methods are therefore not capable of optimizing memory consumption and time efficiency simultaneously. The present disclosure provides an efficient way of storing data by creating an integrated array and linked-list based structure. Data is stored in the integrated structure using a delta-based mechanism, which determines the location in the structure where the data should be stored. The present disclosure incorporates the advantages of both the array and the linked-list structure, resulting in reduced memory consumption and latency.
    Type: Application
    Filed: October 25, 2019
    Publication date: April 30, 2020
    Applicant: Tata Consultancy Services Limited
    Inventors: Mahesh Damodar BARVE, Sunil Anant PURANIK, Manoj NAMBIAR, Swapnil RODI
  • Publication number: 20190294977
    Abstract: The disclosure generally relates to system architectures and, more particularly, to a method and system for system architecture recommendation. In existing scenarios, a solution architect often receives minimal detail about requirements and therefore struggles to design a system architecture that matches them. The method and system disclosed herein provide architecture recommendations in response to requirements supplied as input to the system. The system generates an acyclic dependency graph based on parameters and values extracted from the user input, identifies a reference architecture that matches the requirements, and selects components that match the architectural requirements. It further selects technologies while considering their inter-operability and then generates architecture recommendations for the user based on the selected components and technologies.
    Type: Application
    Filed: March 20, 2019
    Publication date: September 26, 2019
    Applicant: Tata Consultancy Services Limited
    Inventors: Shruti KUNDE, Chetan PHALAK, Rekha SINGHAL, Manoj NAMBIAR
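
Illustrative sketches

For publication 20230412714 (time-sensitive re-assembly of TCP segments), the block below is a minimal software sketch rather than the patented FPGA middleware: it re-assembles payload bytes into application-layer messages while preserving arrival order. The 4-digit ASCII length prefix and the `Segment` fields are assumptions made only for illustration.

```cpp
#include <cstdint>
#include <deque>
#include <iostream>
#include <string>

// One TCP segment as seen by the middleware: an arrival timestamp and the raw
// payload bytes (assumed fields; a single session is modelled for brevity).
struct Segment {
    uint64_t arrival_ns;
    std::string payload;
};

// Re-assembles the byte stream into messages framed by a 4-digit ASCII length
// prefix, emitting them in the order in which the segments arrived.
class Reassembler {
    std::string buffer_;            // bytes not yet parsed into messages
    std::deque<std::string> out_;   // time-ordered application-layer messages
public:
    void on_segment(const Segment& s) {
        buffer_ += s.payload;       // append in arrival order
        while (buffer_.size() >= 4) {
            size_t len = std::stoul(buffer_.substr(0, 4));
            if (buffer_.size() < 4 + len) break;   // wait for the rest
            out_.push_back(buffer_.substr(4, len));
            buffer_.erase(0, 4 + len);
        }
    }
    const std::deque<std::string>& messages() const { return out_; }
};

int main() {
    Reassembler r;
    r.on_segment({100, "0005HELLO00"});   // one full message plus a partial header
    r.on_segment({200, "06ORDER1"});      // completes the second message
    for (const auto& m : r.messages()) std::cout << m << "\n";   // HELLO, ORDER1
}
```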
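
For patent 11748292 (low-latency FPGA inference for XGB-style models), this is a hedged software analogue, not the disclosed circuit: it evaluates the trees of a tiny boosted ensemble in parallel and reports which features the trained trees actually reference, mirroring the idea that unused features can be dropped to shrink the hardware. The `Node` layout and the hand-built trees are illustrative assumptions.

```cpp
#include <future>
#include <iostream>
#include <set>
#include <vector>

// A node of a tiny regression tree: if feature < threshold go left, else
// right; leaves are marked with feature == -1 and carry the score in `value`.
struct Node {
    int feature = -1;
    double threshold = 0.0;
    double value = 0.0;
    int left = -1, right = -1;
};
using Tree = std::vector<Node>;

double eval_tree(const Tree& t, const std::vector<double>& x) {
    int i = 0;
    while (t[i].feature != -1)
        i = (x[t[i].feature] < t[i].threshold) ? t[i].left : t[i].right;
    return t[i].value;
}

// Features referenced by the ensemble; per the abstract, features the model
// never used can be discarded when retraining, shrinking the digital circuit.
std::set<int> used_features(const std::vector<Tree>& trees) {
    std::set<int> f;
    for (const auto& t : trees)
        for (const auto& n : t)
            if (n.feature != -1) f.insert(n.feature);
    return f;
}

int main() {
    // Two tiny trees over features 0 and 2 (feature 1 is never used).
    Tree t1 = {{0, 0.5, 0, 1, 2}, {-1, 0, 1.0}, {-1, 0, 2.0}};
    Tree t2 = {{2, 1.5, 0, 1, 2}, {-1, 0, 0.5}, {-1, 0, -0.5}};
    std::vector<Tree> trees = {t1, t2};
    std::vector<double> x = {0.3, 9.9, 2.0};

    // Evaluate every tree in parallel (the FPGA does this with one engine per
    // tree); the prediction is the sum of the per-tree scores.
    std::vector<std::future<double>> jobs;
    for (const auto& t : trees)
        jobs.push_back(std::async(std::launch::async, eval_tree,
                                  std::cref(t), std::cref(x)));
    double score = 0.0;
    for (auto& j : jobs) score += j.get();

    std::cout << "prediction: " << score << "\n";
    std::cout << "features used: " << used_features(trees).size()
              << " of " << x.size() << "\n";
}
```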
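
For patent 11736594 (hybrid UDP/TCP framework), a minimal sketch of the fallback idea under stated assumptions: UDP packets are consumed while their sequence numbers stay contiguous, and a missing range is requested over the reliable TCP path when a gap appears. The packet layout and the `recover_over_tcp` callback are hypothetical.

```cpp
#include <cstdint>
#include <functional>
#include <iostream>
#include <string>

// A market-data style packet carrying a monotonically increasing sequence
// number (assumed format, for illustration).
struct Packet {
    uint64_t seq;
    std::string body;
};

// Delivers packets in sequence. UDP packets are accepted while contiguous;
// on a gap, the missing range is fetched via the reliable TCP path.
class HybridReceiver {
    uint64_t expected_ = 1;
    std::function<void(uint64_t, uint64_t)> recover_over_tcp_;
public:
    explicit HybridReceiver(std::function<void(uint64_t, uint64_t)> recover)
        : recover_over_tcp_(std::move(recover)) {}

    void on_udp(const Packet& p) {
        if (p.seq > expected_) {
            // Gap detected: ask TCP for [expected_, p.seq - 1] before using p.
            recover_over_tcp_(expected_, p.seq - 1);
            expected_ = p.seq;
        }
        if (p.seq == expected_) {
            std::cout << "deliver " << p.seq << ": " << p.body << "\n";
            ++expected_;
        } // p.seq < expected_: duplicate, ignore.
    }
};

int main() {
    HybridReceiver rx([](uint64_t from, uint64_t to) {
        std::cout << "TCP retransmit request for " << from << ".." << to << "\n";
    });
    rx.on_udp({1, "tick A"});
    rx.on_udp({3, "tick C"});   // seq 2 missing -> recovered over TCP
}
```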
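
For patent 11669314 (print functionality in HLS), a conceptual sketch assuming a simple scheme in the spirit of the abstract, not the patented one: the source-to-source pass replaces a printf call with writes of a format id and its arguments to a debug stream, and a host-side formatter turns the raw words back into human-readable text.

```cpp
#include <cstdint>
#include <iostream>
#include <queue>

// On-chip side (illustrative): instead of calling printf, the transformed
// kernel pushes a format id followed by its arguments onto a debug stream.
std::queue<uint32_t> debug_stream;

void kernel_transformed(uint32_t a, uint32_t b) {
    uint32_t sum = a + b;
    // Original source:  printf("sum of %u and %u is %u\n", a, b, sum);
    // After the (assumed) source-to-source pass: emit id 0 plus 3 arguments.
    debug_stream.push(0);
    debug_stream.push(a);
    debug_stream.push(b);
    debug_stream.push(sum);
}

// Host-side formatter: reconstructs the text using a table of format strings
// harvested at transformation time (only format id 0 exists in this toy).
void format_debug_stream() {
    while (!debug_stream.empty()) {
        uint32_t id = debug_stream.front(); debug_stream.pop();
        if (id == 0) {
            uint32_t a = debug_stream.front(); debug_stream.pop();
            uint32_t b = debug_stream.front(); debug_stream.pop();
            uint32_t s = debug_stream.front(); debug_stream.pop();
            std::cout << "sum of " << a << " and " << b << " is " << s << "\n";
        }
    }
}

int main() {
    kernel_transformed(2, 40);
    format_debug_stream();   // prints: sum of 2 and 40 is 42
}
```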
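
For patent 11640542 (system architecture recommendation), a greatly simplified sketch: requirements are matched against a reference architecture and candidate technologies are filtered through a toy inter-operability table. The acyclic dependency graph of the disclosure is not reproduced here, and every architecture, component, and technology name is an assumption.

```cpp
#include <iostream>
#include <map>
#include <set>
#include <string>
#include <vector>

// Hypothetical requirement parameters extracted from the user input.
using Requirements = std::map<std::string, std::string>;

// A reference architecture with the requirements it matches and, per
// component, the candidate technologies (all names are illustrative).
struct ReferenceArchitecture {
    std::string name;
    Requirements matches;
    std::map<std::string, std::vector<std::string>> candidates;
};

// Pairs of technologies known to inter-operate (toy inter-operability table).
const std::set<std::pair<std::string, std::string>> kInterop = {
    {"Kafka", "Spark"}, {"Spark", "PostgreSQL"}};

bool interoperable(const std::string& a, const std::string& b) {
    return kInterop.count({a, b}) > 0 || kInterop.count({b, a}) > 0;
}

int main() {
    Requirements req = {{"workload", "streaming"}, {"latency", "low"}};
    ReferenceArchitecture streamingArch{"stream-processing",
        {{"workload", "streaming"}},
        {{"compute", {"Spark", "Flink"}}, {"ingestion", {"Kafka"}}}};

    // 1) Pick the reference architecture whose match criteria are satisfied.
    bool fits = true;
    for (const auto& [k, v] : streamingArch.matches)
        fits = fits && req.count(k) && req[k] == v;
    if (!fits) { std::cout << "no matching reference architecture\n"; return 0; }

    // 2) Walk the components and keep technologies that inter-operate with
    //    what has already been chosen.
    std::vector<std::string> chosen;
    for (const auto& [component, options] : streamingArch.candidates)
        for (const auto& tech : options)
            if (chosen.empty() || interoperable(chosen.back(), tech)) {
                chosen.push_back(tech);
                break;
            }

    std::cout << "recommended (" << streamingArch.name << "):";
    for (const auto& t : chosen) std::cout << " " << t;
    std::cout << "\n";
}
```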
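
For publication 20230122192 (FPGA neuromorphic computing), a minimal sketch of the kind of unit such a fabric replicates many times in parallel: a leaky integrate-and-fire neuron that emits the output spikes on which inference is based. The leak, threshold, and input values are illustrative assumptions.

```cpp
#include <iostream>
#include <vector>

// A leaky integrate-and-fire neuron (illustrative parameters).
struct LIFNeuron {
    double potential = 0.0;
    double leak = 0.9;        // membrane leak per time step
    double threshold = 1.0;   // firing threshold

    // Returns true if the neuron spikes on this time step.
    bool step(double weighted_input) {
        potential = potential * leak + weighted_input;
        if (potential >= threshold) {
            potential = 0.0;  // reset after the spike
            return true;
        }
        return false;
    }
};

int main() {
    LIFNeuron n;
    std::vector<double> inputs = {0.3, 0.4, 0.5, 0.1, 0.9};
    for (size_t t = 0; t < inputs.size(); ++t)
        std::cout << "t=" << t << (n.step(inputs[t]) ? " spike\n" : " -\n");
}
```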
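
For patent 11611638 (300 MHz TCP re-assembly middleware), a conceptual C++ model of the memory optimizations named in the abstract, under assumptions: a large logical memory is "sliced" into smaller banks selected by the low address bits, and the read output is registered and consumed one step later. Actual timing closure happens in RTL; this only illustrates the address decomposition.

```cpp
#include <array>
#include <cstdint>
#include <iostream>

// One large logical memory split into kBanks small banks; read data goes
// through a registered output, modelling the extra pipeline register.
template <size_t kBanks, size_t kDepthPerBank>
class SlicedMemory {
    std::array<std::array<uint32_t, kDepthPerBank>, kBanks> banks_{};
    uint32_t read_data_ = 0;   // registered output
public:
    void write(size_t addr, uint32_t value) {
        banks_[addr % kBanks][addr / kBanks] = value;
    }
    // Cycle 1: issue the read (the result lands in the output register) ...
    void issue_read(size_t addr) {
        read_data_ = banks_[addr % kBanks][addr / kBanks];
    }
    // ... cycle 2: consume the registered value.
    uint32_t read_data() const { return read_data_; }
};

int main() {
    SlicedMemory<4, 256> mem;   // 4 banks x 256 words instead of one 1024-word memory
    mem.write(42, 0xDEADBEEF);
    mem.issue_read(42);
    std::cout << std::hex << mem.read_data() << "\n";   // deadbeef
}
```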
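
For publication 20220092354 (training data recommender), a toy sketch of a threshold-driven loop: more samples are labelled until a stand-in accuracy estimate reaches the labelled data prediction threshold. The accuracy curve, batch size, and threshold value are assumptions; a real system would retrain its models at each step.

```cpp
#include <cmath>
#include <iostream>

// Toy stand-in for "train on n labelled samples and measure validation
// accuracy"; a real system would retrain the ML model here.
double validation_accuracy(int n_labelled) {
    return 1.0 - std::exp(-n_labelled / 400.0);   // diminishing returns
}

int main() {
    const double prediction_threshold = 0.95;   // labelled data prediction threshold (assumed value)
    const int batch = 100;                      // samples labelled per round

    int n_labelled = 0;
    double acc = 0.0;
    // Keep recommending more labelled data until the model is predicted to be
    // accurate enough, i.e. the adequate amount of labelled data is reached.
    while (acc < prediction_threshold) {
        n_labelled += batch;
        acc = validation_accuracy(n_labelled);
        std::cout << n_labelled << " labelled samples -> accuracy " << acc << "\n";
    }
    std::cout << "recommended labelled training set size: " << n_labelled << "\n";
}
```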
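
For publication 20220083492 and patent 11263164 (multi-FPGA matching of multi-legged orders), a minimal sketch of the routing decision: a look-up table maps each token to the FPGA that owns it, legs for local tokens are matched locally, and the rest are forwarded over the PCIe fabric (modelled here as a print). The structures and token names are illustrative assumptions.

```cpp
#include <iostream>
#include <map>
#include <string>
#include <vector>

// A multi-legged order: every leg names a token (security) and a quantity.
struct Leg { std::string token; int qty; };
struct Order { int id; std::vector<Leg> legs; };

// Toy model of the net processing layer: legs for locally hosted tokens are
// matched locally, others are forwarded to the owning FPGA.
class NetProcessingLayer {
    int local_fpga_;
    std::map<std::string, int> token_to_fpga_;   // the look-up table
public:
    NetProcessingLayer(int local, std::map<std::string, int> lut)
        : local_fpga_(local), token_to_fpga_(std::move(lut)) {}

    void route(const Order& o) {
        for (const auto& leg : o.legs) {
            int owner = token_to_fpga_.at(leg.token);
            if (owner == local_fpga_)
                std::cout << "order " << o.id << ": match " << leg.token
                          << " x" << leg.qty << " locally\n";
            else
                std::cout << "order " << o.id << ": forward " << leg.token
                          << " x" << leg.qty << " to FPGA " << owner << " via PCIe\n";
        }
    }
};

int main() {
    NetProcessingLayer fpga0(0, {{"INFY", 0}, {"TCS", 1}});
    fpga0.route({7, {{"INFY", 100}, {"TCS", 50}}});   // one leg local, one forwarded
}
```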
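
For patent 11263203 (integrated array and linked-list storage), a sketch of one plausible reading of the delta-based mechanism, not necessarily the patented one: keys close to a base key are placed in an array slot chosen by their delta for constant-time access, while far-away keys overflow into a linked list to keep memory bounded.

```cpp
#include <cstdint>
#include <iostream>
#include <list>
#include <vector>

// Integrated array + linked-list store keyed by the delta from a base key
// (a plausible illustration of the mechanism, with assumed semantics).
class DeltaStore {
    int64_t base_;
    std::vector<int64_t> near_;                      // array part, indexed by delta
    std::list<std::pair<int64_t, int64_t>> far_;     // linked-list part: (key, value)
public:
    DeltaStore(int64_t base, size_t window)
        : base_(base), near_(window, 0) {}

    void put(int64_t key, int64_t value) {
        int64_t delta = key - base_;
        if (delta >= 0 && delta < static_cast<int64_t>(near_.size()))
            near_[delta] = value;            // constant-time array slot
        else
            far_.emplace_back(key, value);   // rare, far-away keys go to the list
    }

    int64_t get(int64_t key) const {
        int64_t delta = key - base_;
        if (delta >= 0 && delta < static_cast<int64_t>(near_.size()))
            return near_[delta];
        for (const auto& [k, v] : far_)
            if (k == key) return v;
        return 0;
    }
};

int main() {
    DeltaStore book(/*base=*/1000, /*window=*/16);
    book.put(1003, 250);    // lands in the array at delta 3
    book.put(5000, 10);     // far from the base: lands in the linked list
    std::cout << book.get(1003) << " " << book.get(5000) << "\n";   // 250 10
}
```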